If you're a C programmer you probably already know this, and it's nothing new, but a finding flagged by a static analysis tool happened to be interesting, so I'll generalize it a bit and write it up.
```c
#include <stdio.h>
#define VALUE 255
int main(void) {
  char c = VALUE;
  if (c == VALUE) {
    puts("TRUE");
  } else {
    puts("FALSE");
  }
  return 0;
}
```
What will be output when this program is executed?
In my environment (Ubuntu 16.04 LTS, gcc (Ubuntu 5.4.0-6ubuntu1~16.04.9) 5.4.0 20160609), the output is as follows.

```
FALSE
```
It's not intuitive at all.
Whether `char` is signed or unsigned is implementation-defined.
(Unlike `int` and `long`, which are always signed. Confusing...)
Typically it takes values in one of these ranges:

- Signed: [-128, 127]
- Unsigned: [0, 255]
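If you want to check which one your compiler uses, one way (a minimal sketch of mine, not from the original post) is to look at `CHAR_MIN` from `<limits.h>`:

```c
#include <limits.h>
#include <stdio.h>

int main(void) {
  /* CHAR_MIN is 0 if char is unsigned, negative if signed. */
  printf("char is %s\n", CHAR_MIN < 0 ? "signed" : "unsigned");
  printf("range: [%d, %d]\n", CHAR_MIN, CHAR_MAX);
  return 0;
}
```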
It was "signed" in the above environment. Therefore, 255 cannot be expressed.
```c
char c = VALUE;
```
This makes the value of `c` equal to -1 (assuming an 8-bit `char` with two's complement representation).
```c
  if (c == VALUE) {
```
This is the problem. The left side is a `char` and the right side is an `int`. By C's integer promotion rules, `c` is converted to **`int`**, i.e., an `int` with the value -1. The right side is the `int` 255.
Therefore, this conditional expression is false.
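To make the promotion visible, here is a small standalone sketch (mine, not the original program); the outputs in the comments assume a signed 8-bit `char` with two's complement:

```c
#include <stdio.h>
#define VALUE 255

int main(void) {
  char c = VALUE;                         /* stores -1 when char is signed */
  printf("c promoted: %d\n", c);          /* -1: c is promoted to int for printf */
  printf("c == VALUE: %d\n", c == VALUE); /* 0: int -1 != int 255 */
  return 0;
}
```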
Viewed in isolation it's not hard to understand, but if the place where `VALUE` is defined and the place where it is actually used are far apart, it becomes a problem that is quite difficult to track down.
gcc provides the options `-fsigned-char` / `-funsigned-char` to switch this behavior.
If you compile the program above with `-funsigned-char`, you get the result TRUE.
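For example (assuming the program is saved as `char255.c`, a filename I made up):

```
$ gcc char255.c && ./a.out
FALSE
$ gcc -funsigned-char char255.c && ./a.out
TRUE
```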
If you want to handle 8-bit numbers rather than characters or strings, use `int8_t` / `uint8_t` (or at least `signed char` / `unsigned char`), not `char`. Also, especially in C++, defining constants as typed `const` (or `constexpr`) constants rather than as preprocessor macros makes it harder to fall into these extra pitfalls.
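As a sketch of that advice applied to the program above (my rewrite, assuming `<stdint.h>` is available):

```c
#include <stdint.h>
#include <stdio.h>

static const uint8_t VALUE = 255;  /* typed constant instead of a macro */

int main(void) {
  uint8_t c = VALUE;   /* uint8_t is always unsigned: [0, 255] */
  if (c == VALUE) {
    puts("TRUE");      /* always TRUE: both sides promote to int 255 */
  } else {
    puts("FALSE");
  }
  return 0;
}
```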