
Catching Integer Errors with Clang

Peng Li and I at Utah, along with our collaborators Will Dietz and Vikram Adve at UIUC, wrote an integer overflow checker for Clang which has found problems in most C/C++ codes that we have looked at. Do you remember how pervasive memory safety errors were before Valgrind came out? Integer overflows are that way right now.

An exciting recent development is that due to a ton of work done by Will, our checker is now in the Clang trunk. Taking a cue from the excellent address sanitizer, it is called the integer sanitizer. To use it, check out and build the latest Clang (you must also build Compiler-RT) and then compile your code using the -fsanitize=integer or -fsanitize=undefined option. The former will tell you about well-defined but possibly erroneous unsigned overflows, whereas the latter will only tell you about undefined behaviors (including some non-integer-related ones). For more details on these options, see Clang’s documentation for its code generation options. The integer sanitizer is not yet in a released version of Clang, but we expect it to be part of the 3.3 release.

One thing we realized very early on in this work is that integer overflows are surprisingly difficult to understand, particularly when they occur in the middle of complex expressions. For example, see this somewhat undignified interaction between me and the main PHP guy. As a result, we put a lot of work into emitting good error messages. Some examples follow.

The integer sanitizer not only checks for divide by zero, which is kind of boring, but also for INT_MIN / -1 and INT_MIN % -1. Real codes don’t seem to perform these operations when left alone, but see here.

#include <limits.h>

int main (void) {
  return INT_MIN / -1;
}

Result:

$ clang -fsanitize=integer div.c
$ ./a.out 
div.c:4:18: runtime error: division of -2147483648 by -1 cannot be represented in type 'int'
Floating point exception (core dumped)
$ 
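Portable code that cannot tolerate a trap has to rule out both cases before dividing. Here is a minimal sketch of such a guard (checked_div is a hypothetical helper of mine, not something the sanitizer provides):

```c
#include <limits.h>
#include <stdbool.h>

/* Returns true and stores a/b in *result only when the division is
   well defined: b must be nonzero, and INT_MIN / -1 is excluded
   because its mathematical result, INT_MAX + 1, does not fit in int. */
bool checked_div(int a, int b, int *result) {
  if (b == 0)
    return false;
  if (a == INT_MIN && b == -1)
    return false;
  *result = a / b;
  return true;
}
```

The same two conditions also cover INT_MIN % -1, since the remainder operator traps on the identical operand pair.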

Unsigned overflows are well-defined by C/C++ and are often intentional, particularly in bitsy codes like hash functions and crypto. On the other hand, unintentional unsigned overflows can be bugs, and we can detect them if you want:

int main (void) {
  return 0U - 1;
}

Result:

$ clang -fsanitize=integer unsigned.c 
$ ./a.out 
unsigned.c:2:13: runtime error: unsigned integer overflow: 0 - 1 cannot be represented in type 'unsigned int'
$ clang -fsanitize=undefined unsigned.c 
$ ./a.out 
$ 
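As an example of the intentional kind, here is a djb2-style string hash (a sketch of mine, not code from the checker) whose arithmetic is meant to wrap modulo 2^32; -fsanitize=integer would flag the wrapping multiplies and adds even though they are well defined and deliberate:

```c
#include <stdint.h>

/* djb2-style hash: the uint32_t multiply and add are intended to
   wrap modulo 2^32, which is perfectly well defined for unsigned
   types -- this is a false positive for an unsigned overflow check. */
uint32_t djb2_hash(const char *s) {
  uint32_t h = 5381;
  while (*s)
    h = h * 33u + (uint32_t)(unsigned char)*s++;
  return h;
}
```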

Signed integer overflows are undefined by C/C++. Compilers used to provide 2’s complement wraparound for signed overflow, but this is no longer reliable. Therefore, signed overflow should always be avoided. One example commonly seen in real codes is negation of INT_MIN:

#include <limits.h>

int main (void) {
  return -INT_MIN;
}

Result:

$ clang -fsanitize=integer signed.c 
$ ./a.out
signed.c:4:10: runtime error: negation of -2147483648 cannot be represented in type 'int'; cast to an unsigned type to negate this value to itself
$ 
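The diagnostic’s hint can be followed mechanically: do the negation in unsigned arithmetic, where wraparound is well defined. A sketch of mine (note that the final unsigned-to-int conversion is implementation-defined in C99, though every mainstream compiler wraps it):

```c
#include <limits.h>

/* Negate in unsigned arithmetic, where wraparound is well defined;
   -INT_MIN then maps back to INT_MIN on two's complement targets.
   The closing unsigned-to-int conversion is implementation-defined
   in C99, but wraps on all mainstream compilers. */
int wrapping_negate(int x) {
  return (int)(0u - (unsigned)x);
}
```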

Another class of integer error occurs when the right operand to a shift operator is negative or is not less than the bitwidth of the promoted left operand. But these are kind of boring so let’s look at a more arcane kind of shift error: in C99 and later many kinds of signed left shift have undefined behavior, such as this one:

int main (void) {
  return 0xffff << 16;
}

Result:

$ clang -fsanitize=integer shift1.c
$ ./a.out 
shift1.c:2:17: runtime error: left shift of 65535 by 16 places cannot be represented in type 'int'
$ 

In the more recent versions of C/C++, it is not legal to shift a 1 into, out of, or past the sign bit. See 6.5.7.4 of the C99 standard for more details.
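Those rules can be checked explicitly before shifting. Here is a hypothetical guard of mine, following C99 6.5.7.4:

```c
#include <limits.h>
#include <stdbool.h>

/* Left-shifts a by b only when C99 defines the result: b must be in
   [0, width of int), a must be non-negative, and no one bit may be
   shifted into or past the sign bit. */
bool checked_shl(int a, int b, int *result) {
  if (b < 0 || b >= (int)(sizeof(int) * CHAR_BIT))
    return false; /* shift amount out of range */
  if (a < 0)
    return false; /* left-shifting a negative value is undefined */
  if (a > (INT_MAX >> b))
    return false; /* result would not be representable in int */
  *result = a << b;
  return true;
}
```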

Anyway, I think this hits the high points. The slowdown due to integer checking is generally less than 50%. We believe that we can reduce this, but so far have mainly focused on usability and correctness. Will already did some very nice work which marks the trap handling code as cold.

We would appreciate usability feedback from early adopters. On our TODO list are a few things such as:

  • Perhaps dropping unsigned overflows from the set of checks enabled by -fsanitize=integer (these would be enabled by a separate flag).
  • Compiler directives for suppressing integer sanitizer errors where they are not wanted.
  • Redirecting the error stream to a file or to syslog.
  • Porting over a few additional checks from IOC such as detecting lossy truncations and sign conversions.

Let us know if you would find these to be useful.

{ 3 } Comments

  1. Trevor | February 25, 2013 at 12:07 pm

    I can’t wait until clang 3.3 becomes available for my distro so I can incorporate this incredible feature in my workflow.

    I’m certainly not the compiler expert you are, so I have to wonder: is it at all possible that some of these issues might be inadvertently introduced by the compiler itself (I’m specifically thinking of the effects of different optimization levels)?

    In my situation I’m in charge of a 7 year-old project which uses gcc. I would never get approval to move to a newer version of gcc, never mind a switch to a completely different compiler altogether. But if I were able to get my codebase to build under clang 3.3 I would imagine that blatant examples of undefined and over/underflow behaviour would be reported. But is there a chance the actual object files produced by my compile might still contain issues not found due to the use of different compilers and compile options?

  2. regehr | February 25, 2013 at 12:40 pm

    Hi Trevor, your question is an interesting one.

    Basically, in this work we are mostly only concerned with undefined behaviors, which are clearly bad.

    C/C++ also have a lot of unspecified and implementation-defined behaviors that can cause the code to change behavior when you change the compiler. Now as far as I know, GCC and Clang on x86 make identical choices for all of the important implementation-defined behaviors, so this should not make a difference. The same should be true on x86-64. But you could always run into a problem with unspecified behavior such as order of evaluation of side-effecting arguments to a function.

    It should be the case that any difference in program behavior when you switch compilers can be blamed on one of undefined behavior, unspecified behavior, or implementation-defined behavior. However, in reality things might not be so simple if you are using language extensions such as inline assembly, if your code has timing dependencies, or if you encounter a compiler bug.
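    To make the evaluation-order point concrete, here is a small illustration (mine, not part of the original reply):

```c
static int counter = 0;
static int next_value(void) { return ++counter; }

static int first, second;
static void record(int x, int y) { first = x; second = y; }

/* The order in which record's two arguments are evaluated is
   unspecified in C, so a conforming compiler may leave
   (first, second) as either (1, 2) or (2, 1). */
void show_unspecified_order(void) {
  counter = 0;
  record(next_value(), next_value());
}
```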

  3. Ahmed Charles | March 1, 2013 at 10:48 pm

    I think all of the proposed items on the TODO list are important. I’d want logging to a file for processes running on a server, for instance. I’d want more fine-grained control over which defined behaviors are being checked, and I’d also want to disable it for code where the defined behaviors have been checked for correctness (like crypto/hashing/etc).

    And having more types of checks that people don’t expect to happen is always good, I think.

    Thanks for working on this and congratulations. I know it’s been a long time coming.