Stories Behind Papers: Integer Overflow

A couple months ago Jean Yang and Vijay Chidambaram had a Twitter discussion about the stories behind research efforts that you might hear over coffee, but that usually don’t get written up. Vijay started a series of posts containing these. I thought I’d write up a couple of them myself. Alas, none will be particularly dramatic. This seems like a good one to start with.

Around the mid/late 2000s — perhaps starting with Nearly All Binary Searches and Mergesorts are Broken — I got interested in integer overflow bugs. At this point the security aspect of integer bugs in C and C++ was receiving plenty of attention, but I didn’t feel like very many people were looking at the broader issue of logic errors stemming from integer overflows. Even in functional languages with super-serious type systems and a focus on correctness, integer overflow was (and is) an often-neglected issue. These problems are fundamentally difficult to deal with at compile time.

Anyhow, the thing that really got me motivated was the very limited understanding of undefined behavior that seemed to be par for the course in those days. Additionally, most of the existing research tools for detecting or mitigating integer overflows were operating on compiled code, while I believed this problem needed to be attacked near the source level.

By summer 2010 my student Peng Li (now at Baidu USA) had a modified version of Clang that emitted dynamic checks for integer overflow, divide by zero, value-losing typecasts, shifts past bitwidth, and that kind of thing into a compiled C or C++ program. We used this to test open source software and it turned out that basically all programs were executing a constant stream of undefined behavior. I fired off a few dozen bug reports. Since UB wasn’t widely understood at that time, many developers still had the attitude “that is OK since we did it intentionally” or “I am allowed to expect signed overflow in C/C++ to have two’s complement behavior because that is what the hardware does.” See for example the discussions that happened at PostgreSQL and PHP.

In early 2011 I visited Grigore Rosu’s group at UIUC to learn about their awesome new KCC tool. We needed something like this to filter out undefined programs that C-Reduce was creating while making minimal versions of bug-triggering programs from Csmith. During this visit I happened to be able to grab a few minutes with Vikram Adve and learned that he and his student Will Dietz were also working on LLVM-based integer overflow detection, and they also had a working prototype. Yikes! This wasn’t even close to the worst case scenario — which would have been learning about their work once they had a paper accepted somewhere — but it’s never fun to be scooped. Competition in research may occasionally be useful as a forcing function, but I am personally uninterested in competing. If smart people are working on a problem, I would like to leave them to it; they’ll most likely come up with a good solution. There are way too many other fun and important problems to work on for competing on any single problem to be attractive. Anyhow, luckily, Vikram and Will were happy to collaborate, so we started working together. I’m still happy with the resulting paper and I’m confident that it is better than what either of the groups would have produced working on its own.

One of our goals all along was to get integer overflow checks into Clang. This took a while longer and Will did most of the legwork. The job was made easier by the fact that by this time there was plenty of momentum towards dynamic undefined behavior detection in LLVM. For example, ASan was already part of the tree. There was an existing -fcatch-undefined-behavior flag that we fit into, but this pretty rapidly (in time for LLVM 3.3, I believe) got phased out in favor of the -fsanitize=undefined usage that Clang still uses.
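For readers who haven’t used it, the modern spelling of this is straightforward; assuming a reasonably recent Clang, something like the following exercises the signed-overflow check at runtime (the filename and exact diagnostic text are illustrative):

```shell
# A one-line program whose signed addition overflows.
cat > overflow.c <<'EOF'
#include <limits.h>
int main(void) { int x = INT_MAX; return (x + 1) != 0; }
EOF

# -fsanitize=undefined is the usage that replaced the older
# -fcatch-undefined-behavior flag.
clang -fsanitize=undefined overflow.c -o overflow

# Running it prints a diagnostic along the lines of:
#   overflow.c:2: runtime error: signed integer overflow:
#   2147483647 + 1 cannot be represented in type 'int'
./overflow
```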

Overall, dynamic detection of integer-related undefined behaviors in C/C++ is not difficult, but convincing people to take these bugs seriously was a long struggle and the broader issue of how integer overflows relate to program bugs is interesting and deep. Fundamentally, people usually do not want, and are not good at reasoning about, fixed-width integers, but on the other hand we don’t want to just put bignums everywhere in our low-level programming languages. The other thing I take away from this effort is how a lucky break and a willingness to collaborate were really important in making the work successful, and in fact my group continues to collaborate with Vikram’s.


6 responses to “Stories Behind Papers: Integer Overflow”

  1. A second kind of non-standardization occurs with constructs such as INT_MIN%-1 which is—by our reading—well defined in ANSI C, C99, C++98, and C++11. However, we are not aware of a C or C++ compiler that reliably returns the correct result, zero, for this expression.

    This reminds me of this post (several of the examples should be well-defined but no sane/efficient implementation can handle them) as well as this thread (in which it was finally decided that the standard was clear and, hence, the compiler had to be updated).
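    A concrete illustration of why compilers get this wrong: on x86, `INT_MIN % -1` compiles to an `idiv` instruction, which faults because the companion quotient `INT_MIN / -1` overflows. Getting the arguably-correct result of zero requires an explicit guard, sketched below (the helper name `safe_mod` is made up; it assumes the caller has already excluded a zero divisor):

    ```c
    #include <assert.h>
    #include <limits.h>

    /* Returns a % b with the INT_MIN % -1 corner case handled.
       Precondition: b != 0 (division by zero is a separate UB). */
    static int safe_mod(int a, int b) {
        if (b == -1)
            return 0;   /* a % -1 is 0 for every a; avoids the idiv trap */
        return a % b;
    }

    int main(void) {
        assert(safe_mod(INT_MIN, -1) == 0);  /* raw a % b would fault here */
        assert(safe_mod(7, 3) == 1);
        return 0;
    }
    ```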

  2. Hi Dirkjan, I like Swift’s story for integer overflow (always trap on overflow) better than Rust’s, and I hope Rust will change over to this model at some point.

  3. Hi John,

    thank you for the inspiring post, it really shows that collaboration can lead to better results than working alone. I liked the point you stressed, that competition is not your main drive. Btw., are you aware of any active research going further in this direction (e.g., compiler-based detection of int./buffer (over/under)flows)?

    Keep publishing such inspiring blog posts!

  4. It would be helpful for the Standard to recognize optional guarantees which implementations could offer and programs could demand (compilers could either adjust behavior based on programs’ demands, or be allowed to reject programs whose demands they cannot meet).

    For programs that will be given data from untrustworthy sources (a very common scenario), there would be great value in having all integer computations be guaranteed to either yield some kind of result or force an abnormal program termination unless a program has configured some alternative behavior. Requiring that integer operations yield a precisely-specified result in all such cases would impair many optimizations that would otherwise be safe and useful. Allowing a compiler to treat signed computations as though performed on a larger type and then truncate results on an as-convenient basis [as is done with floating-point types when FLT_EVAL_METHOD is not greater than zero] would allow most useful optimizations to be retained while still constraining the effects of out-of-range computations.

    Such semantics would allow a compiler to treat (x+1>y) as (x>=y) at its convenience, but would not allow a compiler to treat the evaluation of x+1 as justification for omitting code elsewhere that evaluates (x<INT_MAX).
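    The commenter’s boundary case can be made concrete. Under “evaluate in a wider type” semantics, `x + 1 > y` agrees with `x >= y` even at `x == INT_MAX`, whereas two’s-complement wraparound gives a different answer there; the helper names below are invented for illustration, and the wrapped version relies on the common (implementation-defined) unsigned-to-int conversion:

    ```c
    #include <assert.h>
    #include <limits.h>
    #include <stdbool.h>

    /* x + 1 > y, evaluated in a wider type (the proposed semantics). */
    static bool gt_wide(int x, int y) { return (long long)x + 1 > y; }

    /* x + 1 > y with two's-complement wraparound: on common
       implementations, INT_MAX + 1 wraps to INT_MIN. */
    static bool gt_wrap(int x, int y) { return (int)((unsigned)x + 1) > y; }

    int main(void) {
        /* At the boundary, the wide evaluation matches x >= y... */
        assert(gt_wide(INT_MAX, INT_MAX) == (INT_MAX >= INT_MAX));
        /* ...but wraparound evaluates INT_MAX + 1 to INT_MIN and disagrees. */
        assert(gt_wrap(INT_MAX, INT_MAX) == false);
        return 0;
    }
    ```

    This is why the rewrite to `x >= y` is sound under the wider-evaluation model but not something a compiler could also use to infer that `x < INT_MAX` elsewhere.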