I have a hypothesis that compiler bugs impose a noticeable but hard-to-measure tax on software development. I’m not talking so much about compiler crashes, although they are annoying, but more about cases where an optimization or code generation bug causes a program to incorrectly segfault or generate a wrong result. Generally, when looking at a test case that triggers one of these bugs, I can’t find any reason why analogous code could not be embedded in some large application. As a random example, earlier this year a development version of GCC would miscompile this code when a and b have initial value 0:
b = (~a | 0 >= 0) & 0x98685255F;
printf ("%d\n", b < 0);
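Packaged as a self-contained program, the test case might look like this (a sketch: the 64-bit types are my assumption, since the constant does not fit in 32 bits, and the globals are zero-initialized to match the "initial value 0" condition):

#include <stdio.h>

/* assumed types: 0x98685255F needs more than 32 bits */
long long a, b;   /* file scope, so both start out zero */

int main(void)
{
  b = (~a | 0 >= 0) & 0x98685255F;
  /* ~a is all-ones, so b == 0x98685255F, a positive value;
     a correct compiler prints 0 here */
  printf ("%d\n", b < 0);
  return 0;
}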
The code looks stupid, but such things can easily arise out of macro expansion, constant propagation, function inlining, or a hundred other common transformations. Compiler bugs that are exposed by inlining are pernicious because they may not be triggered during separate compilation, and therefore may not be detectable during unit testing.
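For instance, here is an invented macro stack (hypothetical, not the actual origin of the test case above) whose expansion yields exactly that shape:

/* hypothetical macros: each looks reasonable on its own, but composed
   and constant-folded they produce the "stupid" expression above */
#define SIGN_OK(x)      ((x) >= 0)
#define LEGAL_MASK      0x98685255F
#define CLASSIFY(a, x)  ((~(a) | SIGN_OK(x)) & LEGAL_MASK)

/* CLASSIFY(a, 0) expands to ((~(a) | ((0) >= 0)) & 0x98685255F) */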
The question is: What happens when your code triggers a bug like this? Keep in mind that the symptom is simply that the code misbehaves when the optimization options are changed. In a C/C++ program, by far the most likely cause of this kind of effect is not a compiler bug, but rather one or more undefined behaviors executed by the program under development (in the presence of undefined behavior, the compiler is not obligated to behave consistently).
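As a quick illustration (my own example, not taken from any particular bug report), here is a program whose optimization-dependent misbehavior is entirely its own fault:

#include <limits.h>
#include <stdio.h>

int main(void)
{
  int count = 0;
  /* i++ eventually overflows INT_MAX: undefined behavior. At -O0 the
     loop typically wraps and exits with count == 3; at -O2 GCC may
     assume signed overflow cannot happen, conclude that i > 0 always
     holds, and emit an infinite loop. */
  for (int i = INT_MAX - 2; i > 0; i++)
    count++;
  printf ("%d\n", count);
  return 0;
}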
Since tools for detecting undefined behaviors are sorely lacking, at this point debugging the problem usually degenerates into a depressing process of setting breakpoints and watchpoints, adding print statements, and messing with the code. My suspicion (based on my own experience, both personal and gained by looking over the shoulders of many years’ worth of students) is that much of the time, the buggy behavior goes away as the result of an incidental change, without ever being fully understood. Thus, the compiler bug goes undetected and lives to bite us another day.
I’m not hopeful that we can estimate the true cost of compiler bugs, but we can reduce that cost through better compiler validation and verification, and through better tools for detecting undefined behaviors.
6 responses to “The Hidden Cost of Compiler Bugs”
In my experience, people turn off optimizations or turn them down to a level where the bug no longer occurs and then that setting stays around _forever_.
When I was at MSFT, one of the things I did was unify the build system for Visual Studio, the various compiler teams, and the .NET runtime/framework. As part of that, I completely horrified the C++ team when I gave them a breakdown of the number of different modules that were, for various good-at-the-time reasons, compiling with mixtures of reduced optimizations, disabled features, and even targeting achingly old architectures (e.g. 486, in 2003!).
The bugs had all been reported and long since fixed (one advantage of sitting in the same building as the compiler team), but un-breaking the makefiles had never happened. Everybody expected unifying the build system to speed up the build, but nobody had expected it to also speed up the product…
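For anyone who hasn’t seen these in the wild, here is a made-up sketch of the usual shape (the pragma is MSVC’s real per-function optimization switch; the function and the comment are invented):

#pragma optimize("", off)   /* workaround for /O2 codegen bug (1998), do not remove! */
void mix_block(int *p, int n)
{
  for (int i = 0; i < n; i++)
    p[i] = p[i] * 3 + 1;
}
#pragma optimize("", on)    /* restore the command-line /O settings */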
Hi Lars, great story. I’m thinking it might actually be rational to keep the suppressed optimizations around for a while, just because fixed compiler bugs may not always stay fixed, some people may be using really old compilers, and in general dealing with the effects of these bugs is so ugly.
For embedded systems, it is not uncommon to keep the same compiler version for the duration of a product, so any bugs encountered are unlikely ever to be fixed (for the purposes of that product). The upside is that the team can learn to live with the set of bugs in the chosen compiler.
A hypothesis like this seems like something you test with a survey, rather than by building code. Do computer science researchers do surveys?
Anecdotally, I cannot think of a single time I’ve tracked down a bug in my code that was actually the compiler’s fault. Other programmers I’ve talked to who’ve jumped to blaming the compiler usually end up sheepishly admitting a day or two later that the bug was their own fault, typically some sort of undefined behavior.
Perhaps I’m lucky, or I write code that isn’t tripped up by aggressive optimizations, or I’m writing for mainstream architectures where the compilers are well tested.
Hi Patrick, I imagine a lot of developers are in the same boat as you. On the other hand, there are a number of pretty reliable ways to run across compiler bugs. I’ve found probably a dozen by hand just because I know where to look. Compilers for embedded processors can be really bad, and I’ve had students trigger compiler bugs (that I verified) when writing quite small functions for embedded dev boards in my classes. If you need to use code generation options other than -O[0-3], the odds are greatly increased. If your code doesn’t look like the code that is used to test the compiler, you often hit bad corner cases. This can happen with hand-written code, but it’s more common when generating code, for example by translating some other language into C. OS kernels, mathematical libraries, and other code that looks a bit unusual in any respect often runs afoul of problems.
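To give a flavor of what I mean by code that doesn’t look like hand-written code, here is an invented sketch of the kind of C a language-to-C translator might emit:

/* hypothetical machine-generated C: flat, goto-based control flow and
   expression shapes that compiler test suites rarely exercise */
unsigned step(int s, unsigned x)
{
  switch (s) {
    case 0: goto L0;
    case 1: goto L1;
    default: goto L2;
  }
L0: x = (x << 1) ^ (x >> 3); s = 1; goto L2;
L1: x = ~x + 1u; s = 0; goto L2;
L2: return x + (unsigned)s;
}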
Not sure what the question really is…
I can recall a compiler bug that prevented us from building incrementally (the full build took six hours); think of the time lost for developers!
The issue lasted for two years. By the time we found a workaround, I suspect the developers were so used to doing a full rebuild every time that few ever went back to incremental builds…
this is the IRC version – keeps it short.
for MSVC 2k10, adjust 140/140 to 80/80.
/* reconstructed: the template parameter lists were garbled in transit,
   so the angle-bracket contents below are a best guess at code with the
   described effect. Each instantiation of c kicks off an 80-deep chain
   of b instantiations, forcing the compiler to build thousands of types
   whose names keep growing. */
#define r template<int n, class T = void>
#define v typename
r struct a { typedef T* e; };
r struct b : a<n, v b<n - 1, T>::e> {};
template<class T> struct b<0, T> { typedef T e; };
r struct c : b<80, v c<n - 1, T>::e> {};
template<class T> struct c<0, T> { typedef T e; };
int main() { c<80>::e p; }