A C or C++ program is expected to follow a collection of rules such as “don’t access out-of-bounds array elements.” There are a lot of these rules and they are listed (or are implicit) in the various language standards. A program that plays by all of these rules—called a conforming program—is one where we might, potentially, be able to make a guarantee about the program’s behavior, such as “This program doesn’t crash, regardless of input.” In contrast, when a C or C++ program breaks a rule, the standard doesn’t permit us to say anything about its behavior.
So where’s the problem? It comes in three parts:
- Some of the rules imposed by the C and C++ standards are routinely violated. For example, it is not uncommon to see creation (but not dereference) of invalid pointers, signed integer overflow, and violations of strict aliasing rules (a small code sketch of all three follows this list).
- Programmers expect a C/C++ implementation to behave in a certain way even when some of the rules are violated. For example, many people expect that creating an invalid pointer (but, again, not dereferencing it) is harmless. Program analyzers that warn about these problems are likely to lose users.
- C/C++ compilers have a standard-given right to exploit undefined behaviors in order to generate better code. They keep getting better and better at this. Thus, every year, some programs that used to work correctly become broken when compiled with the latest version of GCC or Clang or whatever.
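To make the first bullet concrete, here is a minimal sketch (the function and parameter names are made up) showing all three violations; none of these lines looks exotic, and all of them turn up in real code:

```c
int examples(int *buf, int n, int x) {
    /* 1. Creating (not dereferencing) an invalid pointer: if n is the
       length of buf, then buf + n + 1 is more than one past the end,
       and merely computing it is undefined. */
    int *past = buf + n + 1;
    (void)past;

    /* 2. Signed integer overflow: when x == INT_MAX, x + 1 does not
       wrap around; it is undefined. */
    int y = x + 1;

    /* 3. Strict aliasing violation: reading an int's storage through
       a pointer to float. */
    float f = *(float *)&y;
    (void)f;

    return y;
}
```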
This propensity for today’s working programs to be broken tomorrow is what I mean when I say these languages are not future proof. In principle a big C/C++ program that has been extensively tested would be future-proof if we never upgraded the compiler, but this is often not a viable option.
There is a long, sad history of programmers becoming seriously annoyed at the GCC developers over the last 10 years due to GCC’s increasingly sophisticated code generation exploiting the undefinedness of signed integer overflows. Similarly, any time a compiler starts to do a better job at interprocedural optimization (this has recently been happening with LLVM, I believe) a rash of programs that do stupid stuff like failing to return values from non-void functions breaks horribly. Programmers used to think it was OK to read uninitialized storage, and then compilers began destroying code that did this.
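Here is the classic shape of the signed overflow problem, with a hypothetical function name. Because signed overflow is undefined, the compiler may assume that x + 1 is always greater than x and silently delete the “safety” check:

```c
#include <limits.h>
#include <stdio.h>

int increment_checked(int x) {
    if (x + 1 < x) {              /* may be optimized away entirely */
        fprintf(stderr, "overflow\n");
        return INT_MAX;
    }
    return x + 1;
}
```

Code like this used to “work”; with a modern optimizer it quietly loses its overflow check.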
Let’s look at a specific example. In a recent series of posts (1, 2, 3), Pascal Cuoq has been using a formal verification tool called Frama-C to verify zlib. Why zlib? First, it’s not that big. Second, it’s ubiquitous. Third, it is believed to be high quality—if we ignore a build problem on Solaris, the last security issues were fixed in 2005. I would guess that it would be difficult to find a widely-used library that is clearly more solid than zlib.
So what kinds of problems has Pascal found in this solid piece of C code? Well, so far nothing absolutely awful, but it does appear to create invalid pointers and to compare these against valid pointers. Is this bad? That depends on your point of view. It is possible (and indeed likely) that no current compiler exploits this undefined behavior. On the other hand, it is not straightforward to perform formal verification of zlib unless we treat it as being written in a language that is quite similar to C, but that assigns a semantics to invalid pointers. Furthermore, a new compiler could show up at any time that does something horrible (like opening an exploitable vulnerability) whenever zlib computes an invalid pointer.
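This is not zlib’s actual code, just a sketch of the shape of the idiom being flagged: a bounds check that first computes a pointer which may lie beyond one-past-the-end of a buffer and then compares it against a valid pointer. It almost always behaves as intended, but the standard assigns it no meaning:

```c
int fits(const unsigned char *buf, unsigned have,
         const unsigned char *end, unsigned need) {
    const unsigned char *candidate = buf + have + need;  /* may be invalid */
    return candidate <= end;                             /* UB if it is    */
}
```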
Of course zlib isn’t the real problem; it’s small, and probably pretty close to being correct. The real problem is that there are billions of lines of C and C++ out there. For every thousand lines of existing code there are probably a handful of undefined behavior time bombs waiting to go off. As we move forward, one of these things has to happen:
- We ditch C and C++ and port our systems code to Objective Ruby or Haskell++ or whatever.
- Developers take undefined behavior more seriously, and proactively eliminate these bugs not only in new code, but in all of the legacy code.
- The C/C++ standards bodies and/or the compiler writers decide that correctness is more important than performance and start assigning semantics to certain classes of undefined operations.
- The undefined behavior time bombs keep going off, causing minor to medium-grade pain for decades to come.
I’m pretty sure I know which one of these will happen.
UPDATE from 1/24/2013: A comment on Hacker News pointed me to this excellent example of C not being future-proof. This is commonplace, folks. This particular undefined behavior, signed integer overflow, can be caught by our IOC tool which is now integrated into Clang as -fsanitize=integer, and will be in the 3.3 release.
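For example, a hypothetical program like the one below, built with something like `clang -fsanitize=integer overflow.c`, reports the overflow at run time instead of silently doing whatever the optimizer prefers:

```c
#include <limits.h>
#include <stdio.h>

int main(void) {
    int x = INT_MAX;
    printf("%d\n", x + 1);   /* signed overflow: flagged by the sanitizer */
    return 0;
}
```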
43 responses to “C and C++ Aren’t Future Proof”
My hope would be option 1 for application code (as opposed to the “systems code” that you have there): fewer and fewer people use these languages to write application code. Moreover, I think that’s probably happening to some extent.
If C/C++ turn into niche languages, then the undefined behaviour time bombs won’t go away, but at least we’ll be putting down fewer of them.
Is there a fifth option of C/C++ compilers becoming more aggressive about reporting, as errors, uses of undefined behavior that programs may be relying on but that the compiler is exploiting at a certain optimization level? Or do many of these undefined behaviors require analysis that’s way beyond what you could do during a routine error check?
Sorry if that’s a naive question – I live in the quite heavily-specified world of Standard ML.
Hi Lars, in his series of posts Chris Lattner argues pretty convincingly that the compiler can’t give good error messages when it performs transformations that are enabled by undefined behavior:
http://blog.llvm.org/2011/05/what-every-c-programmer-should-know.html
Michael, I agree that C is largely dead for applications code, but this doesn’t yet seem true for C++, and unfortunately the non-application code is some of the most critical. The Linux kernel people have already chosen to use a less performant dialect of C when they adopted the GCC option which forbids it from doing undefined-behavior-driven removal of null pointer checks, so perhaps there is some hope in that direction.
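To make that concrete, here is a sketch (not the actual kernel code) of the pattern behind that option: the dereference lets the compiler infer that the pointer is non-null, so the later check may be deleted unless something like GCC’s -fno-delete-null-pointer-checks is in effect:

```c
#include <stddef.h>

struct device { int flags; };

int get_flags(struct device *dev) {
    int flags = dev->flags;     /* compiler now assumes dev != NULL */
    if (dev == NULL)            /* ...so this check can be removed  */
        return -1;
    return flags;
}
```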
I think we can rule out #1 for the next decades; the legacy code bases are just too big.
#2 looks very appealing, but we’ll need tools to detect undefined behavior (clang can help out here) and educate the programmers about it. The latter will probably take decades.
I’m not involved in any standardization committee, but I can imagine the amount of flak they’ll get if someone finds out that they’re making the next revision of C++ slower instead of faster.
That leaves us with #4. Who would’ve thought?! 😉
This problem is in no way unique to C or C++. Just look at the carnage each new Python release causes.
Don’t give up on 1. Don’t port, though: come up with new platforms where people are excited to write new code and then make sure you set things up wisely.
We had a chance with phones….
“The real problem is that there are billions of lines of C and C++ out there.”
I think things like that LLVM patch that instruments for integer undefined behavior are the way to go. I’d love to see debug builds do that by default for as wide a class of cases as possible. If you are willing to play games with fat pointers you could even get it for the invalid-pointers case.
OTOH, to really make good use of that would require good test cases: likely human-written, heuristically derived, and randomly generated.
Is anyone working on generalizing Csmith-type tools to larger domains?
I agree with bcs. I’ve stated once or twice before my opinion that _DEBUG / NDEBUG should be standardised, and that the compiler should create code that traps as many undefined behaviours as possible at run-time (if it’s not possible at compile-time).
Here are my 2c opinions on each point:
1) There’s still a huge market for tightly-written C++ user apps, Herb Sutter has covered this extensively. Besides the optimisation benefits, C++ (especially v11) is still a great, flexible, general-purpose language. It just takes a long time to learn. We are professionals who use tools, not toys.
2) (i) Being aware of new u/b traps is just a natural part of ongoing professional development. (ii) You’re asking for trouble if you re-compile a code base in a new version of a compiler without checking the results. But a compiler that traps u/b like I described would be very useful.
3) Yes, but only in _DEBUG mode!
4) *cough*Java*cough*
I urge all C++ developers to make a habit of employing static and dynamic analysis tools as a regular part of your work cycle, or have some grunt coder who is assigned to run those tools. There are good clang and g++ options, and MS VC++ has great output as well. Valgrind has many tools built into it, and cppcheck and others are available for free; I suggest just running them all. Also, have a unit test folder so you can run all the test cases before making commits, which turns up any unexpected symptoms that arise from your changes in otherwise apparently unrelated portions of the code. When you fix a bug, add a test case to prove that the bug is fixed. This gives you built-in regression testing, since old bugs often resurface from future changes; when that happens, your unit tests will catch it. I also recommend the use of canaries and stack-smashing protection.
As a developer, if it is your professional opinion to do the above things, and management still chooses not to take them seriously, at least make sure you have it on record that it was your professional advice. That’s what they’re paying you for.
Along the same lines, Q/A should not be neglected. There should ideally be a lab where several Q/A professionals run through test processes on various platforms and record the results into a bug tracking database so that the known bugs can be assigned to developers, fixed, and then verified by Q/A again before being closed. These are then tracked against the versions and releases.
Magnus: I don’t mean to be just contrarian, but I can’t resist asking: why limit yourself to different versions of a compiler? You should really be careful when you switch to a different implementation of the “x86 standard” too. You really should just stick with a limited subset of those. Abstraction is just asking for trouble.
The correct answer is #3 (potentially with explicit opt-in to intuitive-correctness-destroying optimizations, for common values of intuitive), what will actually happen is #4.
For hard core systems code (kernels, VMs), #1 is a fantasy, for almost all other C code, #2 is a fantasy.
When I was a Delphi compiler developer at Borland cum Embarcadero, we were proud that we didn’t introduce optimizations that changed the correctness of unoptimized code. That is, if the code worked correctly with the optimizer turned off, it still worked correctly with it turned on. That in turn meant most programs were debugged with the optimizer turned on.
And when you’re a commercial compiler vendor, breaking customer code is Just Not Done unless you have an incredibly, amazingly, outstandingly good reason. Free software has a much lower bar to breaking code, and is much more opinionated about the merits of being technically correct, like that is some kind of moral virtue over and above getting stuff done. All the technical correctness in the world is not worth a fig when Sally’s browser crashes or Joe’s document editor loses all his work.
You can make a reasonably strong case, BTW, for zero-initializing most data structures and return values, even in optimized code. Definite assignment analysis will tell you when the zero-initialization is redundant, so in most code the cost is zero.
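A sketch of that policy, with made-up names: always zero-initialize, and rely on the compiler to drop the zeroing when it can prove every field is assigned before use:

```c
struct point { int x, y; };

struct point make_point(int x, int y) {
    struct point p = {0};   /* habit: never leave p indeterminate          */
    p.x = x;                /* both fields are definitely assigned here... */
    p.y = y;                /* ...so an optimizer can drop the {0} store   */
    return p;
}
```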
First off, this is a fantastic article.
As for how things will play out in the future…
1) IMHO, porting systems-level code to another language with a favorable outcome will not happen anytime soon, for a number of reasons.
2) I seriously hope this is the way things go, but with fewer and fewer new programmers learning C/C++, and even fewer learning the super in-depth technical parts, I highly doubt it will happen.
3) As stated above, the community will not embrace correctness while sacrificing speed. This will only be a viable option if they can manage to embrace correctness with little to no performance loss.
4) Everyone knows this is what will happen, and it’s going to be tough to deal with, although it will ensure jobs for talented C/C++ programmers who are willing to pick up the pieces and fix the bugs (at least until people get fed up and go to option 1, that is).
I’m wondering, what are your thoughts on the JSF Philosophy (JSF++)?
http://www.ldra.com/index.php?option=com_relatedimages&format=raw&fileid=213
I agree. Assuming that C & C++ are never going to be fixed, what are your thoughts on future languages in the embedded space? For example, how about Vala or a managed language like C# & the .Net micro framework?
Hi bcs (and others), I agree — solid dynamic checkers plus really good testing is as close to a solution to these problems as we’re likely to get in the short term.
Barry, that’s a great story about the Delphi compiler, thanks! And in fact many C compiler vendors take a similar point of view, though this does not seem to stop the random application breakage that we keep seeing. The general idea of providing stronger semantics than are required by the standard is a great one. As a random example, several compilers (Microsoft C and ARMCC, I believe — and probably many others) provide somewhat stronger semantics for volatile variables than is mandated.
MD I haven’t looked at JSF++ closely, but based on a quick look it seems to contain some good stuff. Similarly, there are some good ideas in MISRA C. These subsets (and of course their informal equivalents that most organizations develop) are a great complement to static and dynamic error checking tools. The problem with subsets and coding guidelines is that they provide very little help in avoiding some kinds of problems such as integer overflows.
Talking about the future of C++ in 2013, having a website named “Embedded in Academia,” and then not discussing C++11 looks either biased or uninformed. Also, from reading the article it feels that “GCC isn’t future proof” should have been the headline …
Maybe you should learn C++ before criticizing it.
The only criticism you give can be handled with basic knowledge of the STL. Your index gripe: all you’re doing is asking for the at() method without realizing it.
The idea that C++ isn’t future proof because you don’t know it well enough to know where index protection is is why I still think you should need a license to bitch about programming languages on the web.
Please keep it to languages you actually know. Speaking as someone whose first language was Pascal, the idea that Pascal is more future proof than C++ is outright laughable.
These code generation tricks are annoying because, while they’re technically allowed, they go against the spirit of C and C++, which is that the language actually maps fairly closely onto the behaviour of the underlying machine.
Out of all the restrictions you list, only strict aliasing was intended to be used for compiler optimisations. The rest are there to support exotic and mostly obsolete hardware. For example, some hardware cannot perform pointer arithmetic on out-of-range pointers – it actually triggers a hardware exception if you try. On other hardware, function pointers are a different size than data pointers, so you cannot safely cast between the two. And so on and so forth.
Most production code is of such bad quality that a few undefined behavior cases don’t matter much. I’m not speaking of zlib or the Linux kernel here (where the previous sentence does not apply), but of the tons of invisible LOB applications and backend systems deployed in corporations around the world.
Care to run Frama-C on the backend software of my bank? The thought is frightening to me.
The industry has an extreme shortage of very good programmers. Most are mediocre, and there are still tons of job openings. There are not enough good people available. Positions are being filled with garbage programmers.
Hi Mikhas, I am biased but not uninformed — at least on this particular subject. C++11 adds many interesting features but leaves the undefined behaviors of the language almost completely unchanged. This has nothing to do with GCC, although many of the interesting examples do come from GCC since it implements a ton of optimizations and has been the dominant C/C++ compiler used by the open source community.
John Haugeland, your comment fails to make sense, but thanks for reading. Pascal was my second programming language and, although I still have a soft spot for it in my heart, I do not remember comparing its future-proofness with C++’s.
makomk, I agree! Many or most of these optimizations seem to go against the spirit in which undefined behavior was originally introduced into the language. I think this is why people have such a hard time understanding these issues: they completely go against C’s reputation as a portable assembly language.
My guess is that the people writing the standards simply failed to anticipate the degree to which their wording could be exploited for code generation purposes.
Hi tobi, I generally agree about the quality of code and certainly would not care to run Frama-C on most large legacy code bases. A reasonable guideline for a tool like Frama-C is that you don’t even think about starting to use it until you’ve written every test case you can think of and all of them pass.
On the other hand I don’t think I agree with your implication that undefined behavior doesn’t matter in big legacy codes. It only takes one instance of the compiler doing something nasty to open up a security problem. These examples give an idea of how this can happen:
http://code.google.com/p/nativeclient/issues/detail?id=245
https://isc.sans.edu/diary/A+new+fascinating+Linux+kernel+vulnerability/6820
I’m not sure why this is, but at some point I became a programmer who cares about understanding what every line of code I write does, and if I come to doubt what it does, I read the appropriate documentation until I figure it out. This doesn’t seem to be the common approach people take, and it’s unfortunate.
The C/C++ standards’ undefined-behavior rules were probably written to allow portability more than performance, but as technology advanced, compiler writers found ways to use them for optimizations. Some of these are intuitive and make sense, but others do not. Some of them would break a large number of programs, and compilers tend not to do those.
But suggesting that compiler writers are wrong to implement these optimizations seems silly, given that they actually read the standard and care what it says; the only issue is that users do not. Our society works because people follow laws, rules, and conventions, and in this case they exist and should be followed until they are changed. I believe the semantics for left-shifting a 1 into the top bit of a signed integer were successfully defined, so it is possible.
As for compiler writers providing stronger guarantees, I believe MSVC now has an option to make volatile match the standard’s behavior, because when implementing ARM support the additional guarantees weren’t ‘free’ and were therefore a burden on people trying to write fast code. Granted, this change coincided with C++11 atomics support, so replacing volatile with atomic is probably good enough for transitioning.
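A sketch of that transition, shown with C11’s <stdatomic.h> (the C++11 <atomic> version is analogous): an atomic flag gives the ordering guarantees people historically hoped volatile would provide:

```c
#include <stdatomic.h>
#include <stdbool.h>

static atomic_bool ready = ATOMIC_VAR_INIT(false);

void publish(void)  { atomic_store(&ready, true); }   /* writer side */
bool is_ready(void) { return atomic_load(&ready); }   /* reader side */
```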
“We ditch C and C++ and port our systems code to Objective Ruby or Haskell++ or whatever.”
Unfortunately this is not really an option for mobile development unless you maintain multiple code bases per platform, pay for a proprietary solution like Mono (which introduces its own limitations and costs), or use a cross-platform framework that implements its own user interface, like PhoneGap or Adobe AIR. The latter usually tends to lead to poor customer experiences that don’t mesh well with the device.
The most portable and flexible option is to develop the core business logic in C++ and develop the UI using each platform’s native framework (e.g. Java on Android, Objective-C on iOS).
Perhaps people shouldn’t drive cars unassisted because they could go up on the curb and mow pedestrians down. Perhaps every person who drives a car should pay for a driving instructor to sit in there with them with one foot on the brake – that doesn’t sound expensive for the entire world to do.
6. Universities and teachers start teaching more computer science and less Java, C#, and other high-level languages that prevent students from understanding why and how to be careful when dealing with real issues, instead of the fancy, safe, and unreal virtual world those languages give them. Those students then become useless programmers, since they don’t understand why an object might not be there, how to deal with that, how to make real fixes for real bugs, and so on. Nowadays not even the teachers want to deal with this, so it seems they don’t teach real programming anymore. Those people don’t really care about programming and computers; they are just a stack of impostors.
Compiling with debugging enabled (i.e. -g for gcc) should generate checks for undefined behaviors and abort or throw an exception when one is detected (like signed overflow/underflow, etc.). Higher debug levels would check for more undefined behaviors at a higher runtime cost.
Regarding the optimization of uninitialized variables:
How do you handle hardware access and inter-task handling with this? A pointer to shared memory points to seemingly uninitialized memory, and some OSes allow a task to share its data area with other processes.
Even a call to a system function, e.g. read, will make memory initialized. That does not happen in the described case, but the variable is not static, so it might have been initialized from other code. Since there is no fixed definitions file in C, this may happen anywhere (extern unsigned long junk).
Seems I have to read something about the reasoning in compiler construction, which I have missed in the meantime.
This whole thread of thinking has been recycling for the last 25 or so years. Nothing new here. I don’t understand what the unique point is supposed to be…?
Hi Achmed, I’m not exactly suggesting that compiler writers are wrong to aggressively exploit undefined behavior. Rather, I’m trying to say that there’s a balance to be struck between correctness and performance.
Hi Bergur, I agree, this is what should be done. My group has done some of this work, for example the -fsanitize=integer flag that was recently added to Clang came from our IOC work:
http://embed.cs.utah.edu/ioc/
Hi nickels, undefined behavior is subtle and its consequences are not widely understood. I’m trying to help.
Hi Prinz, compilers don’t typically attempt to reason across tasks, or to reason about the behavior of system calls. For now, code that does these things should be safe from the optimizer.
Okay, I see… it’s not the “pointers are bad” argument recycled…
Undefined behavior… interesting… more examples would help make the concept clearer… what about the whole 32-to-64-bit transition? That is a complete quagmire in this regard…
More examples would help to convince me this is really a big issue…
I take back my comments on recycling, though!
nickels, right, this isn’t just about pointers. Chris Lattner has written some great articles about undefined behavior and I’ve written in more detail about this topic also:
http://blog.regehr.org/archives/213
Hi regehr, what do you think about the Go language?
>> http://blog.regehr.org/archives/213
Yes, this is what I was looking for.
I wonder how all the C++11 stuff fares in this regard…
Replace C/C++ with an interpreted language – WHAT?
Why do Java games work on Android?
Because they have a good timer.
Why don’t 3D games work in C#? Because there is no good timer.
Why didn’t Microsoft put a good timer there? Because of the C++ mafia inside MS. Nothing else. And then Microsoft has problems with Windows 7 on mobile phones.
They said we need performance. Why, then, is there enough performance in Java on Android for 3D games?
We trade “performance” for safety. But the problem is that nobody sees it. People are buying more Android than “big C++ games” because of safety and no crashes, and Java is universal, unlike C++ games on PlayStation and Xbox, which serve only one purpose.
You can choose:
1. Problems with C++ on Windows
2. A closed, expensive Apple platform
3. Android with Java and no crashes
4. C++ game platforms that are only for games – single-purpose platforms
Hi nickels, C++11 does not fix any of the underlying undefined behavior problems. Of course, good C++ programming style can reduce some of the risks, perhaps substantially.
I am not as intimately familiar with the C standard as you are, John, but why can’t an implementation simply choose to make many kinds of undefined behavior well-defined and give predictable semantics for them? Integer overflow? Wrap. Invalid pointer dereference? Trap. Don’t allow code motion or reordering of side-effects across traps. On hardware without safety checks, insert checks and aggressively optimize them away.
Such an implementation, if it chooses to define these behaviors close enough to how things work in practice, or to how most programmers expect today (though they may be mistaken), would eventually become a de facto standard. Given how much faster new implementations seem to be able to catch on these days (witness LLVM), it could conceivably become widespread enough to begrudgingly pull the language spec with it and force other compilers to comply. I think programmers would demand it; they’d start to consider those other compilers broken, no matter what the standard says.
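One existing knob in this direction, offered as a sketch rather than a full answer: GCC and Clang accept -fwrapv, which gives signed overflow wrapping two’s-complement semantics, so a check like the one below is dependable instead of being optimization fodder:

```c
#include <limits.h>

int will_overflow_on_increment(int x) {
    return x + 1 < x;   /* well-defined under -fwrapv; true when x == INT_MAX */
}
```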
Hi Ben, they could totally do this, and in fact the various -fsanitize=xxx options for Clang are good examples. This kind of thing is going to be a lot of help.
The problems with this plan are (1) some undefined behaviors, such as pointing to a dead stack frame, seem to be inherently expensive to check and (2) nobody has yet written a checker for many kinds of undefined behavior.