Memory Safe C/C++: Time to Flip the Switch

For a number of years I’ve been asking:

If the cost of memory safety bugs in C/C++ code is significant, and if solutions are available, why aren’t we using them in production systems?

Here’s a previous blog post on the subject and a quick summary of the possible answers to my question:

  • The cost of enforcement-related slowdowns is greater than the cost of vulnerabilities.
  • The cost due to slowdown is not greater than the cost of vulnerabilities, but people act like it is because the performance costs are up-front whereas security costs are down the road.
  • Memory safety tools are not ready for prime time for other reasons, like maybe they crash a lot or raise false alarms.
  • Plain old inertia: unsafety was good enough 40 years ago and it’s good enough now.

I’m returning to this topic for two reasons. First, there’s a new paper SoK: Eternal War in Memory that provides a useful survey and analysis of current methods for avoiding memory safety bugs in legacy C/C++ code. (I’m probably being dense but can someone explain what “SoK” in the title refers to? In any case I like the Core War allusion.)

When I say “memory safety” I’m referring to relatively comprehensive strategies for trapping the subset of undefined behaviors in C/C++ that are violations of the memory model and that frequently lead to RAM corruption (I say “relatively comprehensive” since even the strongest enforcement has holes, for example due to inline assembly or libraries that can’t be recompiled). The paper, on the other hand, is about a broader collection of solutions to memory safety problems including weak ones like ASLR, stack canaries, and NX bits that catch small but useful subsets of memory safety errors with very low overhead.
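
For concreteness, here is a minimal sketch (mine, not from the paper) of the two classic violation classes, spatial and temporal, that comprehensive enforcement traps at the faulting access; the violating lines are commented out so the snippet itself stays well-defined:

```c
#include <stdlib.h>

/* Illustrative only: a heap buffer with a spatial and a temporal
 * violation just out of reach. A safe compiler would trap either
 * commented-out access at the moment it happens; an unsafe toolchain
 * silently accepts both. Returns the last byte legally written. */
static char demo(void) {
    char *p = malloc(16);
    if (!p) return 0;

    p[15] = 'x';       /* last valid byte of the allocation: fine */
    /* p[16] = 'x'; */ /* spatial violation: one byte out of bounds */

    char last = p[15];
    free(p);
    /* p[0] = 'y'; */  /* temporal violation: use after free */

    return last;
}
```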

The SoK paper does two things. First, it analyzes the different pathways that begin with an untrapped undefined behavior and end with an exploit. This analysis is useful because it helps us understand the situations in which each kind of protection is helpful. Second, the paper evaluates a collection of modern protection schemes along the following axes:

  • protection: what policy is enforced, and how effective is it at stopping memory-based attacks?
  • cost: what is the resource cost in terms of slowdown and memory usage?
  • compatibility: does the source code need to be changed? does it need to be recompiled? can protected and unprotected code interact freely?

As we might expect, stronger protection generally entails higher overhead and more severe compatibility problems.

The second reason for this post is that I’ve reached the conclusion that 30 years of research on memory safe C/C++ should be enough. It’s time to suck it up, take the best available memory safety solution, and just turn it on by default for a major open-source OS distribution such as Ubuntu. For those of us whose single-user machines are quad-core with 16 GB of RAM, the added resource usage is not going to make a difference. I promise to be an early adopter. People running servers might want to turn off safety for the more performance-critical parts of their workloads (though of course these might be where safety is most important). Netbook and Raspberry Pi users probably need to opt out of safety for now.

If the safe-by-default experiment succeeded, we would have (for the first time) a substantial user base for memory-safe C/C++. There would then be an excellent secondary payoff in research aimed at reducing the cost of safety, increasing the strength of the safety guarantees, and dealing with safety exceptions in interesting ways. My guess is that progress would be rapid. If the experiment failed, the new OS would fail to gain users and the vendor would have to back off to the unsafe baseline.

Please nobody leave a comment suggesting that it would be better to just stop using C/C++ instead of making them safe.

{ 80 } Comments

  1. rpw | April 23, 2013 at 12:49 pm | Permalink

    SoK = Systematization of Knowledge

    It’s a weird recent thing. Computer security does not have any good journals. Hence one of the most prestigious Tier-1 conferences, IEEE Security & Privacy, has introduced this category for presenting surveys instead of novel research output. This category is called SoK, and authors need to prefix their paper titles accordingly upon submission.

  2. Bart Coppens | April 23, 2013 at 12:50 pm | Permalink

    The SoK stands for ‘Systematization of Knowledge’, which is a special track at the IEEE Symposium on Security and Privacy since 2010 that asks for ‘work that evaluates, systematizes, and contextualizes existing knowledge’. (See, for example, http://www.ieee-security.org/TC/SP2013/cfp.html).

  3. regehr | April 23, 2013 at 1:01 pm | Permalink

    Thanks Bart and rpw! I’d missed this academic meme.

  4. anon | April 23, 2013 at 3:01 pm | Permalink

    I think the problem is that there is no safe solution. Your best bet is to use an ASan build of Chrome / Firefox, but UAF is still a problem, right?

  5. regehr | April 23, 2013 at 3:34 pm | Permalink

    Hi anon, I haven’t yet read the paper but doesn’t SoftBound+CETS give decent protection against use-after-free?

    http://acg.cis.upenn.edu/papers/ismm10_cets.pdf

  6. Mike Mol | April 23, 2013 at 3:40 pm | Permalink

    How much of this could be injected into the toolchain on a distro like Gentoo? That would seem to make discovery of compatibility issues much more rapid. There’s also gentoo-hardened, whose maintainers would probably be very, very interested in trying this sort of thing.

  7. Joshua Cranmer | April 23, 2013 at 3:45 pm | Permalink

    There is a key disconnect between research and industry here that I see as being a problem in practice: research builds solutions that will work on SPEC; industry has more complex needs. The example I always use when thinking about the engineering challenges for these kinds of tools is Firefox, which has:
    * custom malloc implementation
    * dynamic loading (not even linking!) of libraries
    * JIT
    * high performance sensitivity (engineers chase 1% performance regressions)
    * inline and out-of-line assembly
    * C code that needs to be architecture-specific (basically, reflection)
    * very large (3 million lines of C++; http://quetzalcoatal.blogspot.com/2011/08/not-so-random-mozilla-central-factoids.html has more out-of-date stats)
    * multithreaded
    * inherently imprecise analysis (just about every pointer and every vtable call could be a call to external code)

    It would be hard to turn some solution on by default unless it can cope with pretty much all of these things with minimal or no code modification necessary–these problems are not going to be limited to just Firefox, but other major packages in the distribution, such as SQLite, Apache, OpenJDK, etc.

  8. regehr | April 23, 2013 at 3:45 pm | Permalink

    Mike, do you know if Gentoo can be compiled with Clang yet? If so, then there are two options. First, SoftBound+CETS:

    http://acg.cis.upenn.edu/softbound/

    Second, Clang’s address sanitizer, which is a bug-finder rather than a real memory safety solution, but which still should provide some amount of protection. It would be really interesting to see how far we could get compiling Gentoo using these tools.

  9. regehr | April 23, 2013 at 3:51 pm | Permalink

    Joshua, there’s no question that there’s some serious engineering work to do and that it’s hard to motivate researchers to do it. Another nasty chunk of work would be involved in making Linux kernel modules memory-safe by default. But anyway, all we can do is try to make these things work, solve the problems that come up, and keep trying…

  10. anon | April 23, 2013 at 7:45 pm | Permalink

    Firefox and Chrome both definitely compile under Clang; in fact, ASan is commonly used for bug finding. I would be interested in seeing a CETS+SoftBound build of Firefox. My suspicion is that it won’t have much overhead; neither browser is CPU bound, and both Firefox and Chrome are definitely usable under ASan.

    John: Do you know the authors? Can you try and ask them to get a version of Firefox / Chrome working with their system?

  11. regehr | April 23, 2013 at 8:06 pm | Permalink

    anon, I don’t know them but I’ll ask anyhow. Will report the results here, or maybe one of them can comment. (Update: mailed them evening of Tuesday 4/23.)

  12. Milo Martin | April 24, 2013 at 6:58 pm | Permalink

    You may not know us, but we read your blog. :)

    Thanks for highlighting our work. Let me try to answer some of the questions raised.

    SoftBound+CETS is a research prototype. We’ve run enough code that we’re comfortable with the performance estimates and basic approach, but the prototype is mostly the effort of one person (Santosh, now a first-semester professor at Rutgers), so he just hasn’t been able to put in the engineering effort to make it really bulletproof. So, no, the current prototype is almost certainly not robust enough to compile Firefox or Chrome.

    Also, the most recent results and description of the work can be found in Santosh Nagarakatte’s dissertation from 2012:

    http://www.cis.upenn.edu/~milom/papers/santosh_nagarakatte_phd.pdf

    The SoftBound+CETS implementation code can be found in the SAFECode sub-project of the LLVM main SVN repository. We’ve been trying to keep it current as they release new versions of LLVM, but LLVM changes rapidly enough that that likely isn’t possible much longer.

    There is, however, an industrial implementation of something quite similar to SoftBound: Intel’s “Pointer Checker”, part of the most recent releases of its C/C++ compiler:

    http://software.intel.com/sites/default/files/m/d/4/1/d/8/Pointer_Checker-Webinar.pdf

    http://d3f8ykwhia686p.cloudfront.net/1live/intel/Intel_PUMag_Issue11_Pointer_Checker.pdf

    From reading the description of how it works, it is extremely similar to the bounds checking proposed in our SoftBound PLDI paper: pointer based with disjoint metadata. Although the bounds checking is similar, the use-after-free checking we described in our CETS paper is much lighter weight than what Intel implemented.
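
    To make “pointer based with disjoint metadata” concrete, here is a toy sketch in plain C (all names invented; this is not the SoftBound or Pointer Checker API). Real systems key the metadata by where each pointer is stored and propagate it through loads and stores; a single global entry is enough to show the shape of the check:

```c
#include <stdint.h>
#include <stdlib.h>

/* Toy sketch of pointer-based bounds checking with disjoint metadata,
 * in the spirit of SoftBound / Pointer Checker. */
typedef struct {
    uintptr_t base;   /* first valid byte */
    uintptr_t bound;  /* one past the last valid byte */
} meta_t;

static meta_t shadow; /* "disjoint": lives apart from the pointer value */

static void *checked_malloc(size_t n) {
    void *p = malloc(n);
    if (p) {
        shadow.base  = (uintptr_t)p;
        shadow.bound = (uintptr_t)p + n;
    }
    return p;
}

/* The check a safe compiler inserts before each dereference:
 * returns 1 if the access is in bounds, 0 if it would trap. */
static int access_ok(const void *p, size_t access_size) {
    uintptr_t a = (uintptr_t)p;
    return a >= shadow.base && a + access_size <= shadow.bound;
}
```

    A compiler pass inserts the access_ok check (or a trapping equivalent) before every load and store, which is where the runtime overhead comes from.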

    As for the specific issues asked about, we cover several of them in our various papers, but let me comment further:

    Libraries: the long-term plan is that all the libraries would also be recompiled with SoftBound+CETS, in which case all the static and dynamic linking just works. If the library is not recompiled, a whole host of issues comes up. First and foremost is that calling unsafe code means that the program is executing unsafe code, so all bets are off. The same is true in inline assembly. In some instances, you might be able to create wrappers for unsafe code that allow for interaction, but that is a fairly manual process. We did it for many of the standard C libraries to get our benchmarks to compile, but it isn’t a trivial thing. But I think this issue is true for any such system.

    Multithreading: multithreading support in SoftBound+CETS is okay, but not something we have focused on too much. If the program is free of data races (which is required by the C/C++ memory consistency model) and avoids low-level atomic usage, then adding the necessary synchronization to the SoftBound+CETS data structures is fairly straightforward. But in such an implementation, errant data races could compromise memory safety. That is regrettable, but still better than having no memory safety at all. See Section 6.7 of Santosh Nagarakatte’s dissertation.

    Custom memory allocators: custom memory allocators work if you’re willing to annotate the allocator to mark bounds and participate in the object identification scheme. So, just a few small code changes to get the protection. Without the code changes, the entire allocation pool would just be one big object, so it wouldn’t give the fine-grained protection you would expect (but it wouldn’t really “break” either).
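
    As a sketch of what such an annotation might look like for a bump-pointer pool allocator (mark_object is an invented stand-in for whatever hook a real tool actually provides):

```c
#include <stddef.h>

#define POOL_SIZE 1024

static unsigned char pool[POOL_SIZE];
static size_t pool_used;

/* Hypothetical annotation hook (invented for this sketch): a real
 * checker would record base/bound metadata for the new object here. */
static void mark_object(void *p, size_t n) { (void)p; (void)n; }

/* Bump allocator: unannotated, a checker would treat the entire pool
 * as one big object; with mark_object, each sub-allocation gets its
 * own bounds and per-object checks become possible. */
static void *pool_alloc(size_t n) {
    if (pool_used + n > POOL_SIZE)
        return NULL;
    void *p = pool + pool_used;
    pool_used += n;
    mark_object(p, n);   /* the one-line change that restores precision */
    return p;
}
```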

  13. Milo Martin | April 24, 2013 at 7:12 pm | Permalink

    I’d also like to say a bit about the history of the project. Our project actually started out as a *hardware* project. The project was a reaction to a seminar course I taught in 2005 on hardware support for security, which included the CCured and Cyclone papers as assigned reading. We wondered: could hardware support help overcome some of the compatibility and performance issues of such approaches? We started working on it, and it resulted in an ASPLOS paper in 2008 with the title “HardBound: Architectural Support for Spatial Safety of the C Programming Language”.

    http://acg.cis.upenn.edu/papers/asplos08_hardbound.pdf

    We then realized that many of the same ideas we had in hardware could, in fact, be applied to a software implementation as well. We called that “SoftBound” and published a paper about it at PLDI 2009. We had another follow-up paper, “Watchdog”, on the hardware-centric implementation at ISCA 2012:

    http://acg.cis.upenn.edu/papers/isca12_watchdog.pdf

    Intel has also been working on similar hardware support. For evidence, see US Patent application 2011/0078389 A1:

    “Managing and implementing metadata in central processing unit using register extensions”
    http://www.google.com/patents/US20110078389

    I’ve heard rumors that Intel is planning to add hardware support for this. The fact that Intel released the SoftBound-like Pointer Checker as part of their compiler seems to lend evidence to that possibility.

    So, what will it take? Perhaps if Intel adds hardware support and really pushes it, that will be what it takes to make C/C++ memory safe.

  14. Matthew Flannery | April 24, 2013 at 9:26 pm | Permalink

    I think you’re embedded in Academia because you need to get a refund on the shitty education you got, so you choose to write about topics which you or your readers really have no clue about, so you can appear like you actually spent your money on something of value, whereas in all actuality you are no more qualified to write about them than a two year old.

    I don’t know who László Szekeres, Mathias Payer, Tao Wei, Dawn Song are, nor do I care because if all they can do is write terribly dry white papers on a topic which they re-invent just to have something to write about then I don’t really want to read it anyway…. let alone understand why I should read material put forth by bumbling idiots who spent tenure with more morons who are all just rubbing each others backs anyway…

    There is nothing new in that paper and in the end their education seems to be as wasted as yours… In the next 20 years you personally and all of your readers can look back at where they are and ask yourself only 1 question… Was the Ivy League Education someone paid for me really worth it in the end?

    Great blog and great ideas… :-/

    Definitely won’t have to worry about my comments again!

  15. regehr | April 25, 2013 at 12:11 am | Permalink

    Hi Milo, thanks for the detailed response!

    It’s excellent that you folks have kept SoftBound+CETS up to date with LLVM.

    Even if SoftBound and related tools have these limitations, it would still be a valuable exercise to start making a Linux distro safe, working through tool problems as they come up. I may be able to devote some resources to this.

    One positive side effect is that the source code modifications supporting one safe C compiler would hopefully carry over and remain useful if another compiler were used later on.

    Randomly, for our Safe TinyOS project I spent a lot of time annotating source code to make it safe using Deputy, which was a lot more hands-on than SoftBound and company. It was actually a really good exercise.

  16. Simon O'Riordan | April 25, 2013 at 1:14 am | Permalink

    Maybe I missed the point; however, wouldn’t it be easier to create a stricter compiler instead of going into billions of lines of legacy code from the outset and incorporating new methodology and packages?

    If the newer compiler detected problems with the memory allocation in the code, surely that would be the time to change it, not before?

  17. MrK | April 25, 2013 at 4:51 am | Permalink

    Sometimes it’s better to start anew. C/C++ passed that point a long time ago. 44 years of patches over patches is enough.

  18. davetweed | April 25, 2013 at 5:07 am | Permalink

    One of the things about memory safety is that it really needs to be integrated into a compiler that understands language semantics quite deeply and can establish quite a lot of inferences about code, e.g. that for

    for(i=0;i<n;++i) A[i]=f(B[i]);

    it needs to check that A[0] & A[n-1] and B[0] & B[n-1] fall within a contiguously allocated memory segment and that no other checks are needed. I don't mind potentially paying for memory safety, but I'd be put off applying memory safety checks that are over-the-top and hence have a big performance hit because they are so simply implemented.
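
    By way of illustration, here is the hoisted form davetweed is asking the compiler to derive, written by hand (safe_map and f are invented for this sketch; a real instrumenter would emit the equivalent checks automatically):

```c
#include <stddef.h>

static int f(int x) { return 2 * x; }

/* Rather than checking A[i] and B[i] on every iteration, an
 * instrumenter that can prove the accesses contiguous emits one
 * range check per array before the loop. Returns 0 and does
 * nothing if a check fails, mimicking a trapped violation. */
static int safe_map(int *A, size_t a_len, const int *B, size_t b_len, size_t n) {
    if (n > a_len || n > b_len)   /* two hoisted checks replace 2*n per-access checks */
        return 0;
    for (size_t i = 0; i < n; ++i)
        A[i] = f(B[i]);
    return 1;
}
```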

  19. Ted van Gaalen | April 25, 2013 at 5:31 am | Permalink

    If possible, avoid using C/C++ (and other programming languages that allow direct memory access) altogether. Switch to managed ones like C# or even Smalltalk. Currently I am programming iOS, Objective-C, using ARC. To prevent maintenance headaches, I am not using mallocs etc. In effect, it is all lower level and should be controlled by the system, so I can concentrate on the app itself, thanks.

  20. regehr | April 25, 2013 at 7:16 am | Permalink

    Hi Simon, I am exactly talking about switching to a stricter compiler.

    davetweed, a modern memory safety implementation is pretty smart about this sort of thing. Not nearly as smart as we want it to be, but not that bad. I encourage you to download for example the SoftBound system and try things like this out.

    Ted, the problem is that you haven’t yet taken all of the legacy C/C++ code that runs on a modern OS, and rewritten it in a managed language. Please do that, then get back to us. Thanks!

  21. Henry | April 25, 2013 at 7:27 am | Permalink

    @Matthew Flannery:

    Congratulations on submitting your PhD thesis on “Aspects of Social Fitness within groups of Computing Science Wannabes”.

  22. Doc Maynard | April 25, 2013 at 8:03 am | Permalink

    The reality is this…. Good memory-safe code comes from good programmers, not from modifying C or C++. Not C#, as that statement really shows ignorance of the subject (no disrespect meant for the language). Dave Tweed nailed it. Code should be written and the compiler should catch the issues. Then the programmer should fix them! Thirty years of code prove only one thing: C++ and C are the best choice, and what Mono and .NET are written in. If .NET and C# are safe then this only proves that C++ likewise is safe when a good programmer uses it.

    It’s like fixing a car, if you want to trust the kid at the tire shop to fix your vehicle, that’s what you get. Programmers that use C# to avoid memory issues are like the kid. C#, a good runtime language is limited. So those who think this is the answer don’t understand programming. They dabble with code that they think will be safe, while a pro knows whether or not it is safe.

  23. Andy | April 25, 2013 at 8:05 am | Permalink

    Data will always reside in memory. Think about that. With enough determination, rogue code will be able to read and alter that memory, despite your best efforts at protecting me from myself. The solution is to use better memory protection at the O/S level. Here is a tidbit about Windows. I don’t know how Linux implements memory protection. http://msdn.microsoft.com/en-us/library/windows/desktop/aa366785(v=vs.85).aspx

  24. Daniel Pfeffer | April 25, 2013 at 8:34 am | Permalink

    Perhaps the solution should be implemented at the hardware level.

    Intel’s 16-bit 80286 processor had a segmented virtual memory mode in which it was possible for each process to allocate up to 8192 segments of any size from 1 byte up to 64 KB. I recall a memory allocation validator that allocated a segment for every memory allocation, so the processor’s memory protection hardware would cause an exception if you tried to access unallocated memory.

    A similar implementation on 32-bit (or 64-bit) processors would be much more difficult – the number of pointers in a large program is much higher. Furthermore, even if segments are available, the cost of switching between them is much higher in modern processors. However, many programs are not CPU limited. Perhaps we should use some of the unused cycles on security, by re-introducing segmentation and paying the cost to switch between segments.

  25. Ranjit Jhala | April 25, 2013 at 9:08 am | Permalink

    Hi John,

    you are absolutely right when you say:

    “there’s no question that there’s some serious engineering
    work to do and that it’s hard to motivate researchers to do it”

    In my opinion, a large part of the trouble with motivating researchers
    is that among large segments of the research community, there is the
    curious view that memory safety is a “solved problem”…

    Ranjit.

  26. Nickels | April 25, 2013 at 9:20 am | Permalink

    Leave C++ alone. Hire good programmers. And use another language if you want memory safety.
    C++ is designed for performance, not memory safety.
    It’s like arguing that we should put revolutions-per-minute governors on cars because they are unsafe. Just a fancy word for going 15 MPH.

  27. regehr | April 25, 2013 at 10:19 am | Permalink

    Hi Daniel, please read Milo Martin’s comment 13: it sounds like Intel may be adding hardware support for bounds checking.

  28. Dan Sutton | April 25, 2013 at 10:31 am | Permalink

    One comment that needs to be made here is that the whole point of C (and to some extent C++) is that it *isn’t* memory-safe: it should be understood that C is more of a high-level macro assembler than it is a high-level language — one often uses it specifically in situations where you don’t want memory-safe execution, such as writing operating systems or firmware. As was pointed out earlier in this thread by Doc Maynard, memory safety comes from good programmers: meanwhile, leave C alone and let it be what it is: a highly-flexible tool which, in the right hands, can be used to achieve things which only machine code would let you do otherwise.

  29. Josh | April 25, 2013 at 10:39 am | Permalink

    @Nickels
    The problem is that today’s performance tweak is tomorrow’s exploit. The last thirty years have shown that when a “good programmer” leaves out (for example) bounds checking to improve efficiency because he feels it would be safe, that code is exactly the point where the hacker finds his next exploit.

    I’m not proposing Nanny State fixes to the problem, but anything that can add to the tool belt to improve safety like ASLR or NX is a step towards a safer computer. The most important thing though is educating programmers on good practices from day one.

    Unfortunately, that is one thing we haven’t figured out effectively yet.

  30. Alexander Schwarz | April 25, 2013 at 11:48 am | Permalink

    Who are all these people?!

  31. regehr | April 25, 2013 at 12:03 pm | Permalink

    Dan, you are not correct. The vast majority of C/C++, even in an OS kernel, is intended to be memory safe. The problem is that sometimes, despite our intentions, it isn’t memory safe. A “safe-by-default” compilation mode — with an easy escape hatch for the small minority of code that actually needs to be unsafe — is an effective defense against implementation errors.

  32. Chris | April 25, 2013 at 1:02 pm | Permalink

    John, my response to this post was too large for a comment. So I blogged it here http://blog.leafsr.com/2013/04/memory-safety-protections-and-real-world.html

  33. regehr | April 25, 2013 at 1:15 pm | Permalink

    Chris, thanks for the pointer. I think your point of view is a reasonable one and that of course we should pursue these kinds of mitigations, but I also think there are plenty of situations where memory safety is a better choice.

  34. Chris | April 25, 2013 at 1:32 pm | Permalink

    John, memory safety is required when all bugs are exploitable. But attackers have to exist and persist in an imperfect world the same as defenders. If you starve them of exploitable bugs it’s game over. This is (should be) the industry approach; academia continues to strive for program purity. A noble, but lost cause in my opinion.

  35. regehr | April 25, 2013 at 1:36 pm | Permalink

    Chris, one reason we may differ is that I’m not a security person and I deeply hate all memory corruption errors, not just those that lead to vulnerabilities. My view is that security might be the lever we use to push memory safety into deployment but it’s hardly the only benefit.

  36. Jason | April 25, 2013 at 2:43 pm | Permalink

    I have heard C# (2000) is a better language than C (from the 1950s) because it does all this stuff for you? Perhaps time to stop the laziness and learn a new code paradigm?

  37. regehr | April 25, 2013 at 3:02 pm | Permalink

    Jason, your comment and some of these others fall well below the usual standard here. These are getting in the way of a useful discussion. I expect better, and will start deleting if I have to.

    Did my article get linked to some site other than HN/Reddit today, perhaps?

  38. Milo Martin | April 25, 2013 at 3:22 pm | Permalink

    I want to stress that hardware support might be the key to reducing the performance penalty. Our simulations (based on a ton of simulation and compute-intensive workload assumptions) show just a 17% overhead on average for full memory safety (bounds checking and use-after-free checking). This is versus the 2x overhead introduced by the software-only implementation in the compiler.

    There is lots of code for which I as an end-user would gladly pay 17% to avoid security vulnerabilities and catch memory corruption bugs. As I mentioned above, based on Intel patent filings, rumors, and Intel’s software-only Pointer Checker tool, there is reason to suspect that Intel might be planning such hardware support. That could really change the calculus.

  39. Chris | April 25, 2013 at 3:24 pm | Permalink

    John – That point is not lost on me at all. There is a lot of good that will come of memory safety, especially if you can bring it to all those hundreds of millions of lines of existing C/C++ code. This is part of why I am so interested in approaches like NaCl/PNaCl. Their end goal is the same (bring safety to existing code) but obviously with a drastically different approach. As always I look forward to your next post.

  40. Alex Groce | April 25, 2013 at 3:24 pm | Permalink

    Ranjit said:

    In my opinion, a large part of the trouble with motivating researchers
    is that among large segments of the research community, there is the
    curious view that memory safety is a “solved problem”…

    True, though with huge credit to SoftBound etc., the engineering is hard enough that even those of us who completely believe this is a problem struggle to find a good cost-benefit argument for the needed resources. It’s a little bit like testing research’s problem of making it worthwhile to not just say “we tried it on some container classes, and it looked awesome.” Or when model checking papers had to produce less-toy examples.

  41. Milo Martin | April 25, 2013 at 3:33 pm | Permalink

    Cognitive bias can slip into such discussions.

    If you ask someone, will you give up X to get Y, they will often say “no”. Is it worth slowing down your code by 50% to get memory safety? No! (They value the performance they already have more than security.)

    But if you ask them the reverse question, will you give up the security of memory safety for a 50% performance improvement, they might say: What? Are you crazy? Make my machine less secure for a bit of performance? That’s crazy. I could get fired for that. (They value the security they already have more than performance.)

    This sort of cognitive bias is well documented; just by framing the same situation in a slightly different way, people will answer it differently:

    http://en.wikipedia.org/wiki/Endowment_effect

    Of course, I’m not saying it makes sense to give up 10x performance for security, etc. But as such tools have gotten better, we are closer and closer to the point at which the endowment effect begins to play a role.

  42. Ted van Gaalen | April 25, 2013 at 3:38 pm | Permalink

    Why would I need to allocate/access memory directly (well, except for hardware registers etc.) when I can use arrays, scalars, objects, collections, sets, ordered lists etc.? Let the compiler figure it out, so I can concentrate on the essentials. Take a look at Smalltalk for instance (e.g. Squeak).

  43. regehr | April 25, 2013 at 4:05 pm | Permalink

    Milo, I agree re. the endowment effect! That is really my main motivation for pushing an agenda where safety is opt-out instead of opt-in.

    The HW performance figure is really impressive, although I feel compelled to insert something here about my inherent and large distrust for simulation results in computer architecture research :).

  44. Alex Groce | April 25, 2013 at 4:06 pm | Permalink

    I will say that I think the motivation problem somewhat solves itself, as long as anyone “cares” — eventually, academic research standards also adjust to require more plausible paths from an idea to engineering implementations.

  45. regehr | April 25, 2013 at 4:07 pm | Permalink

    Ted, I am not talking about new code, where arguments like yours are applicable. I’m talking about the kind of code that is causing problems like these:

    http://www.ubuntu.com/usn/

    Notice that this is page 1 out of 42. Some of these vulnerabilities would not have happened if these programs were executing in a memory safe environment. That is all.

  46. Jason | April 25, 2013 at 4:39 pm | Permalink

    regehr, apologies I am only a software manager of a team of 15 aerospace developers? We stopped using C when a memory leak almost caused a disaster which would have perhaps led to loss of life. Feel free to continue in the stone age of development!

  48. regehr | April 25, 2013 at 4:42 pm | Permalink

    Jason, see my comment #45. None of this is about liking C/C++ or writing more of it. This is about how our entire trusted computing base for Windows, Linux, etc. is all unsafe code. We are stuck with a lot of it.

    Much embedded software is still getting written in C, I’m happy that your team’s code is not!

  49. Milo Martin | April 25, 2013 at 4:52 pm | Permalink

    John, oh yes, certainly don’t really trust the 17% number. I also don’t trust the results from such hardware simulation experiments. To quote Einstein: “An experiment is something everybody believes, except the person who made it.” :)

    But even a simple back-of-the-envelope calculation suggests that low overhead for doing such checking in hardware is not unreasonable.

    Let’s assume a simple in-order processor in which all instructions take a single cycle. In such a setting, the inputs to the calculation are primarily: (1) the percentage of all instructions that access memory (say, one third), multiplied by the bounds check cost (say, one cycle), and (2) the percentage of all memory operations that load/store pointers from the shadow space (say, 25%), multiplied by their cost (say, two cycles). That gives us 1 + (33% * 1) + (33% * 25% * 2) = 1.5, so a ballpark estimate of 50% operation overhead, which is likely to translate into relatively less performance overhead on an out-of-order, dynamically scheduled superscalar, as these operations are all off the critical path.
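
    The same arithmetic, spelled out (the numbers are the assumed fractions from the paragraph above, not measurements):

```c
/* Back-of-the-envelope model: every instruction costs 1 cycle; a
 * fraction mem_frac of instructions access memory and pay check_cost
 * cycles for a bounds check; a fraction ptr_frac of those also move
 * pointer metadata to/from the shadow space at meta_cost cycles. */
static double relative_op_cost(double mem_frac, double check_cost,
                               double ptr_frac, double meta_cost) {
    return 1.0 + mem_frac * check_cost + mem_frac * ptr_frac * meta_cost;
}
```

    With the assumed numbers, relative_op_cost(1.0/3.0, 1.0, 0.25, 2.0) comes out to 1.5, i.e. 50% more work on an idealized in-order machine.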

    In fact, there are some aspects of our simulations that are pessimistic, so the overhead could actually be lower depending on how the hardware implements the checking.

    I suspect that Intel will only release such hardware if the overheads are low enough to be deemed acceptable. In fact, one possible situation that could delay hardware support is if Intel implements it, finds its performance lacking, and then pushes it back. They did this with multi-threading, for example: all Pentium IV chips had multi-threading, but it was disabled on the first-generation Pentium IV chips due to bugs and/or poor performance.

  50. Milo Martin | April 25, 2013 at 5:10 pm | Permalink

    Jason, I don’t think anyone working on memory safety on C/C++ would argue against using modern programming languages if they fit your needs! Yes, of course!

    But, for various reasons—both legacy reasons and deficiencies with the current suite of modern memory-safe languages—C/C++ is still used. Java is almost 20 years old now, and I really would have expected it to crowd out C/C++ completely, but it has not. If anything, we have seen a resurgence of interest in C++ and “native code” for mobile applications. Java has taken over in some key niches (web services, for example), but my assessment is that there is a reason our web browsers, compilers, and databases are written in C/C++ (and not Java or C#).

    The question becomes: why? What are the deficiencies—real or perceived—of the modern languages available today vs C/C++? I think performance is actually a big one. Java, for example, adds layers of indirection and thus makes it extremely difficult to express some data structures and computations efficiently. There was a paper, “Four Trends Leading to Java Runtime Bloat” by Nick Mitchell et al., that described some of these cases. A search for “Java bloat” will turn up several interesting academic papers discussing such issues. Of course, some of the same bloat can happen in other languages (say, C++), but there are language design aspects of Java that make it really difficult to inline objects to reduce such overheads. Like C++ or not, its templates and inline objects can result in some really efficient use of memory.

    Based on some recent research we’ve been doing, we have some results that indicate we can create a language nearly as efficient as C++ but also totally memory safe and type safe, hopefully capturing the best of both worlds…

  51. Milo Martin | April 25, 2013 at 6:01 pm | Permalink

    More evidence of hardware support from Intel. Check out slide 16 of the following PDF of presentation slides:

    http://software.intel.com/sites/default/files/article/371299/09-gdb-application-debugger.pdf

    It says:

    “Point Lookout (PL): … Fast through h/w support on future processors.”

    The year on the slides is 2013.

  52. Jesse Ruderman | April 25, 2013 at 6:20 pm | Permalink

    The “Eternal War in Memory” paper says “pointers can legitimately go out of bounds as long as they are not dereferenced”. I thought that was undefined behavior.

  53. Andrew | April 25, 2013 at 7:06 pm | Permalink

    The word “unsafe” covers different situations. I work with large images, and I need pointer arithmetic. Once a wise guy tried to modify my code a little: he removed three lines of inline assembler and thereby slowed the corresponding part of the code from 20 seconds to 12 minutes! Now suppose I trace a contour line in an image. Doing so, I can walk off the image and start walking over memory that contains different data. That is one sort of “safety” issue. There is another sort, for example: a programmer allocates a 16-byte buffer on the stack for a protocol type (http, https, ftp, etc.), and a bad guy types a string like “ZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZ”. But there is a simple way to prevent that sort of attack: a simple class with safe buffers. As Kris Kaspersky explained more than 10 years ago, this problem can be fixed using SEH: three (or more) sequential pages with different access rights, where the first and last have the PAGE_GUARD or PAGE_NOACCESS attribute and the ones in between are PAGE_READWRITE. The pages inside the buffer accept the user’s input. This sort of control guarantees that the input buffer won’t be overrun.

  54. regehr | April 25, 2013 at 7:20 pm | Permalink

    Hi Jesse, here’s the actual text:

    “One problem with this approach, however, is that pointers
    can legitimately go out of bounds as long as they are not
    dereferenced. For instance, during the last iteration of a loop
    over an array, a pointer typically goes off the array by one,
    but it is not dereferenced”

    As I’m sure you know, it is OK to create (but not dereference) a pointer to the location one past the end of an array, but it is illegal to create a pointer before the start of an array or more than one element past its end.

    So the first sentence of this quote makes it look like the authors don’t quite understand the situation, but then the second one makes it seem like perhaps they do?

  55. regehr | April 25, 2013 at 7:33 pm | Permalink

    Hi Andrew, right — there are many kinds of safety that can be enforced. The “Eternal War” paper contains a good discussion of the tradeoffs between them.

  56. Alex Groce | April 25, 2013 at 9:16 pm | Permalink

    If Intel really does produce hardware support (thanks for the hope of some good news there, Milo!) I think that would “flip the switch” at least eventually. Security gains would be huge, but I think an under-appreciated gain would be the huge amount of time programmers and testers (including, yes, “good programmers”) would save because bugs that are currently insidious and intermittent, and very hard to find in testing, and often hard to debug, would become much easier to find and fix. Programmer productivity and testing effectiveness are not penny-ante things. Being able to run automated tests “under Valgrind” but without the huge decrease in throughput would be a big win for people like me, but even normal testing and development would benefit hugely, just in terms of the bugs we _already find_.

  57. Jeff Archer | April 26, 2013 at 5:48 am | Permalink

    Just learn to do a little decent design and it’s really not that difficult to know who owns the memory and when it can/should be deallocated.

  58. Milo Martin | April 26, 2013 at 8:19 am | Permalink

    The great thing about the comments on this post is that they are a microcosm of all the sorts of comments I’ve received while working on this problem over the years. In addition to supportive comments, I’ve heard everything from “bah, give up on C/C++, they are too broken” to “bah, it isn’t that hard to write correct C/C++” to “bah, anything more than 1% overhead would be totally unacceptable” to “bah, mitigations other than memory safety are good enough”.

    Working on making C/C++ safe and efficient is in many ways a pragmatic middle ground—yet also pie in the sky, as everyone knows it isn’t possible—but as we know from today’s polarized political environment, such pragmatic centrists often find themselves in some uncomfortable spots.

  59. frank | April 26, 2013 at 8:21 am | Permalink

    Instead of changing the language, why not choose a different tool from your toolbox? Mechanics choose the appropriate tools to do their work. If the tool you are using has some kind of downside, then find another tool to use. The concern you are trying to address may not be a concern to others, but the solution you are proposing will affect those who are not as concerned as you.

  60. regehr | April 26, 2013 at 8:28 am | Permalink

    Hi Milo, all true. But you haven’t even seen the handful of comments that I’ve had to delete, ugh.

    An odd fact about blogging is that I could write the most ridiculous crap and not hear a peep about it, but a post like this which is basically just common sense gets a bunch of flak.

  61. regehr | April 26, 2013 at 8:31 am | Permalink

    Hi frank, I’m repeating myself here, but what tool would you choose to mitigate problems such as those seen here?

    http://www.ubuntu.com/usn/

  62. Ted van Gaalen | April 26, 2013 at 9:07 am | Permalink

    it was a very interesting discussion!
    so long and thanks for all the fish :0)

  63. nickels | April 26, 2013 at 3:13 pm | Permalink

    Agreed that address randomization is a good trick.
    But just try to use the bounds-checked STL from Microsoft once and you will never do it again if what you are doing requires any kind of performance. Just not realistic.
    Same with checked STL in general: not even feasible as a debug method, just impossible to ever get anything done.
    Pointers: I have heard the same arguments against them for 20 years, but they stay.

  64. Andrew | April 26, 2013 at 3:38 pm | Permalink

    One very simple example: I have two images, A and B. Assume both are the same size and 32 bpp, and pA points to some pixel of image A. The pointer to the corresponding pixel of image B is pB = pA + pFirstPixelOfB - pFirstPixelOfA. This sort of pointer arithmetic will be forbidden in any “safe” language. A qualified programmer can write safe programs using C (C++). The bad news is that writing such programs requires more effort and more qualification. Employers dream of a time when sophisticated software can be written by cheap, unqualified novices. This is why simple languages are so popular.

  65. Michael B. Smith | April 26, 2013 at 3:45 pm | Permalink

    The mainframe people solved this issue in the 1950s and early 1960s. My comments refer to Burroughs Corp. (later Unisys Corp.).

    Each allocated page should have a “tag” that defines whether it is code, data, or something else. The page descriptor defines the start of the page, its length, and acceptable execution modes.

    “Unsafe” code can modify this tag, but that is only available with elevated privileges. User code, and otherwise normal code, is restricted to executing code, accessing data, and doing both only in the accepted limits.

  66. Andrew | April 26, 2013 at 9:54 pm | Permalink

    Dear Michael, what makes you think so? VirtualProtect can be called by any program running with user privileges (ring 3). Calling this function doesn’t make one’s code “unsafe”; it may even increase the security level, as I explained before. Usually the word “unsafe” refers to pointer arithmetic, where after simple arithmetic operations a pointer might point into some forbidden region of the address space.

  67. Jonathan Thornburg | April 28, 2013 at 12:45 am | Permalink

    It would be interesting to know what fraction of memory-safety bugs in C++ code could be prevented by (for example) suitable programming guidelines. I’m thinking here of things like always using std::vector::at() (which does a run-time check that an array reference is in bounds) in preference to std::vector::operator[] (which is typically implemented the same as the C [] subscript, i.e., no checking).

    Inspired by a colleague, I adopted this into my personal C++ programming guidelines about 5 years ago. It’s caught a lot of bugs since then…

    It would be interesting to classify all the security bugs in some large OS bug database by programming language (C, C++, …), and for the C++ ones try to determine something about the level of abstraction being used, e.g., was the programmer manipulating raw arrays vs was she manipulating STL containers?

    Another idea… turn on a bounds-checking STL in some large C++ codebases (e.g., Firefox). I would happily live with a Firefox that was a factor of 2 slower than its already bloated state if it had a substantial security improvement…..

  68. Nuno Lopes | April 28, 2013 at 12:46 am | Permalink

    Clang already has a ‘-fsanitize=bounds’ flag, which performs instrumentation for buffer-overflow detection (intra-procedural only). The overhead is usually just a few percent.
    There’s still some work to be done on the optimization side, and the instrumentation still needs to be extended to the inter-procedural setting, but it’s a start, I guess.

  69. Jonathan Thornburg | April 28, 2013 at 12:48 am | Permalink

    Followup…

    What would be really interesting to know is, what fraction of (say) the last N year’s firefox security bugs could have been prevented by such programming guidelines and/or bounds-checking STL and other containers? If that number is “50%” then we have something very interesting.
    If it’s 1% (and they’re not “the nastiest 1%”) then we have something a lot less interesting…

  70. Jonathan Thornburg | April 28, 2013 at 12:54 am | Permalink

    @Nuno #68:

    It’s perhaps also worth noting that on OpenBSD,
    /usr/bin/gcc comes with the ProPolice stack-protection extension turned on by default (and this has been the case since sometime around 2005). This is the compiler used to build the kernel and almost all of the userland.

  71. Jonathan Thornburg | April 28, 2013 at 1:22 am | Permalink

    A final note… ProPolice (which catches a large fraction of stack-smashing buffer overruns) has a very low overhead — around 2% to 3% CPU time and less than that for code size.

  72. Milo Martin | April 28, 2013 at 8:09 am | Permalink

    @Jonathan, preventing stack smashing is great, and it is good to see OpenBSD willing to give up a bit in performance for more security. However, attackers have adapted and the sort of memory corruption vulnerabilities being exploited in the wild have moved far past simple stack smashing.

  73. regehr | April 28, 2013 at 10:03 am | Permalink

    Jonathan, a fascinating piece of followup work to the “eternal war” paper would be a large-scale study of vulnerabilities from (say) the last 24 months with an analysis of what technologies (if any) would have rendered each bug unexploitable, had they been deployed.

    As Milo indicates, we keep raising the bar for security but attackers show remarkable adaptability. And of course even if we turned on memory safety for an entire platform, attackers would simply shift all of their efforts towards non-memory bugs.

    The hypothesis behind memory safety research, which we maybe have not yet managed to state very clearly, is something like: The costs of pervasive memory safety for C/C++ are worth paying because (1) safety will stop entire classes of attacks once and for all, as opposed to just raising the barrier to entry, and (2) developer productivity will increase due to easier, earlier detection of safety errors.

    Milo’s guess, which I’m liking more and more, is that perhaps this hypothesis is false for software-only safety and true for hardware-assisted safety.

  74. Paulo Pinto | April 28, 2013 at 10:23 am | Permalink

    First of all, let me say that those who complain about C/C++ vs. Java/C# seem to miss the point that there are native-code compilers for such languages, and even research OSes written in them.

    Nowadays when targeting Windows Phone 8, C# is actually compiled to native code and the new compiler (Roslyn) is done in C#, not C++.

    Around 30 years ago we had the chance to get safer systems programming languages with Modula-2 and Ada; sadly the industry, for various reasons, chose the C route, with the price we pay nowadays in security.

    The only way to make C memory safe, without having tons of tools that offer patches over patches in terms of safety, is to have some kind of Safe C, where the usual errors are disallowed except in unsafe blocks or similar.

    Namely:

    - automatic decay of arrays into pointers when passed as parameters;

    - lack of bounds checking (make bounds checking selective, as in languages with stronger type systems);

    - offer real arrays, instead of relying on the developer to specify the size, which opens the door to copy-paste errors.

    Who knows, maybe we just need more security exploits until someone really puts the brakes on.

  75. Magnus | April 29, 2013 at 2:52 am | Permalink

    64/Andrew,

    All sorts of different things could happen with pB = pA + pFirstPixelOfB - pFirstPixelOfA if pointers are segmented. For example, if you break the expression up into tmp = (pA + pFirstPixelOfB) and pB = tmp - pFirstPixelOfA, the first part (the addition) could add the complete values of both pointers, but the second part (the subtraction) could assume that tmp is a pointer into A, and might only subtract the non-segment part of pFirstPixelOfA from tmp.

    I think
    pB = pFirstPixelOfB + (pA – pFirstPixelOfA)
    is easier to understand and should also be correct as far as any compiler with any pointer model is concerned (the bit in brackets is valid and returns a ptrdiff_t, which can be added validly to pFirstPixelOfB).

  76. Andrew | April 29, 2013 at 4:05 pm | Permalink

    pB = pFirstPixelOfB + (pA - pFirstPixelOfA) will be correct in C++, but as I said, this sort of pointer arithmetic is forbidden in so-called “safe” languages. There are other problems. Suppose I declare a 2D array A[3][3]. What about b = A[4][0]? Or A[4][0] = 0? In fact it is data corruption. If a language controls access to data, it must prevent such a set of indexes. On the other hand, that means I cannot treat a 2D array as a 1D one and vice versa. I have a feeling that “safe” languages bind my hands.

  77. David LeBlanc | May 1, 2013 at 5:35 pm | Permalink

    Until Jonathan chimed in, no one mentioned STL, which is part of the C++ standard. If you standardize on use of STL containers (note that not all programming problems are amenable to this), then you can turn potentially exploitable dereferences into C++ exceptions, which are not generally exploitable. As to perf, if you take the time to actually do perf tuning, you start figuring out how to use STL in a performant way, and for some pieces of code, you actually get it to go faster. For example, std::vector is generally within a few percent of the best implementations of a resizable buffer, and several times faster than the more common bad ones. Correctly used STL does not have to be a performance hit. On the way to converting the code, you make things exception safe, which generally implies that everything is initialized and cleaned up correctly, and the perf investigation will often reveal places the code may not have a good design, STL or not.

    While this observation isn’t especially exciting from an academic standpoint, from an engineering standpoint, the advice to go actually learn the more advanced parts of the standard and use it to make all sorts of reliability problems go away is solid.

    To those of you who might complain “We don’t use exceptions in our code”, my response is to ask whether you ever dereference any pointers. If you do, you’re programming with exceptions, just not nice, well-controlled exceptions.

  78. regehr | May 1, 2013 at 9:32 pm | Permalink

    David, I agree: a safe-by-default STL isn’t too sexy but is probably a very good idea in the short run. Unfortunately this does not fix legacy C and non-STL C++ code.

  79. Paulo Pinto | May 2, 2013 at 2:26 am | Permalink

    David and regehr, I agree with both.

    C++ can be used in a safe way, if developers restrict themselves to Modern C++ with the STL, alongside -Wall -Werror and static analysis.

    The main problem is that too many developers write C like code in C++, and this also does not solve the problem for pure C programs.

  80. Laszlo Szekeres | May 7, 2013 at 1:12 am | Permalink

    Hi! I’m the main author of the mentioned SoK paper. Thanks a lot for referring to it! I know I got here a bit late (although I also follow the blog), but I was really happy to see this long conversation. After all, that was the primary goal of the paper: to re-initiate discussion of the topic. But to reflect on the original post or proposal, I’m not sure we can wrap it up yet.

    The overhead of pointer based solutions (like SoftBounds+CETS) is too high (2-4x). Hardware support would of course fundamentally change the situation, but I don’t think we should just wait for Intel (and AMD and ARM) to implement it, since we can only speculate about it.

    Object based protections (like ASAN or BBC) don’t provide full protection, and their overhead is still a bit high (2x).

    Projects like SafeCode could have significantly lower overhead, but like other solutions based on static pointer analysis (such as CFI/DFI/WIT/DSR), they have serious compatibility/modularity issues (dynamic libraries).

    So as the SoK paper suggested as well, I don’t think there is such thing as the “best available memory safety solution”. I think more research is needed, or to quote myself: The war is not over. :)

    I’m looking forward to reading newer posts on the topic and thanks again for mentioning our paper!