Use of Assertions

Assertions are great because they:

  • support better testing,
  • make debugging easier by reducing the distance between the execution of a bug and the manifestation of its effects,
  • serve as executable comments about preconditions and postconditions,
  • can act as a gateway drug to formal methods.

Assertions are less than great because they:

  • slow down our code,
  • make our programs incorrect — when used improperly,
  • might trick some of us lazy programmers into using them to implement error handling,
  • are commonly misunderstood.

This post will go through the care and feeding of assertions with the goal of helping developers maximize the benefits and minimize the costs.


The best definition of assertion that I’ve seen is from the C2 wiki:

An assertion is a Boolean expression at a specific point in a program which will be true unless there is a bug in the program.

This definition immediately tells us that assertions are not to be used for error handling. In contrast with assertions, errors are things that go wrong that do not correspond to bugs in the program. For example, we might ask if this assertion is a good one:

int result = open (filename, O_RDWR);
assert (result != -1);

The definition tells us that the expression result != -1 will be true unless there is a bug in the program. Clearly this is not (generally) the case: the call to open() will fail depending on the contents of the filesystem, which are not wholly under the control of our program. Therefore, this assertion is a poor one. The ability to make a clear distinction between program bugs and externally-triggered error conditions is the most important prerequisite for effective use of assertions.

This post on assertions by Ned Batchelder gives an alternate definition:


Asserts that an expression is true. The expression may or may not be evaluated.

  • If the expression is true, execution continues normally.
  • If the expression is false, what happens is undefined.

Notice the difference between the first definition — which told us what an assertion actually means in terms of program correctness — and this one, which is operational and doesn’t mention bugs at all. This is a language designer’s and compiler writer’s view of assertions. It is useful because it makes it clear that while the assertion might be executed, we must not count on its being executed. This foreshadows an issue that I’ll discuss below, which is that assertions must not change the state of the program.

“What happens is undefined” when an assertion fails is taking a pretty strong view of things. First, this says that assertion failure might result in a message being logged to a console, a fatal signal being sent to the application, an exception being thrown, or no effect at all. Second, “undefined behavior when p is false” is equivalent to “p may be assumed to be true”. Therefore, the compiler should feel free to optimize the program under the assumption that the asserted condition holds. Although this might be what we want — in fact it would be really cool if adding assertions made our code faster rather than slower — it’s not an interpretation that is universally useful. As developers, we might want to count on a certain kind of behavior when an assertion fails. For example, Linux’s BUG_ON() is defined to trigger a kernel panic. If we weaken Linux’s behavior, for example by logging an error message and continuing to execute, we could easily end up adding exploitable vulnerabilities.

Where Do Assertions Come From?

When we write an assertion, we are teaching a program to diagnose bugs in itself. The second most common problem that students have with assertions (the first most common being a failure to make the distinction between bugs and error conditions) is figuring out what to assert. This is not so easy at first, but (for many people, at least) it soon becomes second nature. The short answer to where assertions come from is “outside of the core program logic.” Specifically, assertions come from:

Math. Basic math gives us simple ways — such as casting out nines — to check for errors in a complex operation. In a mathematical computer program there are typically many opportunities to exploit similar tricks. If we’re computing the angles between the sides of a triangle, it would be reasonable to assert that the angles sum to 180 degrees (or close to 180 if we’re using floating point). If we’re implementing an involved method of computing square roots, we might assert that the result squared is equal to the original argument. If we’re implementing a fast and tricky cosine function, asserting that its output is in [-1,1] would be a useful sanity check. In general, CS theory tells us that there are likely to be many problems that are inherently much more difficult to solve than to verify. Asserting the verification of the solution to any hard problem (whether it is in NP or not) would be an excellent thing to do.

Preconditions. When something has to be true for our code to execute correctly, it’s often worth asserting the condition up-front. This documents the requirement and makes it easier to diagnose failures to meet it. Examples include asserting that a number is non-negative before computing its square root, asserting that a pointer is non-null before dereferencing it, and asserting that a list does not contain duplicate elements before relying on that fact.

Postconditions. Often, near the end of a function, some kind of non-trivial but easy-to-check guarantee will be made, such as a search tree being balanced or a matrix being upper triangular. If we think there’s even a remote chance that our logic does not respect this guarantee, we can assert the postcondition to make sure.

Invariants. Data and data structures often have properties that need to hold in non-buggy executions. For example, in a doubly-linked list, things have gone wrong if there exists a node where n->next->prev != n. Similarly, binary search trees often require that the value attached to the left node is not greater than the value attached to the right node. Of course these are easy examples: programs such as compilers that do a lot of data structure manipulation can have almost arbitrarily elaborate data structure invariants.

The spec. Since a program is buggy by definition if it doesn’t implement the spec, specifications can be a powerful source of conditions to assert. For example, if our financial program is designed to balance the books, we might assert that no money has been created or destroyed after the day’s transactions have all been processed. In a typesetting program, we might assert that no text has been placed outside of the page.

Assertions vs. Contracts

As shown in the previous section, assertions fill a variety of roles. Assertions about preconditions and postconditions that hold at module boundaries are often called contracts. Programming languages such as Eiffel, D, Racket, and Ada provide first-class support for contracts. In other languages, contract support is available via libraries or we can simply use assertions in a contract-like fashion.

How About Some Examples?

Let’s look at assertions in action in two sophisticated code bases. Here’s a super badass assertion in Mozilla that was triggered by 66 different bugs. Jesse Ruderman adds that “about half of the bugs that trigger the assertion CAN lead to exploitable crashes, but without a specially crafted testcase, they will not crash at all.” Here’s another awesome Mozilla assertion with 33 bugs that could trigger it.

The LLVM project (not including Clang, and not including the “examples” and “unittests” directories) contains about 500,000 SLOC in .cpp files and about 7,000 assertions. Does one assertion per 70 lines of code sound high or low? It seems about right to me.

I arbitrarily picked the C++ file in LLVM that is at the 90th percentile for number of assertions (the file with the median number of assertions contains just one!). This file, which can be found here, contains 18 assertions and it’s not too hard to get a sense for their meaning even without seeing the surrounding code:

assert(BlockAddrFwdRefs.empty() && "Unresolved blockaddress fwd references");
assert(Ty == V->getType() && "Type mismatch in constant table!");
assert((Ty == 0 || Ty == V->getType()) && "Type mismatch in value table!");
assert(It != ResolveConstants.end() && It->first == *I);
assert(isa<ConstantExpr>(UserC) && "Must be a ConstantExpr.");
assert(V->getType()->isMetadataTy() && "Type mismatch in value table!");
assert((!Alignment || isPowerOf2_32(Alignment)) && "Alignment must be a power of two.");
assert((Record[i] == 3 || Record[i] == 4) && "Invalid attribute group entry");
assert(Record[i] == 0 && "Kind string not null terminated");
assert(Record[i] == 0 && "Value string not null terminated");
assert(ResultTy && "Didn't read a type?");
assert(TypeList[NumRecords] == 0 && "Already read type?");
assert(NextBitCode == bitc::METADATA_NAMED_NODE); (void)NextBitCode;
assert((CT != LandingPadInst::Catch || !isa<ArrayType>(Val->getType())) && 
        "Catch clause has a invalid type!");
assert((CT != LandingPadInst::Filter || isa<ArrayType>(Val->getType())) && 
        "Filter clause has invalid type!");
assert(DFII != DeferredFunctionInfo.end() && "Deferred function not found!");
assert(DeferredFunctionInfo.count(F) && "No info to read function later?");
assert(M == TheModule && "Can only Materialize the Module this BitcodeReader is attached to.");

The string on the right side of each conjunction is a nice touch; it makes assertion failure messages a bit more self-documenting than they otherwise would have been. Jesse Ruderman points out that a variadic assert taking an optional explanation string would be less error prone, and he also points to Mozilla’s implementation of this, which burned my eyes.

Do the assertions in LLVM ever fail? Of course they do. As of right now, 422 open bugs in the LLVM bug database match the search string “assertion.”

GCC (leaving out its testsuite directory) contains 1,228,865 SLOC in its .c files, with about 9,500 assertions, or about one assertion per 130 lines of code.

Things I’d be interested in hearing from readers:


  • Your favorite assertion
  • The assertion ratio of your favorite code base, if you have the data available and can share it

Mistakes in Assertions

There’s only one really, really bad mistake you can make when writing an assertion: changing the state of the program while evaluating the Boolean condition that is being asserted. This is likely to come about in one of two ways. First, in a C/C++ program we sometimes like to accidentally write this sort of code:

assert (x = 7);

This can be avoided using Yoda conditions or — better yet — just by being careful. The second way to accidentally change the state of the program is like this:

assert (treeDepth() == 7);

but unfortunately treeDepth() changes the value in some variable or heap cell, perhaps via a longish call chain.

In case it isn’t totally clear, the problem with side-effects in assertions is that we’ll test our program for a while, decide it’s good, and do a release build with assertions turned off and of course suddenly it doesn’t work. Or, it might be the release version that works but our debug build is broken by a side-effecting assertion. Dealing with these problems is highly demoralizing since assertions are supposed to save time, not eat it up. I feel certain that there are static analyzers that warn about this kind of thing. In fact, the original paper about the work that became Coverity’s tool mentions exactly this analysis in Section 4.1, and also gives plenty of examples of this bug. This is an area where language support for controlling side effects would be useful. Such support is extremely primitive in C/C++.

I feel compelled to add that of course every assertion changes the state of the machine, for example by using clock cycles, flipping bits in the branch predictor, and by causing cache lines to be loaded or evicted. In some circumstances, these changes will feed back into the program logic. For example, a program that runs 2% slower when compiled with assertions may run afoul of network timeouts and behave totally differently than its non-assert cousin. As developers, we hope not to run into these situations too often.

Other assertion mistakes are less severe. We might accidentally write a vacuous assertion that gives us a false sense of confidence in the code. We might accidentally write an assertion that is too strict; it will spuriously fail at some point and need to be dialed back. As a rule of thumb, we don’t want to write assertions that are too obviously true:

if (b) {
  x = 3;
} else {
  x = 17;
}
assert (x==3 || x==17);

It is also not useful to gunk up a program with layers and layers of redundant assertions. This makes code slow and hard to read. The 1-in-70 number from LLVM is a reasonable target. Some codes will naturally want more assertions than that; some codes will not need nearly as many.

Ben Titzer’s comment illustrates some ways that assertions can be misused to break modularity and make code more confusing.

One danger of assertions is that they don’t compose well when their behavior is to unconditionally terminate a process. Tripping over assertions in buggy library code is extremely frustrating. On the other hand, it’s not immediately obvious that compiling a buggy library with NDEBUG is a big improvement since now incorrect results are likely to percolate into application code. In any case, if we are writing library code, we must be even more careful to use assertions correctly than if we are writing a standalone application. The only time to assert something in library code is when the code truly cannot continue executing without dire consequences.

Finally — and I already said this — assertions are not for error handling. Even so, I often write code asserting that calls to close() return 0 and calls to malloc() return a non-null pointer. I’m not proud of this but I feel that it’s better than the popular alternatives of (1) ignoring the fact that malloc() and close() can fail and (2) writing dodgy code that pretends to handle their failure. Anyhow, as a professor I’m allowed to write academic-quality code sometimes. In a high-quality code base it might still be OK to crash upon encountering an error that we don’t care to handle, but we’d want to do it using a separate mechanism. Jesse mentions that Mozilla has a MOZ_CRASH(message) construct that serves this purpose.

Misconceptions About Assertions

A few years ago, in a class where the assignments were being written in Java, I noticed that the students weren’t putting very many assertions in their code, so I asked them about it. Most of them sort of looked at me blankly but one student spoke up and said that he thought that exceptions were an adequate replacement for assertions. I said something like “exceptions are for error handling and assertions are for catching bugs” but really that wasn’t the best answer. The best answer would have been something like this:

Exceptions are a low-level control flow mechanism in the same category as method dispatch and switch statements. A common use for exceptions is to support structured error handling. However, exceptions could also be used to implement assertion failure. Both assertions and structured error handling are higher-level programming tasks that need to be mapped onto lower-level language features.

Another misconception (that I’ve never heard from a student, but have seen on the net) is that unit testing is a superior replacement for assertions. This would be true only in the case where our unit tests are perfect because they catch all bugs. In the real world neither unit tests nor assertions find all bugs, so we should use both. In fact, there is a strong synergy between assertions and unit tests, as I have tried to illustrate in a previous post. Here’s a blog post talking about how assertions interfere with unit testing. My opinion is that if this happens, you’re probably doing something wrong, such as writing tests that are too friendly with the implementation or else writing bogus assertions that don’t actually correspond to bug conditions. This page asks if unit tests or assertions are more important and here’s some additional discussion.

The checkRep() Idiom

I believe it is a good idea, when implementing any non-trivial data structure, to create a representation checker — often called checkRep() — that walks the entire structure and asserts everything it can. Alternatively, a method called repOK() can be implemented; it does not contain any assertions but rather returns a Boolean value indicating whether the data structure is in a consistent state. We would then write, for example:

assert (tree.repOK());

The idea is that while unit-testing the data structure, we can call checkRep() or assert repOK() after every operation in order to aggressively catch bugs in the data structure’s methods. For example, here’s a checkRep() that I implemented for a red-black tree:

void checkRep (rb_red_blk_tree *tree)
{
  /* root is black by convention */
  assert (!tree->root->left->red);
  checkRepHelper (tree->root->left, tree);
}

/* 
 * returns the number of black nodes along any path through 
 * this subtree 
 */
int checkRepHelper (rb_red_blk_node *node, rb_red_blk_tree *t)
{
  /* by convention sentinel nodes point to nil instead of null */
  assert (node);
  if (node == t->nil) return 0;

  /* the tree order must be respected */
  /* parents and children must point to each other */
  if (node->left != t->nil) {
    int tmp = t->Compare (node->key, node->left->key);
    assert (tmp==0 || tmp==1);
    assert (node->left->parent == node);
  }
  if (node->right != t->nil) { 
    int tmp = t->Compare (node->key, node->right->key);
    assert (tmp==0 || tmp==-1);
    assert (node->right->parent == node);
  }
  if (node->left != t->nil && node->right != t->nil) {
    int tmp = t->Compare (node->left->key, node->right->key);
    assert (tmp==0 || tmp==-1);
  }

  /* both children of a red node are black */
  if (node->red) {
    assert (!node->left->red);
    assert (!node->right->red);
  }

  /* every root->leaf path has the same number of black nodes */
  int left_black_cnt = checkRepHelper (node->left, t);
  int right_black_cnt = checkRepHelper (node->right, t);
  assert (left_black_cnt == right_black_cnt);
  return left_black_cnt + (node->red ? 0 : 1);
}
Hopefully this code makes some sense even if you’ve never implemented a red-black tree. It checks just about every invariant I could think of. The full code is here.


Often, a switch or case statement is meant to cover all possibilities and we don’t wish to fall into the default case. Here’s one way to deal with the problem:

switch (x) {
  case 1: foo(); break;
  case 2: bar(); break;
  case 3: baz(); break;
  default: assert (false);
}

Alternatively, some code bases use an UNREACHABLE() macro that is equivalent to assert(false). LLVM, for example, uses llvm_unreachable() about 2,500 times (however, their unreachable construct has a pretty strong semantics — in non-debug builds, it is turned into a compiler directive indicating dead code).

Light vs. Heavy Assertions

In many cases, assertions naturally divide into two categories. Light assertions execute in small constant time and tend to touch only metadata. Let’s take the example of a linked list that maintains a cached list length. A light assertion would ensure that the length is zero when the list pointer is null, and the length is non-zero when the list pointer is non-null. In contrast, a heavy assertion would check that the length is actually equal to the number of elements. Heavy assertions for a sort routine would ensure that the output is sorted and maybe also that no element of the array has been modified during the sort. A checkRep() almost always includes heavy assertions.

Generally speaking, heavy assertions are most useful during testing and probably have to be disabled in production builds. Light assertions might be enabled in deployed software. It is not uncommon for large software bases such as LLVM and Microsoft Windows to support both “checked” and “release” builds where the checked build — which is customarily not used outside of the organization that develops the software — executes heavy assertions and is substantially slower than the release build. The release build — if it does include the light assertions — is generally only a tiny bit slower than it would be if the assertions were omitted entirely.

Are Assertions Enabled in Production Code?

This is entirely situational. Let’s look at a few examples. Jesse passes along this example where a useful check in Mozilla was backed out due to a 3% performance hit. On the other hand Julian Seward said:

Valgrind is loaded with assertion checks and internal sanity checkers which periodically inspect critical data structures. These are permanently enabled. I don’t care if 5 percent or even 10 percent of the total run-time is spent in these checks–automated debugging is the way to go. As a result, Valgrind almost never segfaults–instead it emits some kind of a useful error message before dying. That’s something I’m rather proud of.

The Linux kernel generally wants to keep going no matter what, but even so it contains more than 11,000 uses of the BUG_ON() macro, which is basically an assertion — on failure it prints a message and then triggers a kernel panic without flushing dirty buffers. The idea, I assume, is that we’d rather lose some recently-produced data than risk flushing corrupted data to stable storage. Compilers such as GCC and LLVM ship with assertions enabled, making the compiler more likely to die and less likely to emit incorrect object code. On the other hand, I heard from a NASA flight software engineer that some of the tricky Mars landings have been done with assertions turned off because an assertion violation would have resulted in a system reboot and by the time the reboot had completed, the spacecraft would have hit the planet. The question of whether it is better to stop or keep going when an internal bug is detected is not a straightforward one to answer. I’d be interested to hear stories from readers about situations where assertions or lack of assertions in deployed software led to interesting results.

Limitations of Assertions

Some classes of program bugs cannot be effectively detected using assertions. There’s no obvious way to assert the absence of race conditions or infinite loops. In C/C++, it is not possible to assert the validity of pointers, nor is it possible to assert that storage has been initialized. Since assertions live within the programming language, they cannot be used to express mathematical concepts such as quantifiers — useful, for example, because they support a concise specification of part of the postcondition for a sort routine:

   ∀ i, j : 0 ≤ i ≤ j < length ⇒ array[i] ≤ array[j]

The other half of the postcondition for a sort routine — which is often left out — requires that the output array is a permutation of the input array.

Some bugs can in principle be detected by assertions, but in practice are better detected other ways. One good example of this is undefined integer operations in C/C++ programs: asserting the precondition for every math operation would clutter up a program unacceptably. Compiler-inserted instrumentation is a much better solution. Assertions are best reserved for conditions that mean something to humans.

Assertions and Formal Methods

Recall my preferred definition of assertion: an expression at a program point that, if it ever evaluates to false, indicates a bug. Using predicate logic we can flip this definition around and see that if our program contains no bugs, then no assertion can fail. Thus, at least in principle, we should be able to prove that each assertion in our program cannot fail. Doing these proofs for non-trivial code is difficult, although it is far easier than proving an entire program to be correct. Also, we have tools that are specifically designed to help us reason about assertions; my favorite one for C code is Frama-C. If we write our assertions in ACSL (Frama-C’s annotation language) instead of in C, we get some advantages. First, ACSL is considerably more expressive than C; for example, it provides quantifiers, mathematical integers, and a way to assert that a pointer refers to valid storage. Second, if we can prove that our ACSL annotations are consistent with the source code, then we have a very strong result — we know that the assertion holds on all program paths, not just the ones that we were able to execute during testing. Additionally, as a backup plan, a subset of ACSL (called E-ACSL) can be translated into regular old assertions.


Assertions live at a sweet spot where testing, documentation, and formal methods all come together to help us increase the quality of a code base.

Many thanks to Robby Findler and Jesse Ruderman for providing feedback on drafts of this article.



35 responses to “Use of Assertions”

  1. I’ve thought for quite a while that ‘assert’ should have a meaning in release (optimised) builds, which is that the compiler can assume the condition is true.

    E.g. if you know that a variable will (should!) always be a power of two, asserting so might let the compiler use a shift operation in a release build.

    As for mistakes while using assertions, I made one some months ago while doing something tricky. I’ve always assumed that assert was implemented if _DEBUG was defined, and a NOP otherwise. In fact it’s the other way around and a kind of double negative: assert is implemented only if NDEBUG is not defined!

  2. Hi Magnus, both GCC and LLVM support __builtin_unreachable(), so it is easy to implement assert() like this in release builds:

    #define assert(x) if (!(x)) __builtin_unreachable()

    I just tried this for a small assert-heavy C file that I had sitting around and it did not seem to result in better code being generated by either Clang or GCC. It is possible that the optimizers have not been taught to exploit the dead paths that this produces, and it is also possible that the code I tried simply does not contain the correct assertions that would make good optimization possible.

  3. I tend to think by writing, so my code is littered with tons of comments (with embarrassing ASCII art for diagrams).

    One trick that’s worked well for me is to encode as many of those comments into assertions as possible. From an informal count of several of my research projects, I found ~3% of lines to be assertions — 1 assertion per 33 lines. I think that was Python code, though, so there’s less boilerplate.

    I wrote a more fluffy version of your piece here:

  4. On the other hand, it’s not hard to construct examples by hand where __builtin_unreachable() does result in (slightly) better code generation, such as when this is put at the end of a switch whose cases are meant to be exhaustive.

  5. In my lab’s codebase, grep finds an assert per every ~180 lines of code. So, sparser than LLVM, though I suspect we have both less structure to work with, and many cases where assertions would be about distributed state that can’t be examined locally. I still expect that ratio could reasonably be substantially tightened if we tried.

  6. An unmentioned benefit of assertions is that they make your program more useful for (other people’s) research. Effectively they are a weak oracle but can still be useful for verification purposes (have I generated a program that seems to do what the original did, what specification was this code trying to adhere to).

  7. Hi John, I haven’t kept up with your blog in a while!

    Re __builtin_unreachable(): Generally, we tend not to want assert(x) to turn into a compiler assumption in release mode; instead, we want to do something “robust” — i.e., muddle along and hope something hides the error from the user. (This is better than a plain crash in release mode, anyway.)

    Also, sometimes we want very “heavy” (side-effecting, non-trivial) assertions in debug mode. For example, assert(repOK(myTree)). We certainly don’t want to force the compiler to generate code for that assertion in release mode! (Which is what would happen with your proposed release-mode “assert”.)

    So I prefer to spell the __builtin_unreachable variant as “assume”:

    #ifdef NDEBUG
    #define assume(x) do { if (x) (void)0; else __builtin_unreachable(); } while (0)
    #else
    #define assume(x) assert(x)
    #endif

    Then, you can use assume() for lightweight hints to the optimizer in release mode, and assert() for heavyweight documentation of invariants and bug-catching in debug mode.

    Compare the following function with and without the assume(), on GCC 4.6+ or Clang 3.5:

    void increment(std::vector<int>& x) {
      assume(x.size() == 10);
      for (int i=0; i < x.size(); ++i) ++x[i];
    }

    If you say "assume(x.size() == 10)", you get the optimal code for
    looping 10 times. (-funroll-loops works, too.)
    If you say "assume(x.size() == 0)", you get a no-op function "rep; ret".
    If you say "assume(x.empty())", ditto.

    (But if you say "assume(!x.empty())" or any variation I tried,
    unfortunately you do NOT get the optimal code for a "do-at-least-once"
    loop.)

  8. I’m surprised you said “network timeouts” but not “race conditions”. In particular, asserting consistency of a concurrent data structure can be very helpful (because writing concurrent data structures is hard), but also very harmful (by introducing extra synchronization operations that are removed in release builds).

    I’m also curious about your remark about controlling side effects in C++ — doesn’t const help somewhat (at least more than some other languages)? Though if you have a non-const T, you’d have to pass it and the predicate separately to an assert that takes a const T& to get checking. Or do you want something more fine-grained than types?

  9. Your eyes burn there with good reason. 🙂 C++’s variadic macro syntax is pretty unpleasant to work with (perhaps necessarily, given its purely-textual nature), which causes some of the gunk. Non-intuitive macro expansion rules (you can’t concatenate macros or macro calls, you have to pass the arguments through an extra layer of macro calls to concatenate the expanded results) cause more. The MSVC bug is probably the biggest problem that code has to deal with, tho. Cut out that, and things would be a good deal simpler.

  10. An example of when I use assertions, and also a case where (I think) errors cannot easily be distinguished from bugs.
    It is when assertions are supposed to trigger mainly in cases when hardware fails (i.e. RAM content or processor registers are corrupted by cosmic rays). Conditions in such assertions usually are preconditions and postconditions in functions, catches of unreachable code sections (like default in switch/case) and detection of some other problems associated with unexpected code flow (i.e. before function return it is checked if there was executed function entry code).
    Of course such assertions also catch “normal” bugs but their main intention is to catch hardware errors.

    BTW> I know, for safety purposes there is available and there should be used appropriate software (i.e. Ada not C) and hardware (i.e. parity bits, memory protection, doubled processors etc.) but the reality is that there is a class of devices where such hardware protections can be not enough and there is also another class of devices where such hardware is too expensive but certain safety level is still required (i.e. ASIL A,B).

  11. QEMU’s codebase seems to have 4660 assertions in a total of ~802,000 lines, which is about one every 170 lines. We tend toward what I would think of as a fairly “assertion-light” style.

    I notice you suggest asserting that a pointer is non-null before dereferencing it. For me that’s one of the things I would tend to reject as “pointless assertion” in code review, since I think the benefit of assertions is catching bugs early, and in the sequence “assert(p); x = *p;” you are not catching the bug any earlier or making the issue any easier to debug than if you just let the null pointer cause a segfault. I guess if you’re targetting an embedded system where null pointer dereference isn’t an immediate segfault you’d want to be more aggressive with the assertions about null pointers, though.

  12. As a C++ programmer I’m a big fan of using ‘assert’ to embed contracts in the program – especially for multithreaded programming I have a lot of ASSERT(mutex.IsLockedByCurrentThread()) in order to check that a mutex protecting a certain data structure is taken as a precondition to calling methods that operate on the structure. I know there are other ways of ensuring that certain locks are taken (and also in a certain order), but all the ones I have seen have seemed rather inflexible for production code.

  13. There are further dark sides to assertions. I won’t quibble with your excellent write up on the advantages. There are plenty of advantages, and I do use and appreciate assertions from time to time.


    I find code with too many assertions smells bad. I’ve worked on several code bases where the noise ratio as a result of heavy asserting was so high that I felt constantly distracted trying to figure out both what exactly the assertion is trying to check, and why–e.g. what would go wrong if it no longer held?

    Assertions get increasingly worse the farther they are from the source of the bug for which they are checking. They find that something is wrong (or at least, it is different than what was assumed in this code path). They don’t–and can’t–tell you why or how that went wrong. When they are misplaced, we get invariants from one part of the system expressed in another part of the system. Sometimes it is as simple as a post-condition from one module being incorrectly placed as a pre-condition in another module, or vice versa (usually, the latter being worse). But when modules have complex dependencies and assertions are misplaced, they contribute to increasing confusion about the roles of different modules and the functioning of the system. Done wrong, assertions break abstraction boundaries and can improperly describe contracts. So careful placement of assertions is as important as their existence.

    Too many assertions tend to encode current behavior or only the execution paths tested so far. I’ve repeatedly found cases where over-asserting makes the code brittle and resistant to change. Many times an algorithm will work _just fine_ for the general case, but the author or maintainer was too paranoid to allow it through. These arise out of fundamental misunderstandings of the code. Maybe the code is complex. Maybe the system is complex. But too many asserts can often make the problem worse, not better. And sometimes I get the impression that assertions are used as a crutch to avoid handling a difficult case and generalizing an algorithm. I’ve had this problem so many times in implementing compilers that I always implement the general case first and won’t accept code that doesn’t at least have a default, conservative, and maybe even embarrassingly slow behavior for complex cases. At least it is correct.

    I see assert overload in a codebase arise over time, little by little. A bug occurs. Something unexpected happens. In the course of debugging, a programmer adds assertions in various places of the code, trying to find where something goes wrong. If the assertions are spread too far afield, then the above problems start to take hold. And they will multiply.

    Another problem is that redundant assertions have zero value. In the limit, we could assert(true) at every line in the program. But such asserts never fail, and only litter the code with annoying tangents, checking trivially true conditions. Well-designed systems, particularly ones that employ immutable data in just the right way, make many kinds of preconditions structurally incapable of failing. Of course, a strong type system falls into this category. A well-designed one makes all runtime type assertions (both implicit and explicit) completely unnecessary. It would be of no value to assert dynamically an invariant that is statically true. But where to draw the line?

    I would suggest the following:

    1.) An assert that never indicates a bug in this program is not useful.
    2.) An assert that never fails is only useful as documentation. (And by _never fails_, I include the entire testing phase before the code is deployed.)
    3.) An assert that only serves as documentation is only useful if it expresses relevant and clear information to the local code.

  14. Favourite codebase: cryptominisat, a SAT solver which I develop. 30K LoC, 675 assertions, which is 1 per 44 LoC. I am personally quite in favour of assertions. They allow me to catch a wrong state early, which significantly lowers the time it takes to find where the state was messed up.

    As for leaving them turned on in release builds, I’m on the side of Julian Seward — even though SAT solvers are notorious for trying to get every CPU cycle out of the machine. We even write our own memory managers to keep things contiguous in memory. But I’d rather focus on giving correct output and finding bugs early.

    As for light & heavy assertions, I have an extra-assert build that enables the heavy ones, which I use when there is a bug to chase. It allows me to catch the wrong state even earlier. Then I only pay the penalty when I really need it. Still, the penalty paid is significantly lower than if I had to try to figure out the bug without such help.

  15. @pm215: re assert(p != NULL), in some environments that assertion is very valuable because it points to the exact variable that is null, whereas a seg-v will only (at best) give you the line the seg-v occurred on or (at worst) the bare fact that one occurred. This matters most for hard-to-reproduce bugs, and particularly for bugs that occur in situations where a debugger is unavailable.


    Regarding __builtin_unreachable, ACSL, etc., one thing I’d like is a compiler (note: not just a static analysis tool) with an optimization pass that specifically targets static proofs of run-time asserts. That is, the compiler takes the assertions as a hint: not a statement of what is true, but of what it should attempt to prove is true. Assertions that it can prove to be unconditionally true can be removed. Assertions that can be proved to be unconditionally false can become compile-time errors. And the rest (those that are conditionally true) might be good candidates for assertions higher up in the code (replace one assert inside a loop with another before it). And whatever it proves along the way feeds into the optimizer.

    Along the lines of code coverage analysis, it would be nice to be able to generate a report of what was done and possibly how easily (assertions that are tautological without having to examine any “non-trivial” code might be worthless noise). This would allow some interesting IDE integration as well.


    Do any languages have the construct of “partially undefined behavior”? For example, assert could be defined such that: “if the condition is false, then execution will halt and will not continue to the following statement, but is otherwise undefined behavior.” This would allow some safety guarantees while still giving the compiler a lot of latitude to do things like move asserts past code that has side effects (execution can’t continue to the following statement if it never got to this one).

  16. Phil, assertions about the state of distributed systems sounds like a fun research topic!

    Arthur, thanks, and glad to see you here again! I plan to do a followup post about assume(). I’ve been playing with assume() and have been finding it tricky to come up with assumptions that the compiler can actually make use of.

  17. Jeffrey, the const qualifier is OK but support for pure functions is probably what is needed here. Races probably would have been a better example than network timeouts. I tend to use assertions heavily when writing concurrent code and the added synchronization is indeed a thorny problem.

    Artur, thanks! I tend to not run into RAM corruption in my daily life so I hadn’t even considered this issue.

    pm215, another reason to assert non-nullness of pointers is that compilers are free to assume that undefined behavior does not exist, and may do something stranger than generating a nice crash when this assumption is violated.

  18. bcs, I agree that compilers should work hard on assertions, but my guess is that the current generation of compilers is so focused on being fast that they are just far too stupid to do meaningful work along these lines.

    My belief is that subsequent generations of compilers will routinely use things like SAT/SMT solvers and will be able to do a much better job at this sort of thing. Also, they will use search-based techniques to find optimal instruction sequences and other great things.

  19. bcs, one more thing: when using a formal methods tool like Frama-C, it is common to use assertions in exactly the mode that you describe. If I put in an assertion, it forces the tool to prove the condition. This result is then available as a lemma to support subsequent (and presumably more difficult) proofs. Here’s an example that I saw recently:

    assert x <= 46339 || x > 46339 ;

    It looks stupid but (in context) it provides the tool with a case split that the tool would not have figured out on its own.

  20. I enjoy your blog very much!

    You asked for examples of “favorite assertions”. Here is mine.

    The main thing that I wanted to accomplish with this assertion was to catch structure padding issues in C/C++ code. I knew that my code might present some problems in certain environments, and I wanted to catch these problems as soon as possible. However, I did not anticipate what my co-worker would do when the assertion failed…

  21. Regarding fast compilers, I’ve several times wished for an “optimize for x minutes/hours/days” flag. With things like feedback-directed optimization and massive build clusters (available to anyone willing to spend the $$$) in the mix, it seems like a false economy for the compiler designer to limit how much compute time I can spend on the problem. Or, more generally, what happens if we throw out our assumptions about what the build process looks like? This seems to me to be a fascinating research opportunity, as it opens up exploration of a lot of options that I’ve never even heard of:

    – Assume you have a cluster, what (beyond parallel make) can you do with it? (e.g. distributed IPO)
    – Assume you have a benchmark or “workload simulator” that the build system can make use of; what kind of instrumentation would you want to run through it? I.e., how far can you take FDO?
    – Assume you have “exhaustive tests”, can you find “technically invalid but practically acceptable” optimizations?
    – What kind of artifacts can we export from the build to enable work elsewhere in the development life-cycle?


    The particular difference between what I was suggesting and formal methods is that I wasn’t suggesting that the compiler be *obligated* to prove the assertion.

  22. Good point about compilers making assumptions. Personally I think the right way to fix that is for the compiler (and the language standard) to insert assert(!undefined_behaviour_I_am_about_to_rely_on) when it is going to generate surprising code. Sadly as you say this isn’t really the direction compiler writers are currently headed — I think the incentives are overly weighted in the direction of “produce benchmark scores as good as possible and hang everything else” 🙁

  23. @Ben re “An assert that never fails is only useful as documentation”

    I’m not sure why you use the word “only” in that sentence, given the number of complaints devs make about lack of internal documentation. Asserts are the best sort of documentation because they are eminently verifiable even as the code around them evolves.

    Given my druthers, I would rather debug overly asserted code than under-asserted code. Of course tautologies/non-sequiturs are silly, but OTOH when I see this (e.g.):

    if ( pFuncArg ) { …; }

    … I’m often left wondering if _pFuncArg_ is truly allowed to be NULL or is it just that the implementor was entirely too meek in their pre-condition assertions. The distinction is often important.

    The problem is that you never quite know which invariant expression floating around in your head while you write the algorithm is the invariant that, if asserted, will prevent an important bug later on down the road. Choose wisely…

  24. I am the author and maintainer of a small middleware library, a layer of abstraction over sockets and shared memory. There, the assert to C source code ratio is 1:17. I counted it both with and without unit-test code; in both cases the ratio turned out to be the same.

    In that library, the applications use message buffers created by the library. One of my favorite assert()s is one which checks that the message buffer pointer used by the application resides in the valid zone known to the library, and that the length of the message buffer is not bigger than it is supposed to be. It is on the heavier side, but it has been useful, since it has caught potential application bugs.

    I prefer to write one condition per assert. So, instead of writing

    assert( cond1 && cond2 );

    I would write

    assert( cond1 );
    assert( cond2 );

    This way, if any of the conditions fails, I immediately know what _exactly_ has failed.

    I liked the red-black tree invariant checker. I have never seen it or Emin Martinian’s code, but I did the same thing when I implemented an AVL tree at my past job. I implemented a function “sane()”

    /// Alert: High complexity
    bool AvlTree::sane() const

    which checks each and every property of the AVL tree.

    If I call any function inside assert(), I manually/visually make sure that it is “const.” I wish there were some programmatic way of doing it.

  25. Years ago, I was influenced by the assert discussions in John Robbins “Debugging Applications” and Steve Maguire “Writing Solid Code”.

    The first thing I do, after setting up version control, is add custom assert and verify macros/functions to all my embedded projects. The custom assert functionality safely shuts down any hardware and may disable certain interrupts and keep watchdogs happy. If I have room in memory, I keep a string representation of the assert expression, filename, and line number in my assert module. If memory is tight, I shorten the filename and expression that are saved to a few characters. I also have an ignore variable that allows me to step out of my assert which often helps me find the problem.

  26. The goal of our SMACCMPilot project is to build higher-assurance embedded flight-control systems. We currently have about 2000 assertions in about 40 kloc of C code. The assertions are generated automatically by our domain-specific language, Ivory, which generates embedded C. Most assertions check for arithmetic underflow/overflow and division by zero, but there are others. We typically insert a breakpoint in our assertions so we can debug with GDB/JTAG.

    We use assertions both as automatic triggers for model-checking as well as in testing. As you note, they provide us a way to gain confidence we don’t have bugs even if we haven’t verified all the code (and verifying non-linear bounded arithmetic is difficult!).

    We have also used assertions as a cheap way to test a compiler: we have a compiler insert assertions that any valid program should satisfy and then if it fails, there is a compilation bug.

    One final story: I once introduced a bug in the compiler which generated assertions that could fail on correct programs (the assertion itself caused an arithmetic overflow, which resulted in it evaluating to false)! I am now much more careful about auto-generating assertions, and have even used SMT to verify that the assertions test the correct properties.

  27. Hi bcs, this cluster-compiler thing would be really interesting to work on. You can imagine that certain companies have maybe a few thousand C++ functions that collectively cost millions of dollars per day in power and machine time. If these could be sped up by an average of 10%, real money might be saved.

  28. If you use loop variants as Dijkstra suggests, they can be used to guard against infinite loops.
    The loop variant is constructed to be a computed integer that decreases each time through the loop and has a minimum possible value. By asserting that the loop variant decreases and that it does not become smaller than the minimum value, any infinite loop would eventually trigger one of those assertions.

  29. Running off of what Dale said, some of my favorite assertions were auto-generated by a CIL module I had that took C code, calculated a static upper bound on loop iterations for every loop (or complained that the loop wasn’t guaranteed to terminate), and threw in an assert that each loop hadn’t run more times than that.

    Never made it to production versions, though.

  30. Regarding exceptions, it is instructive to check out (if you can find it):

    Cheriton, D. R. (1986). Making Exceptions Simplify the Rule (and Justify their Handling). In IFIP Congress (pp. 27-34).