How Should Integers Work?


Integer computer math is kind of a bummer because it’s a lot more complicated than it seems like it should be. I’m specifically talking about the way that fixed-width integers wreak havoc on program logic when they overflow. There are other issues, but this seems like the big one. So: How should integers work?

C/C++’s undefined overflow semantics for signed integers is a real pain and I don’t think anyone would say this is how integers should work.
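
For concreteness, here is the sort of thing this bites (a minimal sketch; the exact behavior depends on compiler and optimization level):

    #include <climits>
    #include <cstdio>

    // Because signed overflow is undefined, an optimizer is allowed to assume
    // that x + 1 > x always holds and fold this function to "return true".
    bool still_bigger(int x) {
      return x + 1 > x;   // undefined behavior when x == INT_MAX
    }

    int main() {
      // Often prints 0 at -O0 (the addition wrapped) and 1 at -O2 (the
      // comparison was folded away); either is permitted, since the behavior
      // is undefined.
      std::printf("%d\n", still_bigger(INT_MAX));
    }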

Lots of languages like Java specify two’s complement semantics for integer overflow. This is reasonably efficient (though it kills optimization opportunities in loop code) but it has the problem of not matching the behavior of mathematical integers. Which integer does Java give us when we increment 2,147,483,647? It gives the integer that is — out of all possible results — the farthest from the mathematical result. Just the other day I heard a story from the high performance computing people in my department where a parallel code worked correctly except at the largest scale, when an integer would overflow. I don’t know how they debugged this but it can’t have been fun.
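
To see just how far off the wrapped result is, here is the same increment with defined two’s complement semantics, mirroring Java’s behavior (a small sketch; the wrap is done through unsigned arithmetic so the computation itself is well defined in C++):

    #include <cstdint>
    #include <cstdio>

    // Two's complement wraparound via unsigned arithmetic. (The final
    // conversion back to int32_t is modular in C++20; before that it is
    // implementation-defined, but two's complement in practice.)
    int32_t wrapping_increment(int32_t x) {
      return static_cast<int32_t>(static_cast<uint32_t>(x) + 1u);
    }

    int main() {
      // 2,147,483,647 + 1 "equals" -2,147,483,648: of all representable
      // results, the one farthest from the mathematical answer.
      std::printf("%d\n", static_cast<int>(wrapping_increment(INT32_MAX)));
    }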

Saturating integer semantics, where overflows stop at the maximum or minimum representable value, are useful in signal processing, but they are not used as the default integer semantics in any language that I know of. Unfortunately, few platforms offer native support for saturating semantics, so making them the default would carry a significant performance penalty. Also, a design choice needs to be made about whether saturation is sticky or not. In other words, does decrementing INT_MAX give INT_MAX, or does it give INT_MAX-1?
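
A minimal sketch of the non-sticky variant (assuming int is narrower than long long, as on mainstream platforms):

    #include <climits>
    #include <cstdio>

    // Non-sticky saturating addition: results are clamped to [INT_MIN, INT_MAX],
    // but a clamped value is an ordinary value afterwards.
    int sat_add(int a, int b) {
      long long r = static_cast<long long>(a) + b;   // widen to avoid overflow
      if (r > INT_MAX) return INT_MAX;
      if (r < INT_MIN) return INT_MIN;
      return static_cast<int>(r);
    }

    int main() {
      int x = sat_add(INT_MAX, 1);           // clamps to INT_MAX
      std::printf("%d\n", sat_add(x, -1));   // non-sticky: prints INT_MAX - 1
      // A sticky design would instead remember that x overflowed and keep
      // returning INT_MAX here.
    }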

IEEE floating point supports sticky NaN values to represent nonsensical results. Adding a NaN integer value to represent overflows would seem like a reasonable compromise between efficiency and correctness. Unfortunately, the two’s complement representation has no spare bits to represent NaN, nor of course do hardware platforms support transparent propagation of integer NaNs.
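
Lacking a spare bit pattern, an integer NaN can only be simulated in software, along these lines (a sketch; the struct and the explicit checks stand in for what hardware would have to do transparently):

    #include <climits>
    #include <cstdio>

    // A software stand-in for an "integer NaN": an int plus a sticky invalid
    // flag. Hardware has no spare bit pattern for this and no transparent
    // propagation, so every operation pays for an explicit check.
    struct NanInt {
      int value;
      bool nan;
    };

    NanInt nan_add(NanInt a, NanInt b) {
      if (a.nan || b.nan) return {0, true};               // NaN propagates
      long long r = static_cast<long long>(a.value) + b.value;
      if (r > INT_MAX || r < INT_MIN) return {0, true};   // overflow -> NaN
      return {static_cast<int>(r), false};
    }

    int main() {
      NanInt x = nan_add({INT_MAX, false}, {1, false});
      NanInt y = nan_add(x, {-1, false});
      std::printf("nan = %d\n", y.nan);   // prints 1: the overflow is not masked
    }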

Integer types can be infinitely ranged. Bignums have a reputation for poor performance, but they can be pretty fast if a smart compiler and runtime optimistically assume that operations don’t overflow, but silently trap into the bignum library when necessary. This is clearly the right choice for high-level languages, and clearly the wrong choice for systems languages like C and C++. The problem in C/C++ is that they are used to implement OS kernels, hypervisors, and bare-metal embedded systems that will become impossible to code if memory allocation can be triggered at nearly arbitrary program points. Small embedded systems don’t have a dynamic allocator, and OS kernels have environments like interrupt handlers where allocation is dicey at best. To summarize, the most critical software systems are also the ones where we cannot afford friendly integer semantics.
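
A sketch of the optimistic scheme, using the GCC/Clang __builtin_add_overflow intrinsic; bignum_add_slowpath is a hypothetical stand-in for the allocating library call:

    #include <cstdint>
    #include <cstdio>
    #include <cstdlib>

    // Hypothetical stand-in for the slow path: a real implementation would
    // allocate an arbitrary-precision result here, which is exactly the step
    // that is unacceptable in kernels, interrupt handlers, and tiny embedded
    // systems.
    int64_t bignum_add_slowpath(int64_t a, int64_t b) {
      std::fprintf(stderr, "overflow: would promote %lld + %lld to a bignum\n",
                   static_cast<long long>(a), static_cast<long long>(b));
      std::abort();
    }

    // Optimistic fast path: one machine add plus one branch; the bignum
    // library is entered only when the overflow flag fires.
    // __builtin_add_overflow is a GCC/Clang builtin; other compilers spell
    // the check differently.
    int64_t checked_add(int64_t a, int64_t b) {
      int64_t r;
      if (__builtin_add_overflow(a, b, &r))
        return bignum_add_slowpath(a, b);
      return r;
    }

    int main() {
      std::printf("%lld\n", static_cast<long long>(checked_add(1, 2)));
      checked_add(INT64_MAX, 1);   // takes the (aborting) slow path
    }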

Another option is explicitly ranged integers backed by formal methods tools. Here, the programmer basically has to prove that integers cannot overflow. This is unsuitable for many programming tasks because it is too difficult, but it’s probably the right answer for safety critical systems.
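
As a toy illustration of pushing the proof obligation into the type system (real safety-critical work uses dedicated provers and languages such as SPARK; the Ranged template here is purely illustrative):

    #include <cstdint>
    #include <cstdio>

    // Toy explicitly ranged integer: the legal range is part of the type, and
    // addition produces a type covering the sum of the operand ranges. Any
    // program whose ranges could leave the machine type is rejected at
    // compile time.
    template <long long Lo, long long Hi>
    struct Ranged {
      static_assert(Lo <= Hi, "empty range");
      static_assert(Lo >= INT32_MIN && Hi <= INT32_MAX, "range too wide for int32_t");
      int32_t v;   // invariant: Lo <= v <= Hi
    };

    template <long long L1, long long H1, long long L2, long long H2>
    Ranged<L1 + L2, H1 + H2> add(Ranged<L1, H1> a, Ranged<L2, H2> b) {
      // The result type's own static_asserts are the "proof" that this
      // addition cannot overflow int32_t.
      return { static_cast<int32_t>(static_cast<int64_t>(a.v) + b.v) };
    }

    int main() {
      Ranged<0, 1000> a{700};
      Ranged<0, 1000> b{600};
      auto c = add(a, b);   // type Ranged<0, 2000>; provably in range
      std::printf("%d\n", c.v);
      // Chaining enough additions to push the range past INT32_MAX fails to
      // compile, which is the proof obligation described above.
    }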

Finally, we can have hybrid integer semantics that combine various options. This adds flexibility but makes the programming model more complex, particularly when different kinds of integers interact.

So: How should integers work?

This post shows some amusing bugs in simple integer codes.


17 responses to “How Should Integers Work?”

  1. You know, I think that it would be nice if C/C++ were only used for low-level systems programming. So maybe real integers are not such an obviously bad choice there, given how much of our infrastructure is actually using C/C++ (compared to how much should be…)

  2. PS: why is “might allocate” a worse property than “might give the wrong answer” in situations like hypervisors and whatnot? Or is there some reason why addition that overflows can actually be the right answer?

  3. Signed and unsigned ints of sizes S | 1 <= S <= 64. Two's complement (wrapping) arithmetic by default. Saturating and trapping arithmetic operators available, but syntactically set off somehow (e.g. x ++ y).

    Bignums should be a library that is implemented with variants (tagged unions):


    type BigInt =
        int#63
      | BigInteger

    And the compiler should be smart enough to realize it needs only one bit to distinguish the two variants and can use a 64-bit representation without boxing in the case of the int#63.
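
    A rough C++ rendering of that tagged, unboxed representation, assuming a 64-bit platform where heap pointers are at least 2-byte aligned (BigNum is a placeholder for the boxed arbitrary-precision case, not a real library):

    #include <cstdint>
    #include <cstdio>

    struct BigNum;   // heap-allocated arbitrary-precision integer, not shown

    // One machine word; the low bit is the tag. Odd words carry a 63-bit
    // integer shifted left by one, even words are a pointer to a BigNum.
    struct BigIntWord {
      uintptr_t bits;

      static BigIntWord from_small(int64_t v) {      // assumes v fits in 63 bits
        return { (static_cast<uintptr_t>(v) << 1) | 1u };
      }
      static BigIntWord from_big(BigNum* p) {
        return { reinterpret_cast<uintptr_t>(p) };    // aligned, so low bit is 0
      }
      bool is_small() const { return (bits & 1u) != 0; }
      int64_t small() const {
        // arithmetic right shift; sign-correct on two's complement platforms
        return static_cast<int64_t>(bits) >> 1;
      }
    };

    int main() {
      BigIntWord w = BigIntWord::from_small(-42);
      std::printf("small=%d value=%lld\n",
                  w.is_small(), static_cast<long long>(w.small()));
    }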

  4. Here is a scheme that might work for tolerably low-level languages.

    Let all operations on integers (arithmetic, comparisons, etc) be mathematically honest. That is, a + b > c means exactly what it looks like and the original types of a, b or c do not matter as long as they all are integers of some sort. (In other words, operations do not depend on the exact integer types of their arguments.)

    Operations do not normally overflow (except when exceeding the maximum intermediate range, or for things like division by zero). There should be no undefined behaviour.

    Provide both modulo (unsigned or two’s complement) types of any width up to at least 64 bits, and range types. Assignment (or binding, etc) of values that cannot be represented to variables of range types would trap. (This is a little like Ada; see the sketch after this comment.)

    We don’t necessarily need bignums; we could restrict intermediate values to fit into a 128-bit integer. Most computations would get away with far less in practice.

    Left shifts would be problematic, but any limit that we need to impose should be rather generous – we want the semantics to be predictable and implementation-independent. Right shifts would be “arithmetic” (sign-preserving), since operands do not have signedness, only signs.

    The usual Sufficiently Advanced Compiler should be able to generate code equivalent to that of C when only modulo types are used, and considerably safer code for range types. Perhaps range types could provide further optimisation opportunities, or at the very least interesting static checking.

    I would also like dependent types for things like array indices, and a pony.
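
    A sketch of this scheme in C++ terms, using __int128 (a GCC/Clang extension) as the wide intermediate type and a trapping narrow() standing in for assignment to a range type:

    #include <cstdint>
    #include <cstdio>
    #include <cstdlib>

    using wide = __int128;   // all intermediate arithmetic happens here

    // "Assignment to a range type": trap if the value does not fit the
    // declared range, as in Ada's constraint checks.
    template <int64_t Lo, int64_t Hi>
    int64_t narrow(wide v) {
      if (v < Lo || v > Hi) {
        std::fprintf(stderr, "range violation\n");
        std::abort();
      }
      return static_cast<int64_t>(v);
    }

    int main() {
      int64_t a = INT64_MAX, b = INT64_MAX;
      wide sum = static_cast<wide>(a) + b;            // honest: no intermediate overflow
      std::printf("%d\n", sum > a);                   // honest comparison, prints 1
      int64_t ok = narrow<0, INT64_MAX>(sum / 4);     // fits the range: no trap
      std::printf("%lld\n", static_cast<long long>(ok));
      narrow<0, 100>(sum);                            // traps: value out of range
    }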

  5. This is a problem for the JIT compiler writers. The correct answer is that the CPU should detect the overflow and trigger an interrupt, which goes into some runtime library logic that recovers the function’s state, recompiles it with ‘int’ replaced by ‘bignum’ in the relevant places, and resumes. If a variable that was promoted from int to bignum this way gets stored in a struct or class, then relocate it onto a special page such that accessing it triggers a similar trap.

    All the components necessary to do this are available, it’s just really hard. So the major VMs end up not bothering.

  6. Why not have different types and put the onus on the programmer?

    Like, have “overflowingInteger”, “undefinedInteger”, “rangeCheckedInteger”, “wrappingInteger”, “stickyOverflowInteger”, “autoUpgradeToBignumInteger” …
    (and then make rangeCheckedInteger the default “int”)

    The disadvantage being that it’s probably utterly impossible to encode this in classes; these would really have to be primitive types compiled down to different machine code.

  7. When we ask how integers should work in the context of C, it’s worth noting that 40 years ago that would have been regarded as a question about hardware more than one about a programming language. C’s integer behavior was precisely that of the underlying hardware, specifically the PDP-11, right down to ++ and --. This was undeniably a pragmatically good choice. We still want, it seems to me, the efficiency that follows from C being a thin veneer over the hardware and not entirely an abstraction.

  8. A single integer model can never work in all situations. Sometimes you want wrap-around, sometimes you want saturation, and sometimes any overflow is an error. For unsigned arithmetic, all hardware can easily provide wraparound, so that is what the C standard requires. For signed types, such a requirement would have been impractical when C was first invented due to the variety of signed integer representations used in hardware at the time.

    Modern systems all use two’s complement representation, so insisting on wrapping would be practically possible for that reason. Situations where wrapping signed arithmetic makes any sense are, however, very rare, and leaving it unspecified allows a few additional compiler optimisations. It also means a trapping implementation is conforming, which could be used as an additional safeguard in some situations.

    Having saturation as default behaviour would be a bad idea, aside from lacking hardware support, since it would actually make the effects of many accidental overflows worse. For example, if saturation happens, A+B-B is no longer equal to A as it would be with silent wraparound; a short demonstration follows this comment.

    All things considered, it is very hard to come up with a model better than the one currently used without seriously hampering some use cases. The best we can hope for is better language features to pick the desired behaviour in each situation.
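
    To make the A+B-B point concrete (a small sketch; wrap_add and sat_add are illustrative helpers, not standard functions):

    #include <climits>
    #include <cstdio>

    // Two's complement wrap via unsigned arithmetic (the final conversion is
    // modular in C++20 and two's complement in practice before that).
    int wrap_add(int a, int b) {
      return static_cast<int>(static_cast<unsigned>(a) + static_cast<unsigned>(b));
    }

    // Clamp to [INT_MIN, INT_MAX]; assumes long long is wider than int.
    int sat_add(int a, int b) {
      long long r = static_cast<long long>(a) + b;
      return r > INT_MAX ? INT_MAX : r < INT_MIN ? INT_MIN : static_cast<int>(r);
    }

    int main() {
      int A = 2000000000, B = 1000000000;
      // Wraparound is reversible: prints 2000000000, i.e. A again.
      std::printf("wrap: %d\n", wrap_add(wrap_add(A, B), -B));
      // Saturation is not: the clamped intermediate loses information.
      // Prints 1147483647.
      std::printf("sat:  %d\n", sat_add(sat_add(A, B), -B));
    }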

  9. My bet would be: use bignums, plus good dynamic prediction techniques to get good performance in the usual case (similar to what today’s OOP JITs do for dynamic dispatch), plus good static analysis to remove even the profiling overhead of dynamic prediction when it can be proved unnecessary, plus an optional failure mode (I would just abort, but if you say overflow is often the right choice, why not overflow) as an escape hatch to ask for when the static proof doesn’t go well but you still can’t afford bignums.

    The order is important: safe and powerful by default for everyone, with good performance in the general case; static analysis to rule out allocation and guarantee performance in the low-level case (at the price of helping the prover do its work); and an escape hatch because proving doesn’t always work (or the price is too high).

    For this to work well you need a very good interface between the programmer, the language, and the prover, a bit like compiler pragmas in some languages. The programmer needs to be able to ensure, in some places, that the static optimization always fires (or else be warned at compile time), needs to work with the prover in the delicate cases, etc. I don’t think this is quite ready yet, unlike JITs, which are getting rather mature and well understood.

  10. Any system that relies on dynamic profiling and/or compilation is a non-starter on small embedded systems like microcontrollers. This leaves us right back where we started, using C and relying on its platform-dependent semantics. Let the smalltalkers and lispers have their arbitrary-sized, heap-allocated bignums; systems programs need to work with small integers and simple arithmetic, without a giant runtime system.

  11. ITU audio codecs are defined to use saturating arithmetic, and many DSP architectures implement it for that reason. I find it a bit of a pain, though: since it is not associative, it disallows many common optimisation techniques.

    One curiosity is that in C++, one can define an integer type which, although static, doesn’t overflow. It would have operators like this:

    x_int operator+(const x_int& a, const x_int& b);
    x_int operator*(const x_int& a, const x_int& b);

    Admittedly, this is not so useful in practice. I used it when defining a hardware function, and it was irritating to have to keep lopping the integers down to size.
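
    A sketch of what such a type can look like once the width is carried in the type, so that every operator returns a result wide enough that it cannot overflow (x_int and the width rules here are illustrative):

    #include <cstdint>
    #include <cstdio>

    // Width is part of the type; each operator returns a type wide enough to
    // hold any possible result, so the arithmetic itself never overflows.
    template <int Bits>
    struct x_int {
      static_assert(Bits > 0 && Bits <= 63, "toy version models at most 63 bits");
      int64_t v;
    };

    // addition needs one extra bit; multiplication needs the sum of the widths
    template <int A, int B>
    x_int<(A > B ? A : B) + 1> operator+(const x_int<A>& a, const x_int<B>& b) {
      return { a.v + b.v };
    }
    template <int A, int B>
    x_int<A + B> operator*(const x_int<A>& a, const x_int<B>& b) {
      return { a.v * b.v };
    }

    int main() {
      x_int<16> a{30000}, b{20000};
      auto s = a + b;   // x_int<17>
      auto p = a * b;   // x_int<32>
      std::printf("%lld %lld\n",
                  static_cast<long long>(s.v), static_cast<long long>(p.v));
      // The "lopping down to size" mentioned above would be an explicit,
      // checked narrowing back to a smaller x_int.
    }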