Undefined Behavior: Not Just for Programming Languages

This is an oldie but goodie. Start with this premise:
a = b
Multiply both sides by a:
a² = ab
Subtract b² from both sides:
a² – b² = ab – b²
Factor the left side:
(a + b)(a – b) = ab – b²
Factor the right side:
(a + b)(a – b) = b(a – b)
Divide both sides by (a – b) and cancel:
a + b = b
Substitute b for a:
b + b = b
Finally, let b = 1 and simplify:
2 = 1

I ran into this derivation when I was nine or ten years old and it made me deeply uneasy. The explanation, that you’re not allowed to divide by (a – b) because this term is equal to zero, seemed to raise more questions than it answered. How are we supposed to keep track of which terms are equal to zero? What if something is equal to zero but we don’t know it yet? What other little traps are lying out there, waiting to invalidate a derivation? This was one of many times where I noticed that in school they seemed willing to teach the easy version, and that the real world was never so nice, even in a subject like math where — you would think — everything is clean and precise.

Anyway, the point is that undefined behavior has been confusing people for well over a thousand years — we shouldn’t feel too bad that we haven’t gotten it right in programming languages yet.

Vigorous Public Debates in Academic Computer Science

The other day a non-CS friend remarked to me that since computer science is a quantitative, technical discipline, most issues probably have an obvious objective truth. Of course this is not at all the case, and it is not uncommon to find major disagreements even when all parties are apparently reasonable and acting in good faith. Sometimes these disagreements spill over into the public space.

The purpose of this post is to list a collection of public debates in academic computer science where there is genuine and heartfelt disagreement among intelligent and accomplished researchers. I sometimes assign these as reading in class: they are a valuable resource for a couple of reasons. First, they show an important part of science that often gets swept under the rug. Second, they put discussions out into the open where they are widely accessible. In contrast, I’ve heard of papers that are known to be worthless by all of the experts in the area, but only privately — and this private knowledge is of no help to outsiders who might be led astray by the bad research. For whatever reasons (see this tweet by Brendan Dolan-Gavitt) the culture in CS does not seem to encourage retracting papers.

I’d like to fill any holes in this list, so please leave a comment if you know of a debate that I’ve left out!

Here are some more debates pointed out by readers:

Advanced Compilers Weeks 3-5

This continues a previous post.

We went through the lattice theory and introduction to dataflow analysis parts of SPA. I consider this extremely good and important material, but I’m afraid that the students looked pretty bored. It may be the case that this material is best approached by first looking at practical aspects and only later going into the theory.

One part of SPA that I’m not super happy with is the material about combining lattices (section 4.3). This is a useful and practical topic, but the use cases aren’t really discussed. In class we went through some examples, such as this function that cannot be optimized by either constant propagation or dead code elimination alone, but can be optimized by their reduced product: conditional constant propagation. This transformation, as you can see, is implemented by both LLVM and GCC. Also, this example cannot be optimized by either sign analysis or parity analysis, but can be optimized using their reduced product.
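
The function from class isn’t reproduced here, but a minimal sketch of the same chicken-and-egg situation looks something like this:

/* A sketch (not the function from class) of code that defeats constant
   propagation and dead code elimination when they run separately.
   Constant propagation alone merges the values 0 and 1 for x at the top
   of the loop, because it pessimistically assumes the branch body might
   execute; dead code elimination alone cannot delete the branch body,
   because it cannot tell that x != 0 is always false.  Conditional
   constant propagation assumes blocks are unreachable until proven
   otherwise, so it keeps x at 0, never marks the branch body reachable,
   and can replace the return value with the constant 0. */
int f(int n) {
  int x = 0;
  for (int i = 0; i < n; i++) {
    if (x != 0)
      x = 1;
  }
  return x;
}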

We didn’t go into them, but I pointed the class to the foundational papers for dataflow analysis and abstract interpretation.

I gave an assignment to implement subtract and bitwise-and transfer functions for the interval abstract domain for signed 5-bit integers. The bitwidth is small so I can rapidly do exhaustive testing of students’ code. Their subtract had to be correct and maximally precise — about half of the class accomplished this. Their bitwise-and had to be correct and more precise than always returning top, and about half of the class accomplished this as well (a maximally precise bitwise-and operator for intervals is not at all easy — try it!). Since not everyone got the code right, I had them fix bugs (if any) and resubmit their code for this week. I hope everyone will get it right this time! Also, I will give prizes to students whose bitwise-and operator is on the Pareto frontier (out of all submitted solutions) for throughput vs precision and code size vs precision. Here are the results with the Pareto frontier in blue and the minimum and maximum precision in red (narrower intervals are better).
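
To make the assignment concrete, here is a sketch in C of a subtract transfer function for this domain. It assumes the 5-bit values wrap around on overflow (two’s complement); the names and representation are mine, not the assignment’s.

/* Interval domain for signed 5-bit integers: [lo, hi] with lo <= hi and
   both endpoints in [-16, 15].  This sketch assumes values wrap around
   (two's complement) on overflow. */
#define MIN5 (-16)
#define MAX5 15

struct interval { int lo, hi; };

static const struct interval TOP5 = { MIN5, MAX5 };

/* Reduce an unbounded integer into the 5-bit signed range. */
static int wrap5(int x) {
  return ((x - MIN5) % 32 + 32) % 32 + MIN5;
}

/* Transfer function for wrapping 5-bit subtraction. */
struct interval interval_sub(struct interval a, struct interval b) {
  int lo = a.lo - b.hi;   /* exact result range, computed without wrapping */
  int hi = a.hi - b.lo;
  if (hi - lo >= 32)      /* the range covers every 5-bit value */
    return TOP5;
  int wlo = wrap5(lo), whi = wrap5(hi);
  if (wlo <= whi)         /* the wrapped range is still one contiguous interval */
    return (struct interval){ wlo, whi };
  /* The range straddles the wrap-around point, so the result set contains
     both MIN5 and MAX5; the only interval covering it is top. */
  return TOP5;
}

With only 32 values in the domain, checking a function like this against a brute-force reference that enumerates every concrete pair of operands takes no time at all, which is exactly why the small bitwidth is so convenient.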

Impressively, student k implemented an optimally precise bitwise-and transfer function! Student c’s transfer function returned an answer other than top only for intervals of width 1. Mine (labeled JOHN) looked at the number of leading zeroes in both operands.
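
For contrast, here is the sort of simple bitwise-and transfer function that clears the bar of “correct, and more precise than always returning top” while remaining far from maximally precise. It is a baseline sketch, not the leading-zeroes version mentioned above, and it reuses the interval type and TOP5 from the subtraction sketch:

/* If an operand is known non-negative, its sign bit is 0, so the AND
   result is non-negative and cannot exceed that operand's upper bound. */
struct interval interval_and(struct interval a, struct interval b) {
  if (a.lo >= 0 && b.lo >= 0) {
    int hi = a.hi < b.hi ? a.hi : b.hi;   /* x & y <= min(x, y) for x, y >= 0 */
    return (struct interval){ 0, hi };
  }
  if (a.lo >= 0)
    return (struct interval){ 0, a.hi };
  if (b.lo >= 0)
    return (struct interval){ 0, b.hi };
  return TOP5;   /* both operands may be negative: give up */
}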

We looked at the LLVM implementation of the bitwise domain (“known bits”, they call it) which lives in ValueTracking.cpp. This analysis doesn’t have a real fixpoint computation; rather, it simply walks up the dataflow graph in a recursive fashion, which is a bit confusing since it is a forward dataflow analysis that looks at nodes in the backward direction. The traversal stops at depth 6, and isn’t cached, so the code is really very easy to understand.
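
To make the domain concrete, here is a sketch of a known-bits value and its transfer function for bitwise AND. The struct and names are mine, but the AND rule itself is the standard one (and essentially what LLVM computes):

#include <stdint.h>

/* Each bit position is known-0, known-1, or unknown.  "zero" has a bit
   set when that bit is known to be 0; "one" has a bit set when it is
   known to be 1; a bit set in neither mask is unknown. */
struct known_bits {
  uint32_t zero;
  uint32_t one;
};

/* AND transfer: a result bit is known 0 if it is known 0 in either input,
   and known 1 only if it is known 1 in both inputs. */
struct known_bits kb_and(struct known_bits a, struct known_bits b) {
  struct known_bits r;
  r.zero = a.zero | b.zero;
  r.one  = a.one  & b.one;
  return r;
}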

We started to look at how LLVM works, I went partway through some lecture notes by David Chisnall. We didn’t focus on the LLVM implementation yet, but rather looked at the design, with a bit of focus on SSA, which is worth spending some time on since it forms the foundation for most modern compilers. I had the students read the first couple of chapters of this drafty SSA book.
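
For students who haven’t seen SSA before, the essential transformation fits in a few lines. A sketch (the names are mine):

/* Before conversion: x is assigned on both sides of the branch, so a use
   of x after the if does not name a unique definition. */
int g(int c) {
  int x;
  if (c)
    x = 1;
  else
    x = 2;
  return x + 1;
}

/* In SSA form every definition gets a fresh name, and a phi node at the
   join point selects the value corresponding to the edge that was taken:

     entry:  br c, then, else
     then:   x1 = 1;  br join
     else:   x2 = 2;  br join
     join:   x3 = phi [x1, then], [x2, else]
             return x3 + 1

   Every use now refers to exactly one definition, which is what makes
   analyses like the ones above much simpler to state. */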

Something I’d appreciate feedback on is what (besides SSA) have been the major developments in ahead-of-time compiler technology over the last 25 years or so. Loop optimizations and vectorization have seen major advances of course, as have verified compilers. In this class I want to steer clear of PL-level innovations.

Finally, former Utah undergrad and current Googler Chad Brubaker visited the class and gave a guest lecture on UBSan in production Android: very cool stuff! Hopefully this motivated the class to care about using static analysis to remove integer overflow checks, since they will be doing assignments on that very topic in the future.

Teaching C

The other day Neel Krishnaswami mentioned that he’s going to be teaching the C class at Cambridge in the fall, and asked if I had any advice about that topic. Of course I do! In fact the response got so long that it ended up being a blog post.

My main idea is that we need to teach C in a way that helps students understand why a very large fraction of critical software infrastructure, including almost all operating systems and embedded systems, is written in C, while also acknowledging the disastrously central role that it has played in our ongoing computer security nightmare.

There’s a lot of reading material out there. For the basics, I still recommend that students purchase K&R. People say good things about C Programming: A Modern Approach; I’ve only skimmed it. For advanced C I’ve not read a better book than Expert C Programming, though like K&R it is fairly old. The Practice of Programming is a really great book though it’s not completely specific to C. I haven’t read all of it but from what I’ve seen Modern C is a very good resource, with AFAIK the best treatment of undefined behavior of any C book. The C FAQ contains lots of good material.

For supplemental reading, of course the students need to look at all three parts of Chris Lattner’s writeup about undefined behavior, and mine as well.

What version of C should we teach? Probably a common subset of C99 and C11. In a first C class there’s no need to go into advanced C11 features such as the concurrent memory model.

We’d like students to be able to answer the question: Is C an appropriate choice for solving this problem? We’ll want some lecture material about C’s place in the modern world and we also need to spend time reading some high-quality C code, perhaps starting with Redis, Musl, or Xv6. Musl, in particular, is a good match for teaching since it contains lots of cute little functions that can be understood in isolation. From any such function we can launch a discussion about tradeoffs between portability, efficiency, maintainability, testability, etc. If Rich Felker (the Musl author) did something a certain way, there’s probably a good reason for it and we should be able to puzzle it out. We can also use Matt Godbolt’s super awesome compiler explorer to look at the code generated by various compilers. C’s lightweight-to-nonexistent runtime support is one of its key advantages for real-world system building, and it also means that generated code can be understood without thinking about something like a garbage collector.

We probably also need to spend a bit of time looking at bad old C, the kind that makes the world work even though we’re not proud of it. We can find files in OpenSSL and in the PHP interpreter that would singe your brain despite getting run billions of times a day, or we can always pick on an old standby like glibc — worth looking at just for the preprocessor abuse. But perhaps I am being uncharitable: Pascal Cuoq (reading a draft of this piece) correctly points out that “even what seems like plain stupidity often stems from engineering trade-offs. Does the project try to remain compilable under MS-DOS with DJGPP, with C90 compilers, under VMS, or all three at the same time?” And it is true that we would do well to help students understand that real-world engineering constraints do not often resemble the circumstances that we lead them to expect in school.

The second big thing each student should learn is: How can I avoid being burned by C’s numerous and severe shortcomings? This is a development environment where only the paranoid can survive; we want to emphasize a modern C programming style and heavy reliance on the (thankfully excellent) collection of tools that is available for helping us develop good C.

Static analysis is the first line of defense; the students need to use a good selection of -W flags and then get used to making things compile without warnings. A stronger tool such as the Clang static analyzer should also be used. On the dynamic side, all code handed in by students must be clean as far as ASan, UBSan, and MSan are concerned. tis-interpreter holds code to an even higher standard; I haven’t had students use this tool yet but I think it’s a great thing to try. Since dynamic testing is limited by the quality of the test cases, the students need to get used to using the output of a code coverage tool to find gaps in test coverage. Lots of coverage tools for C are available but I usually just use gcov since it is ubiquitous and hassle-free.

Teaching undefined behavior using sanitizers is a piece of cake: the tool gives students exactly the feedback that they need. The other way of teaching undefined behavior, by looking at its consequences, is something that we should spend a bit of time on, but it requires a different kind of thinking and we probably won’t expect the majority of students to pick up on all the subtleties — even seasoned professional C programmers are often unaware of these.
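
As a tiny example of the kind of feedback I mean, a program like this usually just prints a wrapped value when compiled normally, but building it with UBSan (for example, clang -fsanitize=undefined) produces a runtime report pointing at the overflowing addition:

#include <limits.h>
#include <stdio.h>

int main(void) {
  int x = INT_MAX;
  int y = x + 1;            /* signed overflow: undefined behavior */
  printf("%d\n", y);
  return 0;
}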

Detecting errors and doing something about them is a really important part of programming that we typically don’t teach much about in school. Since C is designed to avoid sweeping these problems under the rug, a C class is a great place to get students started on the right track. They should have to implement a goto chain.
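
For readers who haven’t run into it, a goto chain is the standard C idiom for backing out of partially completed initialization when a later step fails: each failure jumps to a label that releases whatever was acquired up to that point. A sketch (the function and resource choices here are made up for illustration):

#include <stdio.h>
#include <stdlib.h>

int process_file(const char *path) {
  int ret = -1;

  FILE *f = fopen(path, "r");
  if (!f)
    goto out;

  char *buf = malloc(4096);
  if (!buf)
    goto out_close;

  size_t n = fread(buf, 1, 4096, f);
  if (n == 0 && ferror(f))
    goto out_free;

  /* ... use buf[0..n-1] ... */
  ret = 0;

out_free:
  free(buf);
out_close:
  fclose(f);
out:
  return ret;
}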

Something I’m leaving out of this post is the content of the assignments that we give the students — this mostly depends on the specific goals of the course and how it fits into the broader curriculum (In what year are students expected to take the class? What kind of background do they have in math and science? What languages do they already know?). I’ve always taught C as a side effect of teaching operating systems, embedded systems, or something along those lines. In a course where the primary goal is C we have more freedom, and could look at more domains. Image processing and cryptographic algorithms would be really fun, for example, and even the old standby, data structures, can be used to good effect in class.

I’m also leaving out build systems and version control. They should use these.

In some courses I will give students access to the test infrastructure that will be used to grade their code. This makes assignments a lot more fun, and makes students a lot more happy. Other times I will give them a few test cases and keep the good tests (and the fuzzers) for myself. The idea is to make the assignment not only about implementation but also about testing. This stresses students out but it’s far more realistic.

Pascal remarks that “C is mostly taught very badly, and a student who aims at becoming good at maintaining C code will need to unlearn much that they have (typically) been told in class.” This is regrettably true — a lot of instructors learned C in previous decades and continue to teach an outdated language, for example failing to discourage preprocessor abuse. The most serious common failing is to leave students unaware of their side of the bargain when they deal with a C compiler. I am talking of course about undefined behavior (and, to a lesser extent, unspecified and implementation-defined behavior). As a concrete example, I have taught numerous classes based on Computer Systems: A Programmer’s Perspective. In most respects this is an excellent book, but (even in the 3rd edition) it not only ignores undefined behavior but, worse, explicitly teaches students that signed integers in C have two’s complement behavior on overflow.

This claim that positive signed overflow wraps around is neither correct by the C standard nor consistent with the observed behavior of either GCC or LLVM. This isn’t an acceptable claim to make in a popular C-based textbook published in 2015. While I can patch problems in the book during lecture, that isn’t very satisfying, and not all instructors have the time and expertise.
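
A one-line example of the mismatch: because a compiler may assume that signed overflow never happens, both GCC and Clang at -O2 will typically compile this function to return 1 unconditionally, whereas a student reasoning in terms of wraparound would expect it to return 0 when x is INT_MAX:

int check(int x) {
  return x + 1 > x;   /* "always true" -- unless you expect wraparound */
}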

One might argue that we shouldn’t be teaching C any longer, and I would certainly agree that C is probably a poor first or second language. On the other hand, even if we were in a position where no new projects should be written in C (that day is coming, but slowly — probably at least a decade off), we’re still going to be stuck maintaining C for many decades. A random CS graduate has pretty good odds of running into C during her career. But beyond that, even after we replace C, the systems programming niche will remain. A lot of what we learn when we think we’re learning C is low-level programming and that stuff is important.

Thanks to Pascal Cuoq and Robby Findler for commenting on drafts of this piece.

Python Exercises for Kids

For the last year or so I’ve been giving Python exercises to my 11 year old. I thought I’d share some of them. If any of you have been doing similar things, I’d love to hear what worked for you. I think it is helpful that I’m not much of a Python programmer; this forces him to read the documentation. The other day he said “Wow – Stack Overflow is great!”


Fibonacci Series

Print the Fibonacci series; this is useful for teaching the basics of looping.

Number Guessing Game

The user thinks of a number between 1 and 100. The computer tries to guess it based on feedback about whether the previous guess was too high or low. This one was great for learning about the kinds of off-by-one errors that one customarily runs into while implementing a binary search.

Binary Printer

Print the non-negative binary integers in increasing order. This one forces some representation choices.

Palindrome Recognizer

Recognizing “A man, a plan, a canal — Panama!” as a palindrome requires some string manipulation.

Door Code Recognizer

Our Paris apartment building will let you in if you enter the correct five-digit sequence regardless of how many incorrect digits you previously entered. I thought replicating this behavior in Python would be slightly tricky for the kid but he saw that a FIFO could easily be created from a list.

Monte Carlo Pi Estimator

If you generate random points in the square between -1,-1 and 1,1, the fraction of points that lie inside the unit circle will approach pi/4. The kid got a kick out of seeing this happen. Of course I had to explain the derivation of pi/4 and the formula for a circle since he hasn’t seen this stuff in school yet. Also of course we could have simply worked in quadrant one but that seemed to make the explanations more complicated.
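
For reference, the pi/4 comes from a ratio of areas. The points are uniform over the square, so

fraction of points inside the circle ≈ (area of unit circle) / (area of square) = (pi × 1²) / (2 × 2) = pi/4

and multiplying the observed fraction by 4 gives the estimate of pi.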

I should perhaps add that I explained this exercise to my kid very differently than I’m explaining it here. We started by drawing a box containing a circle and then pretended to throw darts into the box. The concepts are not hard: all we need to understand is random sampling and the area of a circle. Math gets made to seem a lot harder than it is, in many cases, and also we tend to conflate the difficulty with the order in which the concepts are presented in the school system.

Turtle Graphics Interpreter

The current project is implementing a turtle that takes four commands:

  • forward n
  • right-turn n
  • left-turn n
  • color c

So far we’ve been restricting angles to 0, 90, 180, and 270 (and no radians yet…), but I don’t think sine and cosine will be hard concepts to explain in this context. Also, so far the turtle commands are just a series of function calls; the parser will come later. I wonder what would be the simplest loop construct for this turtle language? Perhaps “repeat n” followed by some sort of block construct.

Secret Coders

Although I’m not sure that I’ve mentioned it here before, I’m a pretty big comic book nerd. So I was psyched when, late last year, Gene Luen Yang mailed me asking if I’d like a review copy of his upcoming graphic novel. I love Gene’s Avatar comics, which I had been reading with my kids, as well as his earlier American Born Chinese. What I hadn’t known is that Gene is a former computer engineer and high school computer science teacher.

Gene’s mail described Secret Coders — which comes out at the end of September 2015 — as sort of a Harry Potter but with programming instead of magic. It stars plucky Hopper Gracie who has to adjust to life at her strange new middle school while also, with her new buddy Eni, unraveling its many mysteries, some of which might involve her missing father. Along the way, the reader gets a clever introduction to binary numbers and Logo programming, the latter via a slightly mysterious robotic turtle character that obeys verbal commands. You can read an excerpt here. The book’s web site also has a 5-minute video introducing Logo, which makes it feel like this series might end up being as much of a MOOC as a graphic novel.

My 10 year old read Secret Coders almost as soon as it arrived. I should mention that he has read more of Gene’s books than I have (perhaps using the school library? I’m not totally sure) and has loved all of them. The kid is also a decent Scratch programmer and something of a novice at Python. He gave the book high ratings both for plot and for CS content. My eight year old hasn’t read the book yet; if he does, I’ll update this post with additional findings. Anyway, at $5.50 (Amazon price) this book is a great deal and I’d recommend it to folks who are hoping to get a young person interested in programming.

Update from Sep 17: The 8 year old is reading Secret Coders now and loving it. Too early to tell if he’ll pick up the programming content.

Too Much Milk: The Secret Origin Story

When I first taught operating systems 12 years ago, I based my teaching materials on a set of slides inherited from John Carter, the previous instructor at Utah. I was generally happy with these slides, and I’ve continued to evolve them since then, but one thing I was always curious about was the “too much milk” example that is used to introduce concurrency problems. It goes something like: “You see that you’re out of milk and leave to go buy some. A few minutes later your roommate comes home and sees that there’s no milk. Later: OH NO THERE’S TOO MUCH MILK because of a synchronization error between humans.” This always struck me as a bit of an idiosyncratic way to go about introducing synchronization but I didn’t have any reason to replace it and didn’t think about it much until one time while procrastinating on lecture prep I Googled “too much milk” and learned the shocking truth: some large fraction of operating systems courses at research universities use this same example. See this for yourself.

The intriguing possibility is that too much milk could be used as sort of a mitochondrial DNA to trace the evolution of OS course notes. I started to envision creating a tool for analyzing the lineage of Powerpoint files, but then totally forgot about it until a few weeks ago. While chatting with Ryan Stutsman about teaching OS, for some reason I mentioned the Too Much Milk phenomenon. He thought that this had probably originated with his advisor John Ousterhout. John then explained:

I believe that the “too much milk” example owes its popularity to me, in that I have been using it ever since the early 1980s in my courses. For many years in the 1980s and 1990s, whenever a PhD student graduated in the systems area, Dave Patterson gave them a complete set of lecture videos for a couple of systems classes, including my CS 162 lectures. As a result, a lot of the material from my CS 162 lectures has spread all across the country through Berkeley graduates, including the “too much milk” example.

However, I did not invent this example. Before I taught my first operating systems course at Berkeley, Mike Powell gave me a copy of his notes and the “too much milk” example was there. So, it goes back at least to Mike Powell. I don’t know if he invented it, or if he got it from someone else.

So there you have it.

This post doesn’t have too much of a point, but I thought this was a nice story and also it provides a bit of insight into how teaching actually works: figuring out how to teach course material is hard and in practice, most of the time we borrow a framework and adapt it to fit our needs. Every now and then I’ll discuss this in class and invariably one or two undergraduates are so surprised that I’ll get dinged on the course evaluations with comments like “Lazy professor doesn’t even develop his own course materials.” However, I’ve done three or four courses the other way — starting from scratch — and have found that it takes multiple iterations before the material converges, so it’s not really clear to me that it’s better to start from scratch in cases where a good set of existing material can be found.

Heartbleed and Static Analysis

Today in my Writing Solid Code class we went through some of the 151 defects that Coverity Scan reports for OpenSSL. I can’t link to these results but take my word for it that they are a pleasure to read — the interface clearly explains each flaw and the reasoning that leads up to it, even across multiple function calls. Some of the problems were slightly alarming but we didn’t see anything that looks like the next heartbleed. Unfortunately, Coverity did not find the heartbleed bug itself. (UPDATE: See this followup post.) This is puzzling; here’s a bit of speculation about what might be going on. There are basically two ways to find heartbleed using static analysis (here I’ll assume you’re familiar with the bug; if not, this post is useful). First, a taint analysis should be able to observe that two bytes from an untrusted source find their way into the length argument of a memcpy() call. This is clearly undesirable. The Coverity documentation indicates that it taints the buffer stored to by a read() system call (unfortunately you will need to login to Coverity Scan before you can see this). So why don’t we get a defect report? One guess is that since the data buffer is behind two pointer dereferences (the access is via s->s3->rrec.data), the scanner’s alias analysis loses track of what is going on. Another guess is that two bytes of tainted data are not enough to trigger an alarm. Only someone familiar with the Coverity implementation can say for sure what is going on — the tool is highly sophisticated not just in its static analysis but also in its defect ranking system.

The other kind of static analysis that would find heartbleed is one that insists that the length argument to any memcpy() call does not exceed the size of either the source or destination buffer. Frama-C in value analysis mode is a tool that can do this. It is sound, meaning that it will not stop complaining until it can prove that certain defect classes are not present, and as such it requires far more handholding than does Coverity, which is designed to unsoundly analyze huge quantities of code. To use Frama-C, we would make sure that its own header files are included instead of the system headers. In one of those files we would find a model for memcpy():

/*@ requires \valid(((char*)dest)+(0..n - 1));
  @ requires \valid_read(((char*)src)+(0..n - 1));
  @ requires \separated(((char *)dest)+(0..n-1),((char *)src)+(0..n-1));
  @ assigns ((char*)dest)[0..n - 1] \from ((char*)src)[0..n-1];
  @ assigns \result \from dest;
  @ ensures memcmp((char*)dest,(char*)src,n) == 0;
  @ ensures \result == dest;
  @*/
extern void *memcpy(void *restrict dest,
                    const void *restrict src, 
                    size_t n);

The comments are consumed by Frama-C. Basically they say that src and dest are pointers to valid storage of at least the required size, that the buffers do not overlap (recall that memcpy() has undefined behavior when called with overlapping regions), that it moves data in the proper direction, that the return value is dest, and that a subsequent memcmp() of the two regions will return zero.

The Frama-C value analyzer tracks an integer variable using an interval: a representation of the smallest and largest value that the integer could contain at some program point. Upon reaching the problematic memcpy() call in t1_lib.c, the value of payload is in the interval [0..65535]. This interval comes from the n2s() macro which turns two arbitrary-valued bytes from the client into an unsigned int:

#define n2s(c,s)        ((s=(((unsigned int)(c[0]))<< 8)| \
                            (((unsigned int)(c[1]))    )),c+=2)

The dest argument of this memcpy() turns out to be large enough. However, the source buffer is way too small. Frama-C would gripe about this and it would not shut up until the bug was really fixed.

How much effort would be required to push OpenSSL through Frama-C? I don’t know, but wouldn’t plan on getting this done in a week or three. Interestingly, a company spun off by Frama-C developers has recently used the tool to create a version of PolarSSL that they promise is immune to CWEs 119, 120, 121, 122, 123, 124, 125, 126, 127, 369, 415, 416, 457, 562, and 690. I think it would be reasonable for the open source security community to start thinking more seriously about what this kind of tool can do for us.


  • In a comment below, Masklinn states that OpenSSL’s custom allocators would defeat the detection of the too-large argument to memcpy(). This is indeed a danger. To avoid it, as part of applying Frama-C to OpenSSL, the custom malloc/free functions would be marked as being malloc-like using the “allocates” and “frees” keywords supported by ACSL 1.8. Coverity lets you do the same thing, and so would any competent analyzer for C. Custom allocators are regrettably common in oldish C code.
  • I’m interested to see what other tools have to say about heartbleed. If you have a link to results, please put it in a comment.
  • I re-ran Coverity after disabling OpenSSL’s custom freelist and also hacking CRYPTO_malloc() and friends to just directly call the obvious function from the malloc family. This caused Coverity to report 173 new defects: mostly use-after-free and resource leaks. Heartbleed wasn’t in the list, however, so I stand by my guess (above) that perhaps something related to indirection caused this defect to not be ranked highly enough to be reported.
  • HN has some discussion of PolarSSL and of this blog post. Also Reddit.


I’m teaching a small Advanced Operating Systems course this spring. Preparing for the course over winter break, I spent some time reading various Linux subsystems such as the scheduler, and was a bit shocked at how complex it has become. I’ve been using Linux, looking at its code, and occasionally hacking it for more than 20 years, and it seems that my impression of it as a fairly simple, easy-to-follow kernel has become badly out of date. It’s not that this glorious 7991-line file is impossible to understand, but rather that it — along with the other 16,000 lines of code in the kernel/sched directory — isn’t obviously the correct thing to inflict on some undergrads who are interested enough in operating systems to take a second course.

While looking around for alternatives I tried out Xv6, which I had known about for a while but hadn’t looked at closely. Xv6 is a rewrite of v6 UNIX in modern C that runs on multicore x86 chips. It compiles in a couple of seconds and is trivial to boot up in QEMU. It took me a while to see the genius of Xv6, which is that it is simpler than I would have thought a working multicore OS with shell and filesystem could be. For example, it lacks wait queues and ready queues — in Xv6, both wakeup and scheduling are accomplished by looping over the all-process table. Similarly, there’s no malloc() in the kernel, but rather just a page allocator. The pipe implementation copies one byte at a time. Amazingly, even the bootloader is a pleasure to read. Another nice thing about Xv6 is that it comes with a short textbook that explains OS concepts in terms of their implementations in Xv6.

So what did we do with Xv6 besides read it? One exercise was to speed up its pipe implementation as much as possible while preserving UNIX semantics. This turned out to be a really nice exercise (I thought). It is fairly easy to get within a factor of two of Linux pipe throughput on a two-core VM. We implemented a ring buffer for zero-copy bulk data transfer between processes. Finally, we added priorities and ready queues.
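
For anyone who wants to try the pipe exercise, the heart of the speedup is replacing the one-byte-at-a-time copy with a bounded buffer that moves data in chunks. Here is a sketch of the write side in ordinary C; this is not the code from class, and it leaves out the locking and sleep/wakeup logic that makes the kernel version interesting:

#include <stddef.h>
#include <string.h>

#define RING_SIZE 4096   /* power of two keeps the index math well behaved */

struct ring {
  unsigned char buf[RING_SIZE];
  size_t head;   /* total bytes ever written */
  size_t tail;   /* total bytes ever read */
};

/* Copy up to n bytes into the ring; returns how many bytes were written.
   head - tail is the number of bytes currently buffered. */
size_t ring_write(struct ring *r, const unsigned char *src, size_t n) {
  size_t written = 0;
  while (written < n && r->head - r->tail < RING_SIZE) {
    size_t idx = r->head % RING_SIZE;
    size_t chunk = n - written;
    size_t space = RING_SIZE - (r->head - r->tail);
    size_t contig = RING_SIZE - idx;            /* room before the wrap point */
    if (chunk > space)  chunk = space;
    if (chunk > contig) chunk = contig;
    memcpy(&r->buf[idx], src + written, chunk);
    r->head += chunk;
    written += chunk;
  }
  return written;
}

/* ring_read is symmetric: copy out starting at tail, then advance tail. */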

I have no real complaints about Xv6: the code is clean and commented, the functions are small, and overall it makes a great instructional OS. The look and feel are very similar to what I remember from hacking on Minix when I took an advanced OS class around 1994. Actually I do have one small complaint, which is that in a few places liberties are taken with error checking. For example, in the allocuvm() function from vm.c, which grows a process, there is a call to mappages() that mysteriously fails to check the return value. Is there some reason that this particular call cannot fail? If so, a comment explaining the reasoning is needed. If not, error checking is required. I’m sensitive about this issue since I tell the students over and over that they cannot ignore failures in this kind of code.

Another OS that we’ve been learning from is the Windows 2000 kernel which is — surprisingly, to many people — a simple and elegant multicore OS that provides some real-world contrast to Xv6’s over-simplicity. I haven’t seen any Windows kernels later than 2000; I’d be curious to hear if they have remained so nice.

That Was Easy

I wanted to play around with the Python interface to Z3 and the classic SEND + MORE = MONEY puzzle seemed like a good way to get started. This turned out to be so easy that my code worked almost on the first try:

from z3 import *

# find the (distinct) integer value in 0..9 for each letter that makes
# this equation work:
#   SEND
# + MORE
# ------
#  MONEY

S = Int('S')
E = Int('E')
N = Int('N')
D = Int('D')
M = Int('M')
O = Int('O')
R = Int('R')
Y = Int('Y')

s = Solver()

s.add (S >= 0, S < 10, 
       E >= 0, E < 10, 
       N >= 0, N < 10, 
       D >= 0, D < 10, 
       M >= 0, M < 10, 
       O >= 0, O < 10, 
       R >= 0, R < 10, 
       Y >= 0, Y < 10)

s.add ((D + 10*N + 100*E + 1000*S + 
        E + 10*R + 100*O + 1000*M) == 
       (Y + 10*E + 100*N + 1000*O + 10000*M))

s.add (S != E, S != N, S != D, S != M, S != O, S != R, S != Y, 
       E != N, E != D, E != M, E != O, E != R, E != Y, 
       N != D, N != M, N != O, N != R, N != Y, 
       D != M, D != O, D != R, D != Y, 
       M != O, M != R, M != Y, 
       O != R, O != Y, 
       R != Y)

print s.check()
m = s.model()
for d in m.decls():
    print "%s = %s" % (d.name(), m[d])


$ python ./send_more_money.py
D = 7
S = 9
N = 6
O = 0
R = 8
E = 5
M = 1
Y = 2

Is there a non-quadratic way to express the uniqueness constraint? That would be nice. Anyhow, it’s really a pleasure when research software is this easy to use.

UPDATE: There is a better way! See comment #1 below. Thanks Taylor.