Teaching Python Informally to Kids

For the last few months I’ve been running a “coding club” for my son’s sixth-grade class. Once a week, the interested students (about 2/3 of the class) stick around for an hour after school and I help them learn to program. The structure is basically like the lab part of a programming class, without the lecture. I originally had planned to do some lecturing but it soon became clear that at the end of a full day of school (these kids get on the bus around 7:00am, ugh) there was very little attention span remaining. So instead I decided to give them a sheet of problems to work on each week and I spend the hour walking around helping whoever needs help.

One of my main goals is to avoid making anyone hate programming. There are three parts to this. First, I’m not forcing them to do anything, but rather providing suggestions and guidance. Second, there’s no assessment going on. I’ve never been too comfortable with the grading part of teaching, actually. It can really get in the way of learning. Third, I’m encouraging them to work together. I’ve long noticed (starting when I was a CS student) that most learning occurs between students in the lab. Not as much learning happens in the lecture hall.

For a curriculum, we started out with turtle graphics, which I’ve always found to be wonderful. Python’s turtle library is clunkier than Logo and also is abysmally slow, but the advantages (visual debugging, many kids enjoy graphics) outweigh the problems. Fractals (Sierpinski triangle, dragon curve) were a good way to introduce recursion. We spent three or four weeks doing turtle stuff before moving on to simple crypto, which didn’t work as well. On the other hand, the students did really well with mathy code; here’s the handout from one of the math weeks.
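
To give a flavor of the fractal exercises, here is a minimal Sierpinski triangle in Python's turtle module; this is a sketch written for this post, not one of the actual handouts:

import turtle

def sierpinski(t, length, depth):
    if depth == 0:
        # Base case: an ordinary triangle.
        for _ in range(3):
            t.forward(length)
            t.left(120)
    else:
        # Three half-size triangles, one at each corner of this one.
        for _ in range(3):
            sierpinski(t, length / 2, depth - 1)
            t.forward(length)
            t.left(120)

t = turtle.Turtle()
t.speed(0)  # Python's turtle is slow; use the fastest setting
sierpinski(t, 200, 4)
turtle.done()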

Some observations:

  • One of my favorite moments so far has been when a couple of students implemented rot13 functions and used them to send rude messages to each other (a minimal rot13 sketch appears just after this list).
  • It’s pretty important to take a snack break.
  • Although the rockstar / 10x programmer idea is out of favor, it is absolutely clear that there is a huge amount of variation (much more than 10x) in how easily a group of twenty 10- and 11-year-olds learns programming. Some kids are teaching themselves to open files and parse text while others are stuck on basic syntax issues.
  • Syntax, which computer professionals more or less learn to overlook, is hugely important.
  • I’ve always disliked Python’s significant whitespace and watching kids struggle with it has made me hate it even more.
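
For reference, here is the sort of rot13 function I have in mind; this is a minimal sketch, and the students' actual code surely differed:

def rot13(text):
    result = ""
    for ch in text:
        if ch.isalpha():
            # Rotate 13 places within the alphabet, preserving case.
            base = ord('a') if ch.islower() else ord('A')
            result += chr((ord(ch) - base + 13) % 26 + base)
        else:
            result += ch  # leave spaces and punctuation alone
    return result

print(rot13("Hello!"))         # Uryyb!
print(rot13(rot13("Hello!")))  # rot13 is its own inverse: Hello!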

Anyhow, this has been a fun teaching exercise and hopefully it is benefiting the kids.

A Tourist’s Guide to the LLVM Source Code

In my Advanced Compilers course last fall we spent some time poking around in the LLVM source tree. A million lines of C++ is pretty daunting but I found this to be an interesting exercise and at least some of the students agreed, so I thought I’d try to write up something similar. We’ll be using LLVM 3.9, but the layout isn’t that different for previous (and probably subsequent) releases.

I don’t want to spend too much time on LLVM background but here are a few things to keep in mind:

  • The LLVM core doesn’t contain frontends, only the “middle end” optimizers, a pile of backends, documentation, and a lot of auxiliary code. Frontends such as Clang live in separate projects.
  • The core LLVM representation lives in RAM and is manipulated using a large C++ API. This representation can be dumped to readable text and parsed back into memory, but this is only a convenience for debugging: during a normal compilation using LLVM, textual IR is never generated. Typically, a frontend builds IR by calling into the LLVM APIs, then it runs some optimization passes, and finally it invokes a backend to generate assembly or machine code. When LLVM code is stored on disk (which doesn’t even happen during a normal compilation of a C or C++ project using Clang) it is stored as “bitcode,” a compact binary representation. A minimal sketch of the build-then-dump flow appears just after this list.
  • The main LLVM API documentation is generated by doxygen and can be found here. This information is very difficult to make use of unless you already have an idea of what you’re doing and what you’re looking for. The tutorials (linked below) are the place to start learning the LLVM APIs.
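
To make that middle bullet concrete, here is a tiny sketch of building IR in memory and dumping it as text, using llvmlite (a third-party Python binding, not part of the LLVM tree; a real frontend like Clang does the same thing through the C++ API):

import llvmlite.ir as ir

# Build the IR for: define i32 @add(i32 %a, i32 %b), returning a + b.
module = ir.Module(name="example")
i32 = ir.IntType(32)
fn = ir.Function(module, ir.FunctionType(i32, [i32, i32]), name="add")
a, b = fn.args

builder = ir.IRBuilder(fn.append_basic_block(name="entry"))
builder.ret(builder.add(a, b, name="sum"))

# Textual IR is produced only because we explicitly ask for it here.
print(module)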

So now on to the code. Here’s the root directory; it contains:

  • bindings that permit LLVM APIs to be used from programming languages other than C++. There exist more bindings than this, including C (which we’ll get to a bit later) and Haskell (out of tree).
  • cmake: LLVM uses CMake rather than autoconf now. Just be glad someone besides you works on this.
  • docs in reStructuredText. See for example the Language Reference Manual that defines the meaning of each LLVM instruction (GitHub renders .rst files to HTML by default; you can look at the raw file here.) The material in the tutorial subdirectory is particularly interesting, but don’t look at it there; rather, go here. This is the best way to learn LLVM!
  • examples: This is the source code that goes along with the tutorials. As an LLVM hacker you should grab code, CMakeLists.txt, etc. from here whenever possible.
  • include: The first subdirectory, llvm-c, contains the C bindings, which I haven’t used but look pretty reasonable. Importantly, the LLVM folks try to keep these bindings stable, whereas the C++ APIs are prone to change across releases, though the pace of change seems to have slowed down in the last few years. The second subdirectory, llvm, is a biggie: it contains 878 header files that define all of the LLVM APIs. In general it’s easier to use the doxygen versions of these files rather than reading them directly, but I often end up grepping these files to find some piece of functionality.
  • lib contains the real goodies; we’ll look at it separately below.
  • projects doesn’t contain anything by default but it’s where you check out LLVM components such as compiler-rt (runtime library for things like sanitizers), OpenMP support, and the LLVM C++ library that live in separate repos.
  • resources: something for Visual C++ that you and I don’t care about (but see here).
  • runtimes: another placeholder for external projects, added only last summer; I don’t know what actually goes here.
  • test: this is a biggie: it contains many thousands of unit tests for LLVM, which get run when you build the check target. Most of these are .ll files containing the textual version of LLVM IR. They test things like an optimization pass having the expected result. I’ll be covering LLVM’s tests in detail in an upcoming blog post.
  • tools: LLVM itself is just a collection of libraries, there isn’t any particular main function. Most of the subdirectories of the tools directory contain an executable tool that links against the LLVM libraries. For example, llvm-dis is a disassembler from bitcode to the textual assembly format.
  • unittests: More unit tests, also run by the check build target. These are C++ files that use the Google Test framework to invoke APIs directly, as opposed to the contents of the “test” directory, which indirectly invoke LLVM functionality by running things like the assembler, disassembler, or optimizer.
  • utils: emacs and vim modes for enforcing LLVM coding conventions; a Valgrind suppression file to eliminate false positives when running make check in such a way that all sub-processes are monitored by Valgrind; the lit and FileCheck tools that support unit testing; and, plenty of other random stuff. You probably don’t care about most of this.

Ok, that was pretty easy! The only thing we skipped over is the “lib” directory, which contains basically everything important. Let’s look at its subdirectories now:

  • Analysis contains a lot of static analyses that you would read about in a compiler textbook, such as alias analysis and global value numbering. Some analyses are structured as LLVM passes that must be run by the pass manager; others are structured as libraries that can be called directly. An odd member of the analysis family is InstructionSimplify.cpp, which is a transformation, not an analysis; I’m sure someone can leave a comment explaining what it is doing here (see this comment). I’ll do a deep dive into this directory in a followup post.
  • AsmParser: parse textual IR into memory
  • Bitcode: serialize IR into the compact format and read it back into RAM
  • CodeGen: the LLVM target-independent code generator, basically a framework that LLVM backends fit into and also a bunch of library functions that backends can use. There’s a lot going on here (>100 KLOC) and unfortunately I don’t know very much about it.
  • DebugInfo is a library for maintaining mappings between LLVM instructions and source code locations. There’s a lot of good info in these slides from a talk at the 2014 LLVM Developers’ Meeting.
  • ExecutionEngine: Although LLVM is usually translated into assembly code or machine code, it can be directly executed using an interpreter. The non-jitting interpreter wasn’t quite working the last time I tried to use it, but anyhow it’s a lot slower than running jitted code. The latest JIT API, Orc, is in here.
  • Fuzzer: this is libFuzzer, a coverage-guided fuzzer similar to AFL. It doesn’t fuzz LLVM components, but rather uses LLVM functionality in order to perform fuzzing of programs that are compiled using LLVM.
  • IR: sort of a grab-bag of IR-related code, with no other obvious unifying theme. There’s code for dumping IR to the textual format, for upgrading bitcode files created by earlier versions of LLVM, for folding constants as IR nodes are created, etc.
  • IRReader, LibDriver, LineEditor: almost nobody will care about these and they contain hardly any code anyway.
  • Linker: An LLVM module, like a compilation unit in C or C++, contains functions and variables. The LLVM linker combines multiple modules into a single, larger module.
  • LTO: Link-time optimization, the subject of many blog posts and PhD theses, permits compiler optimizations to see through boundaries created by separate compilation. LLVM can do link-time optimization “for free” by using its linker to create a large module and then optimize this using the regular optimization passes. This used to be the preferred approach, but it doesn’t scale to huge projects. The current approach is ThinLTO, which gets most of the benefit at a small fraction of the cost.
  • MC: compilers usually emit assembly code and let an assembler deal with creating machine code. The MC subsystem in LLVM cuts out the middleman and generates machine code directly. This speeds up compiles and is especially useful when LLVM is used as a JIT compiler.
  • Object: Deals with details of object file formats such as ELF.
  • ObjectYAML seems to support encoding object files as YAML. I do not know why this is desirable.
  • Option: Command line parsing
  • Passes: part of the pass manager, which schedules and sequences LLVM passes, taking their dependencies and invalidations into account.
  • ProfileData: Read and write profile data to support profile-guided optimizations
  • Support: Miscellaneous support code including APInts (arbitrary-precision integers that are used pervasively in LLVM) and much else.
  • TableGen: A wacky Swiss-army knife of a tool that inputs .td files (of which there are more than 200 in LLVM) containing structured data and uses a domain-specific backend to emit C++ code that gets compiled into LLVM. TableGen is used, for example, to take some of the tedium out of implementing assemblers and disassemblers.
  • Target: the processor-specific parts of the backends live here. There are lots of TableGen files. As far as I can tell, you create a new LLVM backend by cloning the one for the architecture that looks the most like yours and then beating on it for a couple of years.
  • Transforms: this is my favorite directory; it’s where the middle-end optimizations live. IPO contains interprocedural optimizations that work across function boundaries; they are typically not too aggressive since they have to look at a lot of code. InstCombine is LLVM’s beast of a peephole optimizer. Instrumentation supports sanitizers. ObjCARC supports this. Scalar contains a pile of compiler-textbooky kinds of optimizers; I’ll try to write a more detailed post about the contents of this directory at some point. Utils contains helper code. Vectorize is LLVM’s auto-vectorizer, the subject of much work in recent years.

And that’s all for the high-level tour, hope it was useful and as always let me know what I’ve got wrong or left out.

Undefined Behavior: Not Just for Programming Languages

This is an oldie but goodie. Start with this premise:
a = b
Multiply both sides by a:
a² = ab
Subtract b² from both sides:
a² − b² = ab − b²
Factor the left side:
(a + b)(a − b) = ab − b²
Factor the right side:
(a + b)(a – b) = b(a – b)
Divide both sides by (a – b) and cancel:
a + b = b
Substitute b for a:
b + b = b
Finally, let b = 1 and simplify:
2 = 1

I ran into this derivation when I was nine or ten years old and it made me deeply uneasy. The explanation, that you’re not allowed to divide by (a – b) because this term is equal to zero, seemed to raise more questions than it answered. How are we supposed to keep track of which terms are equal to zero? What if something is equal to zero but we don’t know it yet? What other little traps are lying out there, waiting to invalidate a derivation? This was one of many times where I noticed that in school they seemed willing to teach the easy version, and that the real world was never so nice, even in a subject like math where — you would think — everything is clean and precise.

Anyway, the point is that undefined behavior has been confusing people for well over a thousand years — we shouldn’t feel too bad that we haven’t gotten it right in programming languages yet.

Vigorous Public Debates in Academic Computer Science

The other day a non-CS friend remarked to me that since computer science is a quantitative, technical discipline, most issues probably have an obvious objective truth. Of course this is not at all the case, and it is not uncommon to find major disagreements even when all parties are apparently reasonable and acting in good faith. Sometimes these disagreements spill over into the public space.

The purpose of this post is to list a collection of public debates in academic computer science where there is genuine and heartfelt disagreement among intelligent and accomplished researchers. I sometimes assign these as reading in class: they are a valuable resource for a couple of reasons. First, they show an important part of science that often gets swept under the rug. Second, they put discussions out into the open where they are widely accessible. In contrast, I’ve heard of papers that are known to be worthless by all of the experts in the area, but only privately — and this private knowledge is of no help to outsiders who might be led astray by the bad research. For whatever reasons (see this tweet by Brendan Dolan-Gavitt) the culture in CS does not seem to encourage retracting papers.

I’d like to fill any holes in this list; please leave a comment if you know of a debate that I’ve left out!

Here are some more debates pointed out by readers:

Advanced Compilers Weeks 3-5

This continues a previous post.

We went through the lattice theory and introduction to dataflow analysis parts of SPA (Møller and Schwartzbach’s Static Program Analysis). I consider this extremely good and important material, but I’m afraid that the students looked pretty bored. It may be the case that this material is best approached by first looking at practical aspects and only later going into the theory.

One part of SPA that I’m not super happy with is the material about combining lattices (section 4.3). This is a useful and practical topic but the use cases aren’t really discussed. In class we went through some examples, such as this function that cannot be optimized by either constant propagation or dead code elimination alone, but can be optimized by their reduced product: conditional constant propagation. Which, as you can see, is implemented by both LLVM and GCC. Also, this example cannot be optimized by either sign analysis or parity analysis, but can be optimized using their reduced product.
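
I can't reproduce the exact function from class here, but a function with the same flavor, sketched in Python for compactness, looks like this (some_condition stands in for a loop condition the analysis cannot evaluate):

import random

def some_condition():
    # Stand-in for a condition the compiler can't predict.
    return random.random() < 0.5

def f():
    # Constant propagation alone sees x = join(1, x + 1) at the loop
    # head and gives up; dead code elimination alone can't remove the
    # branch without knowing x == 1. Conditional constant propagation
    # assumes the branch body is unreachable until proven otherwise,
    # concludes x is always 1, and folds the function to "return 1".
    x = 1
    while some_condition():
        if x != 1:
            x = x + 1
    return x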

We didn’t go into them, but I pointed the class to the foundational papers for dataflow analysis and abstract interpretation.

I gave an assignment to implement subtract and bitwise-and transfer functions for the interval abstract domain for signed 5-bit integers. The bitwidth is small so I can rapidly do exhaustive testing of students’ code. Their subtract had to be correct and maximally precise — about half of the class accomplished this. Their bitwise-and had to be correct and more precise than always returning top, and about half of the class accomplished this as well (a maximally precise bitwise-and operator for intervals is not at all easy — try it!). Since not everyone got the code right, I had them fix bugs (if any) and resubmit their code for this week. I hope everyone will get it right this time! Also I will give prizes to students whose bitwise-and operator is on the Pareto frontier (out of all submitted solutions) for throughput vs precision and code size vs precision. Here are the results with the Pareto frontier in blue and the minimum and maximum precision in red (narrower intervals are better).

Impressively, student k implemented an optimally precise bitwise-and transfer function! Student c’s transfer function returned an answer other than top only for intervals of width 1. Mine (labeled JOHN) looked at the number of leading zeroes in both operands.
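
Going back to the easier half of the assignment, here is a sound (deliberately not maximally precise) subtract transfer function, written for this post, that represents an interval as a (lo, hi) pair:

INT_MIN, INT_MAX = -16, 15  # signed 5-bit two's complement range
TOP = (INT_MIN, INT_MAX)

def interval_sub(x, y):
    # [a, b] - [c, d] = [a - d, b - c], computed without wrapping.
    lo = x[0] - y[1]
    hi = x[1] - y[0]
    if lo < INT_MIN or hi > INT_MAX:
        # The true result may wrap around, so soundly give up. A
        # maximally precise version would notice the cases where the
        # wrapped result is still a single contiguous interval.
        return TOP
    return (lo, hi)

print(interval_sub((0, 3), (1, 2)))    # (-2, 2)
print(interval_sub((-16, 0), (0, 8)))  # (-16, 15), i.e. top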

We looked at the LLVM implementation of the bitwise domain (“known bits”, they call it) which lives in ValueTracking.cpp. This analysis doesn’t have a real fixpoint computation; rather, it simply walks up the dataflow graph in a recursive fashion, which is a bit confusing since it is a forward dataflow analysis that looks at nodes in the backward direction. The traversal stops at depth 6, and isn’t cached, so the code is really very easy to understand.
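
In Python pseudocode the structure is roughly as follows; this is a heavily simplified sketch with a made-up Value class standing in for llvm::Value (the real code handles dozens of instruction kinds and uses APInt bit vectors):

class Value:
    # Hypothetical stand-in for llvm::Value, just for this sketch.
    def __init__(self, opcode, operands=(), bits=0):
        self.opcode = opcode      # "const", "and", "or", or anything else
        self.operands = operands
        self.bits = bits          # the payload when opcode == "const"

MAX_DEPTH = 6

def known_bits(v, depth=0):
    # Return (known_zeros, known_ones) masks for an 8-bit value by
    # walking backward up the dataflow graph, with a depth cutoff.
    mask = 0xFF
    if depth >= MAX_DEPTH:
        return (0, 0)  # depth limit reached: nothing is known
    if v.opcode == "const":
        return (~v.bits & mask, v.bits)
    if v.opcode in ("and", "or"):
        z0, o0 = known_bits(v.operands[0], depth + 1)
        z1, o1 = known_bits(v.operands[1], depth + 1)
        if v.opcode == "and":
            # A bit is 0 if it is 0 in either input, 1 if 1 in both.
            return (z0 | z1, o0 & o1)
        # For "or": a bit is 0 if 0 in both inputs, 1 if 1 in either.
        return (z0 & z1, o0 | o1)
    return (0, 0)  # unhandled instruction kind: nothing is known

x = Value("and", (Value("const", bits=0x0F), Value("unknown")))
print(known_bits(x))  # (240, 0): the high nibble is known to be zero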

We started to look at how LLVM works; I went partway through some lecture notes by David Chisnall. We didn’t focus on the LLVM implementation yet, but rather looked at the design, with a bit of focus on SSA, which is worth spending some time on since it forms the foundation for most modern compilers. I had the students read the first couple of chapters of this drafty SSA book.

Something I’d appreciate feedback on is what (besides SSA) have been the major developments in ahead-of-time compiler technology over the last 25 years or so. Loop optimizations and vectorization have seen major advances of course, as have verified compilers. In this class I want to steer clear of PL-level innovations.

Finally, former Utah undergrad and current Googler Chad Brubaker visited the class and gave a guest lecture on UBSan in production Android: very cool stuff! Hopefully this motivated the class to care about using static analysis to remove integer overflow checks, since they will be doing assignments on that very topic in the future.

Teaching C

The other day Neel Krishnaswami mentioned that he’s going to be teaching the C class at Cambridge in the fall, and asked if I had any advice about that topic. Of course I do! In fact the response got so long that it ended up being a blog post.

My main idea is that we need to teach C in a way that helps students understand why a very large fraction of critical software infrastructure, including almost all operating systems and embedded systems, is written in C, while also acknowledging the disastrously central role that it has played in our ongoing computer security nightmare.

There’s a lot of reading material out there. For the basics, I still recommend that students purchase K&R. People say good things about C Programming: A Modern Approach; I’ve only skimmed it. For advanced C I’ve not read a better book than Expert C Programming, though like K&R it is fairly old. The Practice of Programming is a really great book though it’s not completely specific to C. I haven’t read all of it but from what I’ve seen Modern C is a very good resource, with AFAIK the best treatment of undefined behavior of any C book. The C FAQ contains lots of good material.

For supplemental reading, of course the students need to look at all three parts of Chris Lattner’s writeup about undefined behavior, and mine as well.

What version of C should we teach? Probably a common subset of C99 and C11. In a first C class there’s no need to go into advanced C11 features such as the concurrent memory model.

We’d like students to be able to answer the question: Is C an appropriate choice for solving this problem? We’ll want some lecture material about C’s place in the modern world and we also need to spend time reading some high-quality C code, perhaps starting with Redis, Musl, or Xv6. Musl, in particular, is a good match for teaching since it contains lots of cute little functions that can be understood in isolation. From any such function we can launch a discussion about tradeoffs between portability, efficiency, maintainability, testability, etc. If Rich Felker (the Musl author) did something a certain way, there’s probably a good reason for it and we should be able to puzzle it out. We can also use Matt Godbolt’s super awesome compiler explorer to look at the code generated by various compilers. C’s lightweight-to-nonexistent runtime support is one of its key advantages for real-world system building, and it also means that generated code can be understood without thinking about something like a garbage collector.

We probably also need to spend a bit of time looking at bad old C, the kind that makes the world work even though we’re not proud of it. We can find files in OpenSSL and in the PHP interpreter that would singe your brain despite getting run billions of times a day, or we can always pick on an old standby like glibc — worth looking at just for the preprocessor abuse. But perhaps I am being uncharitable: Pascal Cuoq (reading a draft of this piece) correctly points out that “even what seems like plain stupidity often stems from engineering trade-offs. Does the project try to remain compilable under MS-DOS with DJGPP, with C90 compilers, under VMS, or all three at the same time?” And it is true that we would do well to help students understand that real-world engineering constraints do not often resemble the circumstances that we lead them to expect in school.

The second big thing each student should learn is: How can I avoid being burned by C’s numerous and severe shortcomings? This is a development environment where only the paranoid can survive; we want to emphasize a modern C programming style and heavy reliance on the (thankfully excellent) collection of tools that is available for helping us develop good C.

Static analysis is the first line of defense; the students need to use a good selection of -W flags and then get used to making things compile without warnings. A stronger tool such as the Clang static analyzer should also be used. On the dynamic side, all code handed in by students must be clean as far as ASan, UBSan, and MSan are concerned. tis-interpreter holds code to an even higher standard; I haven’t had students use this tool yet but I think it’s a great thing to try. Since dynamic testing is limited by the quality of the test cases, the students need to get used to using the output of a code coverage tool to find gaps in test coverage. Lots of coverage tools for C are available but I usually just use gcov since it is ubiquitous and hassle-free.

Teaching undefined behavior using sanitizers is a piece of cake: the tool gives students exactly the feedback that they need. The other way of teaching undefined behavior, by looking at its consequences, is something that we should spend a bit of time on, but it requires a different kind of thinking and we probably won’t expect the majority of students to pick up on all the subtleties — even seasoned professional C programmers are often unaware of these.

Detecting errors and doing something about them is a really important part of programming that we typically don’t teach much about in school. Since C is designed to avoid sweeping these problems under the rug, a C class is a great place to get students started on the right track. They should have to implement a goto chain (the standard C idiom for unwinding partially-completed initialization on an error path).

Something I’m leaving out of this post is the content of the assignments that we give the students — this mostly depends on the specific goals of the course and how it fits into the broader curriculum (In what year are students expected to take the class? What kind of background do they have in math and science? What languages do they already know?). I’ve always taught C as a side effect of teaching operating systems, embedded systems, or something along those lines. In a course where the primary goal is C we have more freedom, and could look at more domains. Image processing and cryptographic algorithms would be really fun, for example, and even the old standby, data structures, can be used to good effect in class.

I’m also leaving out build systems and version control. They should use these.

In some courses I will give students access to the test infrastructure that will be used to grade their code. This makes assignments a lot more fun, and makes students a lot more happy. Other times I will give them a few test cases and keep the good tests (and the fuzzers) for myself. The idea is to make the assignment not only about implementation but also about testing. This stresses students out but it’s far more realistic.

Pascal remarks that “C is mostly taught very badly, and a student who aims at becoming good at maintaining C code will need to unlearn much that they have (typically) been told in class.” This is regrettably true — a lot of instructors learned C in previous decades and then they teach an outdated language, for example failing to discourage preprocessor abuse. The most serious common failing is to leave students unaware of their side of the bargain when they deal with a C compiler. I am talking of course about undefined behavior (and, to a lesser extent, unspecified and implementation-defined behavior). As a concrete example, I have taught numerous classes based on Computer Systems: A Programmer’s Perspective. In most respects this is an excellent book, but (even in the 3rd edition) it not only ignores undefined behavior but, worse, explicitly teaches students that signed integers in C have two’s complement behavior on overflow.

This claim that positive signed overflow wraps around is neither correct by the C standard nor consistent with the observed behavior of either GCC or LLVM. This isn’t an acceptable claim to make in a popular C-based textbook published in 2015. While I can patch problems in the book during lecture, that isn’t very satisfying, and not all instructors have the time and expertise.

One might argue that we shouldn’t be teaching C any longer, and I would certainly agree that C is probably a poor first or second language. On the other hand, even if we were in a position where no new projects should be written in C (that day is coming, but slowly — probably at least a decade off), we’re still going to be stuck maintaining C for many decades. A random CS graduate has pretty good odds of running into C during her career. But beyond that, even after we replace C, the systems programming niche will remain. A lot of what we learn when we think we’re learning C is low-level programming and that stuff is important.

Thanks to Pascal Cuoq and Robby Findler for commenting on drafts of this piece.

Python Exercises for Kids

For the last year or so I’ve been giving Python exercises to my 11 year old. I thought I’d share some of them. If any of you have been doing similar things, I’d love to hear what worked for you. I think it is helpful that I’m not much of a Python programmer; this forces him to read the documentation. The other day he said “Wow– Stack Overflow is great!”

Fibonacci

Print the Fibonacci series, useful for teaching basics of looping.
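
Something like this minimal loop (his solution may well have looked different):

# Print the Fibonacci numbers below a limit.
a, b = 0, 1
while a < 1000:
    print(a)
    a, b = b, a + b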

Number Guessing Game

The user thinks of a number between 1 and 100. The computer tries to guess it based on feedback about whether the previous guess was too high or low. This one was great for learning about the kinds of off-by-one errors that one customarily runs into while implementing a binary search.
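
A sketch of the computer's side (the interface details are made up):

# The human thinks of a number in 1..100; the computer binary-searches.
low, high = 1, 100
while True:
    guess = (low + high) // 2
    answer = input("Is it %d? (h = too high, l = too low, y = yes) " % guess)
    if answer == "y":
        print("Got it!")
        break
    elif answer == "h":
        high = guess - 1  # forgetting the -1 here is the classic bug
    elif answer == "l":
        low = guess + 1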

Binary Printer

Print the non-negative binary integers in increasing order. This one forces some representation choices.
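
One possible choice is a list of digits plus a hand-written increment, rather than cheating with Python's bin(); a sketch:

bits = [0]
for _ in range(16):
    print("".join(str(b) for b in bits))
    i = len(bits) - 1
    while i >= 0 and bits[i] == 1:  # propagate the carry from the right
        bits[i] = 0
        i -= 1
    if i >= 0:
        bits[i] = 1
    else:
        bits.insert(0, 1)  # the number got one digit longer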

Palindrome Recognizer

Recognizing “A man, a plan, a canal — Panama!” as a palindrome requires some string manipulation.
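
A minimal sketch of the approach:

def is_palindrome(s):
    # Keep only the letters, lowercased, then compare with the reversal.
    letters = [ch.lower() for ch in s if ch.isalpha()]
    return letters == letters[::-1]

print(is_palindrome("A man, a plan, a canal -- Panama!"))  # True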

Door Code Recognizer

Our Paris apartment building will let you in if you enter the correct five-digit sequence regardless of how many incorrect digits you previously entered. I thought replicating this behavior in Python would be slightly tricky for the kid but he saw that a FIFO could easily be created from a list.
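
The idea, sketched with a made-up code:

CODE = "12345"  # not our actual door code

entered = []
while True:
    digit = input("digit: ")
    entered.append(digit)
    entered = entered[-len(CODE):]  # keep only the last five digits entered
    if "".join(entered) == CODE:
        print("Click. The door opens.")
        break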

Monte Carlo Pi Estimator

If you generate random points in the square between -1,-1 and 1,1, the fraction of points that lie inside the unit circle will approach pi/4. The kid got a kick out of seeing this happen. Of course I had to explain the derivation of pi/4 and the formula for a circle since he hasn’t seen this stuff in school yet. Also of course we could have simply worked in quadrant one but that seemed to make the explanations more complicated.
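
A minimal version of the estimator (my sketch, not his code):

import random

trials = 1000000
inside = 0
for _ in range(trials):
    x = random.uniform(-1, 1)
    y = random.uniform(-1, 1)
    if x * x + y * y <= 1:  # does the point land inside the unit circle?
        inside += 1
print(4 * inside / trials)  # slowly converges toward pi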

I should perhaps add that I explained this exercise to my kid very differently than I’m explaining it here. We started by drawing a box containing a circle and then pretended to throw darts into the box. The concepts are not hard: all we need to understand is random sampling and the area of a circle. Math gets made to seem a lot harder than it is, in many cases, and also we tend to conflate the difficulty with the order in which the concepts are presented in the school system.

Turtle Graphics Interpreter

The current project is implementing a turtle that takes four commands:

  • forward n
  • right-turn n
  • left-turn n
  • color c

So far we’ve been restricting angles to 0, 90, 180, 270 (and no radians yet…) but I don’t think sine and cosine will be hard concepts to explain in this context. Also, so far the turtle commands are just a series of function calls; the parser will come later. I wonder what would be the simplest loop construct for this turtle language? Perhaps “repeat n” followed by some sort of block construct.
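
Here is roughly where the project stands, with one possible shape for repeat bolted on (the details are mine, not the kid's code):

import turtle

t = turtle.Turtle()

def forward(n):
    t.forward(n)

def right_turn(n):
    t.right(n)

def left_turn(n):
    t.left(n)

def color(c):
    t.pencolor(c)

def repeat(n, commands):
    # One possible loop construct: a block is a list of (command, arg) pairs.
    for _ in range(n):
        for cmd, arg in commands:
            cmd(arg)

# A square, written as data to be interpreted rather than as direct calls.
repeat(4, [(forward, 100), (right_turn, 90)])
turtle.done()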

Secret Coders

Although I’m not sure that I’ve mentioned it here before, I’m a pretty big comic book nerd. So I was psyched when, late last year, Gene Luen Yang mailed me asking if I’d like a review copy of his upcoming graphic novel. I love Gene’s Avatar comics, which I had been reading with my kids, as well as his earlier American Born Chinese. What I hadn’t known is that Gene is a former computer engineer and high school computer science teacher.

Gene’s mail described Secret Coders — which comes out at the end of September 2015 — as sort of a Harry Potter but with programming instead of magic. It stars plucky Hopper Gracie who has to adjust to life at her strange new middle school while also, with her new buddy Eni, unraveling its many mysteries, some of which might involve her missing father. Along the way, the reader gets a clever introduction to binary numbers and Logo programming, the latter via a slightly mysterious robotic turtle character that obeys verbal commands. You can read an excerpt here. The book’s web site also has a 5-minute video introducing Logo, which makes it feel like this series might end up being as much of a MOOC as a graphic novel.

My 10 year old read Secret Coders almost as soon as it arrived. I should mention that he has read more of Gene’s books than I have (perhaps using the school library? I’m not totally sure) and has loved all of them. The kid is also a decent Scratch programmer and something of a novice at Python. He gave the book high ratings both for plot and for CS content. My eight year old hasn’t read the book yet; if he does, I’ll update this post with additional findings. Anyway, at $5.50 (Amazon price) this book is a great deal and I’d recommend it to folks who are hoping to get a young person interested in programming.

Update from Sep 17: The 8 year old is reading Secret Coders now and loving it. Too early to tell if he’ll pick up the programming content.

Too Much Milk: The Secret Origin Story

When I first taught operating systems 12 years ago, I based my teaching materials on a set of slides inherited from John Carter, the previous instructor at Utah. I was generally happy with these slides, and I’ve continued to evolve them since then, but one thing I was always curious about was the “too much milk” example that is used to introduce concurrency problems. It goes something like: “You see that you’re out of milk and leave to go buy some. A few minutes later your roommate comes home and sees that there’s no milk. Later: OH NO THERE’S TOO MUCH MILK because of a synchronization error between humans.” This always struck me as a bit of an idiosyncratic way to go about introducing synchronization but I didn’t have any reason to replace it and didn’t think about it much until one time while procrastinating on lecture prep I Googled “too much milk” and learned the shocking truth: some large fraction of operating systems courses at research universities use this same example. See this for yourself.

The intriguing possibility is that too much milk could be used as sort of a mitochondrial DNA to trace the evolution of OS course notes. I started to envision creating a tool for analyzing the lineage of Powerpoint files, but then totally forgot about it until a few weeks ago. While chatting with Ryan Stutsman about teaching OS, for some reason I mentioned the Too Much Milk phenomenon. He thought that this had probably originated with his advisor John Ousterhout. John then explained:

I believe that the “too much milk” example owes its popularity to me, in that I have been using it ever since the early 1980s in my courses. For many years in the 1980s and 1990s, whenever a PhD student graduated in the systems area, Dave Patterson gave them a complete set of lecture videos for a couple of systems classes, including my CS 162 lectures. As a result, a lot of the material from my CS 162 lectures has spread all across the country through Berkeley graduates, including the “too much milk” example.

However, I did not invent this example. Before I taught my first operating systems course at Berkeley, Mike Powell gave me a copy of his notes and the “too much milk” example was there. So, it goes back at least to Mike Powell. I don’t know if he invented it, or if he got it from someone else.

So there you have it.

This post doesn’t have too much of a point, but I thought this was a nice story and also it provides a bit of insight into how teaching actually works: figuring out how to teach course material is hard and in practice, most of the time we borrow a framework and adapt it to fit our needs. Every now and then I’ll discuss this in class and invariably one or two undergraduates are so surprised that I’ll get dinged on the course evaluations with comments like “Lazy professor doesn’t even develop his own course materials.” However, I’ve done three or four courses the other way — starting from scratch — and have found that it takes multiple iterations before the material converges, so it’s not really clear to me that it’s better to start from scratch in cases where a good set of existing material can be found.

Heartbleed and Static Analysis

Today in my Writing Solid Code class we went through some of the 151 defects that Coverity Scan reports for OpenSSL. I can’t link to these results but take my word for it that they are a pleasure to read — the interface clearly explains each flaw and the reasoning that leads up to it, even across multiple function calls. Some of the problems were slightly alarming but we didn’t see anything that looks like the next heartbleed. Unfortunately, Coverity did not find the heartbleed bug itself. (UPDATE: See this followup post.) This is puzzling; here’s a bit of speculation about what might be going on.

There are basically two ways to find heartbleed using static analysis (here I’ll assume you’re familiar with the bug; if not, this post is useful). First, a taint analysis should be able to observe that two bytes from an untrusted source find their way into the length argument of a memcpy() call. This is clearly undesirable. The Coverity documentation indicates that it taints the buffer stored to by a read() system call (unfortunately you will need to log in to Coverity Scan before you can see this). So why don’t we get a defect report? One guess is that since the data buffer is behind two pointer dereferences (the access is via s->s3->rrec.data), the scanner’s alias analysis loses track of what is going on. Another guess is that two bytes of tainted data are not enough to trigger an alarm. Only someone familiar with the Coverity implementation can say for sure what is going on — the tool is highly sophisticated not just in its static analysis but also in its defect ranking system.

The other kind of static analysis that would find heartbleed is one that insists that the length argument to any memcpy() call does not exceed the size of either the source or destination buffer. Frama-C in value analysis mode is a tool that can do this. It is sound, meaning that it will not stop complaining until it can prove that certain defect classes are not present, and as such it requires far more handholding than does Coverity, which is designed to unsoundly analyze huge quantities of code. To use Frama-C, we would make sure that its own header files are included instead of the system headers. In one of those files we would find a model for memcpy():

/*@ requires \valid(((char*)dest)+(0..n - 1));
  @ requires \valid_read(((char*)src)+(0..n - 1));
  @ requires \separated(((char *)dest)+(0..n-1),((char *)src)+(0..n-1));
  @ assigns ((char*)dest)[0..n - 1] \from ((char*)src)[0..n-1];
  @ assigns \result \from dest;
  @ ensures memcmp((char*)dest,(char*)src,n) == 0;
  @ ensures \result == dest;
  @*/
extern void *memcpy(void *restrict dest,
                    const void *restrict src, 
                    size_t n);

The comments are consumed by Frama-C. Basically they say that src and dest are pointers to valid storage of at least the required size, that the buffers do not overlap (recall that memcpy() has undefined behavior when called with overlapping regions), that it moves data in the proper direction, that the return value is dest, and that a subsequent memcmp() of the two regions will return zero.

The Frama-C value analyzer tracks an integer variable using an interval: a representation of the smallest and largest value that the integer could contain at some program point. Upon reaching the problematic memcpy() call in t1_lib.c, the value of payload is in the interval [0..65535]. This interval comes from the n2s() macro which turns two arbitrary-valued bytes from the client into an unsigned int:

#define n2s(c,s)        ((s=(((unsigned int)(c[0]))<< 8)| \
                            (((unsigned int)(c[1]))    )),c+=2)

The dest argument of this memcpy() turns out to be large enough. However, the source buffer is way too small. Frama-C would gripe about this and it would not shut up until the bug was really fixed.

How much effort would be required to push OpenSSL through Frama-C? I don’t know, but wouldn’t plan on getting this done in a week or three. Interestingly, a company spun off by Frama-C developers has recently used the tool to create a version of PolarSSL that they promise is immune to CWEs 119, 120, 121, 122, 123, 124, 125, 126, 127, 369, 415, 416, 457, 562, and 690. I think it would be reasonable for the open source security community to start thinking more seriously about what this kind of tool can do for us.

UPDATES:

  • In a comment below, Masklinn states that OpenSSL’s custom allocators would defeat the detection of the too-large argument to memcpy(). This is indeed a danger. To avoid it, as part of applying Frama-C to OpenSSL, the custom malloc/free functions would be marked as being malloc-like using the “allocates” and “frees” keywords supported by ACSL 1.8. Coverity lets you do the same thing, and so would any competent analyzer for C. Custom allocators are regrettably common in oldish C code.
  • I’m interested to see what other tools have to say about heartbleed. If you have a link to results, please put it in a comment.
  • I re-ran Coverity after disabling OpenSSL’s custom freelist and also hacking CRYPTO_malloc() and friends to just directly call the obvious function from the malloc family. This caused Coverity to report 173 new defects: mostly use-after-free and resource leaks. Heartbleed wasn’t in the list, however, so I stand by my guess (above) that perhaps something related to indirection caused this defect to not be ranked highly enough to be reported.
  • HN has some discussion of PolarSSL and of this blog post. Also Reddit.