Picking a Research Topic in Computer Systems

This post is a collection of observations and advice for people who want to choose a research topic in computer systems.  I’m not claiming to be some kind of genius in this area, but I have enough ideas that they seemed worth writing down. This advice is probably most useful for graduate students in CS, but may also be helpful for junior profs and for undergrads interested in doing research.

Picking a research area

By “research area” I mean a sub-area of computer systems, such as object-oriented operating systems, distributed hash tables, or whatever.  This often ends up being a pretty pragmatic decision, and it is understood that a good researcher can work concurrently in multiple areas or change areas as often as every few years.

The easiest way is to choose an area that a lot of other people are working on.  I’m quite serious, and this method has some important advantages.  First, all those people at MIT and Berkeley aren’t stupid; if they’re working on ultra-reliable byzantine transcoders, there’s probably something to it.  Second, if you work on the same thing others are working on, the odds are good they’ll be interested in your work and will be more willing to fund and publish it.  The potentially serious drawback is that you’ll likely miss the initial, exciting phase of research where lots of new things are being discovered and there’s not yet a lot of commonly accepted wisdom about what the best solutions are.  In the worst case, you’ll miss the entire main period of interest and arrive late when the best people have moved on to other topics.

The research area that you choose should have a chance of making a difference.  Systems research tends to be applied, and a great deal of it is engineering rather than science.  The focus, in most cases, is making something work better for someone.  The more people benefit, the better.

Fundamentally, the area you choose needs to be one that you have some natural affinity for.  You should enjoy working on it and your intuition should strongly suggest that all is not right: there is work that needs to be done.

If you’re a PhD student seeking a research assistantship, it would be practical to choose a research area that your advisor has a grant to work on.

Departing from the accepted wisdom

Every good research idea is a departure from the accepted wisdom, but it’s important to depart at the right level.  Consider these extremes:

  1. You reject the notion of binary computers.  Instead, ternary computation will sidestep most of the fundamental problems faced by computer science.  Everything from instruction sets to complexity theory must be thrown out and rebuilt from scratch.
  2. You reject the notion of the semicolon as a statement terminator in programming languages.  Instead, the dollar sign should be used.  A revolution in software productivity will ensue.

The first idea diverges from the status quo in a fundamental way, but it will be difficult to get buy-in from people.  The second departure is too minute to matter, and nobody will care even if you do a big user study showing that the dollar sign is better with p<0.05.  In both cases, the research idea does not feel like one that will change the world.  In contrast, some great examples of departing from the conventional wisdom in the right way can be found in David Patterson’s work: see RISC and RAID.

Focusing a skeptical personality

If the point is to challenge the accepted wisdom in some fashion, you can’t very well go believing everything people tell you.  Computer systems is not exactly a rigorous area of science and you will hear all manner of ridiculous explanations for things.

Good systems problems, in my experience, come from noticing something wrong, some discrepancy between how the world works and how it should work.  However, it doesn’t appear possible to do good work based on someone else’s observations.  For example, you can tell me that parallel programming is hard or software transactional memory is too slow, but the fact is that if I haven’t seen the problems for myself, if long and bitter experience hasn’t trained my intuition with a thousand little facts about what works and what does not, then I’m probably going to come up with an irrelevant or otherwise bad solution to the problem.

How does this work in practice?  You go about your business building kernels or web servers or whatever, and along the way there are a hundred irritations.  Most of them are incidental, but some are interesting enough that you start pulling at the thread.  Most of these threads die quickly when you discover that the problem was adequately solved elsewhere, is boring, or is too difficult to solve at present.  Every now and then it becomes clear that there’s a real problem to be solved, that nobody else is attacking it (at least in the right way), and that it would be fun to work on.  These are your potential research projects.  There are probably other ways to find research ideas in computer systems, but this is the only one I know of.

Here’s an anecdote.  As an instructor of embedded systems courses I’d long been annoyed by buggy cross compilers for microcontrollers.  Students who are struggling to write correct code do not need this extra level of hassle.  Finally, one day I was showing a small snippet of assembly code in lecture and a student noticed that it was wrong.  I assumed that it was a cut-and-paste error, but it turned out the compiler was mistranslating a 2-line function.  This seemed beyond the pale, so I wrote some more test cases and tested some more compilers and kept finding more and more bugs.  Even at this point, after I’d spent probably 40 hours on the problem, it was not at all clear that I was moving towards any kind of research result.  It was only after hundreds of additional hours of work that it became obvious the project had legs: everyone knows that embedded compilers contain bugs, but probably most people would be surprised that we were able to find (so far) 200 compiler bugs including major wrong-code bugs in every C compiler we’ve tested.  So in this case, the surprising result is quantitative.

A few more observations

  • The best research problems are often those that are not yet of major industrial interest, but that will be addressed by billion-dollar industries in 10-20 years.  Once a problem becomes the focus of intense industrial interest, it becomes a difficult target to attack from academia.  At the very least, you need a seriously new angle.
  • Don’t get into a research area late in its life cycle.
  • Thomas Edison said it’s 1% inspiration, 99% perspiration.  In computer systems it’s more like 99.9% perspiration: if you’re not careful you can build things for years without getting any research results.
  • It’s really not possible to figure out in advance which promising ideas are going to pan out and which ones are distractions.  Keep a number of different ideas in mind and work in directions that cut off as few options as possible.
  • Ignore sunk costs.  Always be ready to drop an idea, no matter how proud of it you are.
  • If you become aware of someone credible who is working on your exact idea, drop it.  First, at this point you have a 50% chance of winning the race to publication.  Second, duplicated work wastes time and tax dollars.  Life’s too short.
  • Often, problems and obstacles that seem insurmountable at first can be flipped around and turned into interesting features of your work or even advantages.
  • Periodic re-examination of assumptions is useful.  A recent example I like is Google Native Client.  Most efforts to isolate untrusted binary code just take whatever the compiler outputs.  The Google project gets good mileage by hacking the compiler so that the code doesn’t contain so many stupid things.  It’s a good idea — who said the compiler was set in stone?
  • If your project seems boring, think about dropping it.  If you’re not excited why would anyone else be?
  • Writing a couple of paragraphs about an idea has the effect of revealing bad ideas for what they are, and making good ideas better.  It’s almost comical how many of my ideas look silly when written down.  It’s definitely possible to read too much, but it’s probably impossible to write too much.
  • Smart people love to solve problems, regardless of their relevance.  Avoid the trap where you improve a result far beyond the point of diminishing returns.
  • Code tweaking is seductive; if you do it, consider it time spent on a hobby, not time spent doing research.
  • Once you’re invested in a research area, it’s tempting to stay there since you’re on top of the learning curve.  This can turn into a trap.  When it’s time to move on, just do it.  Many of the best researchers change areas several times during their careers.
  • Having a grand vision up-front is OK as long as it’s somewhat vague and doesn’t prevent you from seeing the actual situation.
  • Computer systems speak to you, in their own way.  Always listen to what they’re telling you.


Picking a good research problem is at least half the battle, especially for PhD students.  It’s worth studying why some ideas and approaches are great while others are boring.

The Compiler Doesn’t Care About Your Intent

A misunderstanding that I sometimes run into when teaching programming is that the compiler can and should guess what the programmer means.  This isn’t usually quite what people say, but it’s what they’re thinking.  A great example appeared in a message sent to the avr-gcc mailing list.  The poster had upgraded his version of GCC, causing a delay loop that previously worked to be completely optimized away.  (Here’s the original message in the thread and the message I’m quoting from.)  The poster said:

… the delay loop is just an example, of how simple, intuitive code can throw the compiler into a tizzy. I’ve used SDCC(for mcs51) where the compiler ‘recognises’ code patterns, and says “Oh, I know what this is – it’s a delay loop! – Let it pass.”(for example).

Another example comes from the pre-ANSI history of C, before the volatile qualifier existed.  The compiler would attempt to guess whether the programmer was accessing a hardware register, in order to avoid optimizing away the critical accesses.  I’m sure there are more examples out there (if you know of a good one, please post a comment or mail me).

The problem with this kind of thinking is that the correctness of code now depends on the compiler correctly guessing what the programmer means.  Since the heuristics used to guess aren’t specified or documented, they are free to change across compilers and compiler versions.  In the short term, these hacks are attractive, but in the long run they’re little disasters waiting to happen.
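
The delay-loop example also has a fix that does not depend on guessing: state the intent in the language itself.  As a rough illustration (this is our own reconstruction, not the poster’s code, and the loop bound is arbitrary), here is the kind of loop that gets deleted, next to a version whose volatile-qualified counter the compiler is obliged to keep:

/* A naive busy-wait delay: the loop has no side effects, so an
   optimizing compiler is entitled to remove it entirely. */
void delay_naive (void)
{
  unsigned int i;
  for (i = 0; i < 10000; i++)
    ;
}

/* Making the counter volatile tells the compiler that every access to i
   matters, so the loop, and therefore the delay, must be preserved. */
void delay_volatile (void)
{
  volatile unsigned int i;
  for (i = 0; i < 10000; i++)
    ;
}

The volatile version expresses the intent through semantics the language defines, rather than hoping the compiler recognizes a pattern.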

One of the great innovations of the past 50 years of programming language research was to separate the semantics of a language from its implementation.  This permits the correctness of an implementation to be judged solely by how closely it conforms to the standard, and it also permits programs to be reasoned about as mathematical objects.  C is not the language best suited to mathematical reasoning, but there are some excellent research projects that do exactly this.  For example, Michael Norrish’s PhD thesis formalized C in the HOL theorem prover, and Xavier Leroy’s CompCert compiler provably preserves the meaning of a C program as it is translated into PPC or ARM assembly.

Of course, the intent of the programmer does matter sometimes.  First, a well-designed programming language takes programmer intent into account and makes programs mean what they look like they mean.  Second, intent is important for readability and maintainability.  In other words, there are usually many ways to accomplish a given task, and good programmers choose one that permits subsequent readers of the code to easily grasp what is being done, and why.  But the compiler does not, and should not, care about intent.

C and C++ Make It Hard to Read a Register for Its Side Effects

[ This post was co-written with Nigel Jones, who maintains an excellent embedded blog, Stack Overflow.  Nigel and I share an interest in volatile pitfalls in embedded C/C++, and this post resulted from an email discussion we had.  Since we both have blogs, we decided to both post it.  However, since comments are not enabled here, all discussion should take place at Nigel’s post. ]

Once in a while one finds oneself having to read a device register, but without needing or caring what the value of the register is. A typical scenario is as follows. You have written some sort of asynchronous communications driver. The driver is set up to generate an interrupt upon receipt of a character. In the ISR, the code first of all examines a status register to see if the character has been received correctly (e.g. no framing, parity or overrun errors). If an error has occurred, what should the code do? Well, in just about every system we have worked on, it is necessary to read the register that contains the received character — even though the character is useless. If you don’t perform the read, then you will almost certainly get an overrun error on the next character. Thus you find yourself in the position of having to read a register even though its value is useless. The question then becomes, how does one do this in C? In the following examples, assume that SBUF is the register holding the data to be discarded and that SBUF is understood to be volatile. The exact semantics of the declaration of SBUF vary from compiler to compiler.
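
To make the scenario concrete, here is a minimal sketch of such an ISR. The register declarations, the error mask, and the enqueue_rx_char() routine are hypothetical placeholders rather than details of any particular part; the line that reads SBUF and throws the value away is the construct this post is about.

/* Hypothetical memory-mapped UART registers; real addresses, bit layouts,
   and interrupt-handler conventions are device-specific. */
#define RX_STATUS (* (volatile unsigned char *) 0x98)
#define SBUF      (* (volatile unsigned char *) 0x99)
#define RX_ERROR_MASK 0x1C  /* framing, parity, and overrun bits (made up) */

extern void enqueue_rx_char (unsigned char c);  /* hypothetical buffering routine */

void uart_rx_isr (void)
{
  if (RX_STATUS & RX_ERROR_MASK) {
    /* The character is garbage, but the data register must still be
       read in order to clear it; otherwise the next character will
       arrive with an overrun error. */
    SBUF;
  } else {
    enqueue_rx_char (SBUF);
  }
}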

If you are programming in C and if your compiler correctly supports the volatile qualifier, then this simple code suffices:

void cload_reg1 (void)
{
  SBUF;
}

This certainly looks a little strange, but it is completely legal C and should generate the requisite read, and nothing more. For example, at the -Os optimization level, the MSP430 port of GCC gives this code:

   mov &SBUF, r15

Unfortunately, there are two practical problems with this C code. First, quite a few C compilers incorrectly translate this code, although the C standard gives it an unambiguous meaning. We tested the code on a variety of general-purpose and embedded compilers, and present the results below. These results are a little depressing.

The second problem is even scarier: the C++ standard is not 100% clear about what the code above means. On the one hand, the standard says this:

In general, the semantics of volatile are intended to be the same in C++ as they are in C.

A number of C++ compilers, including GCC and LLVM, generate the same code for cload_reg1() when compiling in C++ mode as they do in C mode. On the other hand, several high-quality C++ compilers, such as those from ARM, Intel, and IAR, turn the function cload_reg1() into object code that does nothing. We discussed this issue with people from the compiler groups at Intel and IAR, and both gave essentially the same response. Here we quote (with permission) from the Intel folks:

The operation that turns into a load instruction in the executable code is what the C++ standard calls the lvalue-to-rvalue conversion; it converts an lvalue (which identifies an object, which resides in memory and has an address) into an rvalue (or just value; something whose address can’t be taken and can be in a register). The C++ standard is very clear and explicit about where the lvalue-to-rvalue conversion happens. Basically, it happens for most operands of most operators – but of course not for the left operand of assignment, or the operand of unary ampersand, for example. The top-level expression of an expression statement, which is of course not the operand of any operator, is not a context where the lvalue-to-rvalue conversion happens.

In the C standard, the situation is somewhat different. The C standard has a list of the contexts where the lvalue-to-rvalue conversion doesn’t happen, and that list doesn’t include appearing as the expression in an expression-statement.

So we’re doing exactly what the various standards say to do. It’s not a matter of the C++ standard allowing the volatile reference to be optimized away; in C++, the standard requires that it not happen in the first place.

We think the last sentence sums it up beautifully. How many readers were aware that the semantics of the volatile qualifier are significantly different between C and C++? The additional implication is that, as shown below, GCC and LLVM/Clang, when compiling C++ code, are in error.

We asked about this on the GCC mailing list and received only one response, which was basically “Why should we change the semantics, since this will break working code?” This is a fair point. Frankly speaking, the semantics of volatile in C are a bit of a mess, and C++ makes the situation much worse by permitting reasonable people to interpret it in two totally different ways.
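
To see where the two readings agree and disagree, here is a small illustration of our own (it is not part of the test code below); foo is an ordinary volatile object:

extern volatile unsigned char foo;

unsigned char contexts (void)
{
  unsigned char x;

  foo;      /* expression statement: C requires the load; under the C++
               reading quoted above, no lvalue-to-rvalue conversion is
               applied here, so no load is required */
  &foo;     /* operand of unary &: no conversion and no load in either
               language (compilers typically warn that this statement
               has no effect) */
  x = foo;  /* right operand of an assignment: the conversion happens,
               so a load is required in both C and C++ */
  return x;
}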

Experimental Results

To test C and C++ compilers, we compiled the following two functions to object code at a reasonably high level of optimization:

extern volatile unsigned char foo;

void cload_reg1 (void)
{
  foo;
}

void cload_reg2 (void)
{
  volatile unsigned char sink;
  sink = foo;
}

For embedded compilers that have built-in support for accessing hardware registers, we tested two additional functions where, as above, SBUF is understood to be a hardware register defined by the semantics of the compiler under test:

void cload_reg3 (void)
{
  SBUF;
}

void cload_reg4 (void)
{
  volatile unsigned char sink;
  sink = SBUF;
}

The results were as follows.


GCC

We tested version 4.4.1, hosted on x86 Linux and also targeting x86 Linux, using optimization level -Os. The C compiler loads from foo in both cload_reg1() and cload_reg2(). No warnings are generated. The C++ compiler shows the same behavior as the C compiler.

Intel Compiler

We tested icc version 11.1, hosted on x86 Linux and also targeting x86 Linux, using optimization level -Os. The C compiler emits code loading from foo for both cload_reg1() and cload_reg2(), without giving any warnings. The C++ compiler emits a warning “expression has no effect” for cload_reg1() and this function does not load from foo. cload_reg2() does load from foo and gives no warnings.

Sun Compiler

We tested suncc version 5.10, hosted on x86 Linux and also targeting x86 Linux, using optimization level -O. The C compiler does not load from foo in cload_reg1(), nor does it emit any warning. It does load from foo in cload_reg2(). The C++ compiler has the same behavior as the C compiler.


Open64

We tested opencc version 4.2.3, hosted on x86 Linux and also targeting x86 Linux, using optimization level -Os. The C compiler does not load from foo in cload_reg1(), nor does it emit any warning. It does load from foo in cload_reg2(). The C++ compiler has the same behavior as the C compiler.

LLVM / Clang

We tested Subversion revision 98508, which is between versions 2.6 and 2.7, hosted on x86 Linux and also targeting x86 Linux, using optimization level -Os. The C compiler loads from foo in both cload_reg1() and cload_reg2(). A warning about an unused value is generated for cload_reg1(). The C++ compiler shows the same behavior as the C compiler.

CrossWorks for MSP430

We tested version, hosted on x86 Linux, using optimization level -O. This compiler supports only C. foo was not loaded in cload_reg1(), but it was loaded in cload_reg2().


IAR

We tested version, hosted on Windows XP, using maximum speed optimization. The C compiler performed the load in all four cases. The C++ compiler did not perform the load for cload_reg1() or cload_reg3(), but did for cload_reg2() and cload_reg4().

Keil 8051

We tested version 8.01, hosted on Windows XP, using optimization level 8, configured to favor speed. The Keil compiler failed to generate the required load in cload_reg1() (but did at least give a warning), yet performed the load in all other cases, including cload_reg3(), suggesting that the Keil compiler treats its IO register (SFR) semantics differently from its volatile variable semantics.


HI-TECH Compiler

We tested version 9.70, hosted on Windows XP, using global optimization level 9, configured to favor speed. This was very interesting in that the results were almost a mirror image of the Keil compiler’s. In this case the load was performed in all cases except cload_reg3(). Thus the HI-TECH semantics for IO registers and volatile variables also appear to be different – just the opposite of Keil! No warning was generated by the HI-TECH compiler when it failed to generate the load.

Microchip Compiler for PIC18

We tested version 3.35, hosted on Windows XP, using full optimization level. This rounded out the group of embedded compilers quite nicely in that it didn’t perform the load in either cload_reg1() or cload_reg3() – but did in the rest. It also failed to warn about the statements having no effect. This was the worst performing of all the compilers we tested.


The level of non-conformance among the C compilers, together with the genuine uncertainty as to what the C++ compilers should do, presents a real quandary. If you need the most efficient code possible, then you have no option other than to investigate what your compiler does. If you are looking for a generally reliable and portable solution, then the methodology in cload_reg2() is probably your best bet. However, it would be just that: a bet. Naturally, we (and the other readers of this blog) would be very interested to hear what your compiler does. So if you have a few minutes, please run the sample code through your compiler and let us know the results.
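
To make that recommendation concrete, one way to package the cload_reg2() idiom is behind a macro such as the following. This is only a sketch: the macro name is our own, it assumes an unsigned char register as in the examples above, and, given the results, you should still check the code your particular compiler emits.

/* Read a volatile unsigned char register purely for its side effect,
   using the cload_reg2() pattern of routing the value through a
   volatile local. */
#define READ_AND_DISCARD(reg) do {          \
    volatile unsigned char discard_sink;    \
    discard_sink = (reg);                   \
  } while (0)

An interrupt handler would then write READ_AND_DISCARD(SBUF); at the point where the useless character has to be consumed.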


We’d like to thank Hans Boehm at HP, Arch Robison at Intel, and the compiler groups at both Intel and IAR for their valuable feedback that helped us construct this post. Any mistakes are, of course, ours.

The Dreaded LPU

In academic publishing, the “least publishable unit,” or LPU, refers to the smallest contribution to scientific knowledge that can be successfully sneaked past the reviewers at a conference or journal.  Most often the term is used derisively, though I have sometimes heard it used as a modest compliment, indicating that a researcher has astutely determined what can or cannot be published, and is extracting the maximum benefit from his or her hard work.

So far in 2010 I’ve participated in the program committee meetings for EuroSys and the USENIX Annual Technical Conference.  Both meetings featured a lot of smart people evaluating a lot of very good research in computer systems.  However, at both meetings I noticed something interesting: for a significant fraction of the strong submissions (those that were real candidates for acceptance), one or more reviewers had concerns that the paper was too incremental compared to previous papers by the same authors.  The thing that struck me was that the authors in question were, more often than not, up-and-coming young researchers at respectable institutions and with strong pedigrees.  Another thing that struck me is that I almost never noticed these problems myself.  Rather, I tend to take papers at face value, trusting that the material which looks new actually is new.  Therefore, at least three times at each of these two meetings, I found myself in the awkward position of supporting a paper that I had liked while others were attacking it as less-than-LPU.  This isn’t fun, and it looks like I’ll have to change the way I do reviewing by looking much more critically at people’s previous work, as opposed to just trusting them.  I believe that the reason this problem showed up so strongly at USENIX ATC and EuroSys is that neither conference used double-blind submission, which is becoming very common at other venues.  I’m pretty sure that double-blind makes it a lot easier to get an LPU through a program committee.

I am annoyed.  The program committee shouldn’t have to be policing people’s publication strategies; it’s hard enough to simply evaluate the quality of the work.  Obviously people are under tremendous pressure to publish, but surely this doesn’t legitimize LPU as a tenure strategy.

A New Life Awaits You in the Off-World Colonies

As a recipient of research funding from the US government, I sometimes think about how this resource should be allocated.  Wikipedia claims that for most developed countries, research funding amounts to 1.5% to 3% of GDP, so we are talking about a lot of money here.  Appropriate focusing of this funding can change the world — and it already has, of course, many times.  (Just to be clear: for 9 months out of each year my salary is paid by the state of Utah.  However, my group operates on federal grant money typically exceeding my salary by several times.)

The research funding allocation problem is interesting because it’s not so much a matter of determining the most pressing needs, but of determining which important research areas are not being addressed adequately by the private sector.  Corporations’ responsibility to their shareholders makes them myopic, so there are many such areas.

My premise in this post is that at some point, something is going to get us.  By “us” I mean the human race and by “get” I mean utterly kill, or close enough.  I don’t think this is an unreasonably paranoid view: the solar system has a rich history of delivering very big rocks to us and we seem to be doing a perfectly good job in creating our own threats to continued existence.  Perhaps I should not have watched “Fail Safe” the other night.

What does this have to do with research funding?  It means that some significant fraction of our research money (25%, for example) should be devoted to work on the critical path to creation of sustainable off-Earth colonies.  In the long run, creating these colonies is literally the only thing that matters.  Clearly, we cannot build them right now.  So what are the missing technologies?  Obvious candidates include: surviving or avoiding microgravity, sophisticated and reliable automation, stable closed ecosystem design, and reliable energy sources.  I’m not trying to say anything new about space exploration; the point is just that we’re wasting valuable time.  The current under-focused approach to research funding is not going to give the human race a Plan B anytime soon.

How do we determine the specific topics to work on?  One option is to ask the NAS, NAE, and NRC, who would each spend two or three years forming committees and writing reports.  This is a fine idea, but boring.  We could also run some workshops where various experts from government, industry, and academia get together and talk. Again it’s hard to stifle a yawn; no matter how good the intentions, somehow these kinds of events do not seem conducive to innovative and critical thinking.  A better idea might be for small groups of people to self-organize and start doing the work.  As researchers we get a large amount of autonomy and surely this can be bent towards longer-term, more meaningful goals than we typically aim for.

How would my job change if tomorrow I decided to devote all of my efforts towards minimizing the time remaining until we can build a self-sustaining colony on the Moon or Mars?  At least in broad terms, probably not much!  It doesn’t really matter where these colonies are or what form they take: they’re going to need a huge amount of embedded software to sustain their existence and it had better work really well.  What, did you think I was going to end this post realizing I need to change jobs?

Computer Science Fiction

Science fiction explores the effect of technological progress on society.  It is ironic, then, that the majority of SF authors miserably failed to predict the impact of computers and information technology. Why does Google return no meaningful hits for “computer science fiction?”  Is it not obvious that this term needs to exist, if we wish to understand the next 50 years at all?

Looking back, it is clear that most SF authors made the same mistake: they took whatever computers existed at the time and envisioned larger, more powerful, smarter versions of the same kind of machine.  The genre is filled with this kind of obvious — and, in retrospect, boring — extrapolation. In contrast, the interesting thing that has happened over the past few decades is the development and deployment of new kinds of computer-based systems, which are now pervasively embedded in the world.  Something like 10 billion processors are manufactured each year and only a tiny fraction looks anything like a mainframe.

Of course there were other mis-steps, such as overlooking the impact of wireless communication.  One of my favorite mispredictions comes from Neuromancer, where Gibson makes a point of showing us that pay phones still exist in the middle of the 21st century.  People in Neuromancer have their brains wired directly into the network, but they don’t carry cell phones.

As a professional computer scientist, part of my job is a form of applied science fiction.  Instead of trying to predict the impact of science, we try to make the impact happen.  Doing this job well, however, requires real insight into the future.  Therefore, one of the questions I’m interested in is: Who in SF got it right?  That is, who saw beyond the obvious extrapolations about computer technology and really understood what the future might look like?  And how did they do it?

The best single example of a computer science fiction author is Vernor Vinge.  His story True Names preceded the cyberpunk movement and paved the way for subsequent work like Neuromancer and Snow Crash.  Vinge’s books contain a high density of good ideas, on a par with some of the best earlier idea-driven writers like Asimov and Niven.  A Fire Upon the Deep gets solid mileage out of pack animals that function as distributed systems and also provides a reasonable exploration of what really powerful machines might be like (“applied theology” indeed).  Subsequently, A Deepness in the Sky gave us the term “software archeology” — an ugly but highly plausible window into the future of software maintenance — and is the first and perhaps only work in the sub-sub-genre “sensor network science fiction.”  Of course, Vinge depicts pervasive embedded sensing as invariably leading to totalitarian control and then societal collapse.

There are two major CS-related themes running across Vinge’s work.  The first is the singularity, and probably too much has been written about it elsewhere.  It seems uninteresting to speculate about a point in time that is defined as being immune to speculation.  The second centers on the fickle nature of control in computer systems.  Very early, Vinge foresaw the kinds of battles for control in networked resources that are playing out now in botnets and in corporate and government networks.  Concepts like a trusted computing base and subversion via network attack look like they are probably fundamental.  Our understanding of how these ideas will evolve is flawed at best.  In fact, the entire history of computer security has been adversary driven: we put an insecure system out into the world, wait for it to be exploited, and then react.  This happens over and over, and I have seen very few signs that a more proactive approach is emerging.

How did Vinge do such a good job?  This is tough to analyze.  Like all good extrapolators, he separated the fundamental from the incidental, and pushed the fundamentals forward in useful ways.  He knew enough CS to avoid technical gaffes but somehow failed to let this knowledge interfere with his predictions.  It is no coincidence that few of the now-common top-level uses of the Internet were thought of by computer scientists: we’re too busy dealing with other levels of the system.  We know how computers are supposed to be used and this is a huge handicap.

As a group, the cyberpunk authors did a good job in seeing the impact of information technology, but as far as I can tell their contributions to computer science fiction were largely derived from things that Vinge and others had already predicted.  It’s interesting that almost all of these authors insisted on making the network into a physical space accessed using VR.  This was more than a plot device, I think: people really want cyberspace to be somehow analogous to physical space.  The real-world failure of VR as a metaphor for understanding and navigating networked computer systems has been interesting to watch; I’m not sure that I understand what is going on, but as I see it the network simply does not demand to be understood as a physical space.  Of course, spaces are useful for games and other kinds of interaction, but it’s not at all clear that VR will ever be the primary metaphor for interacting with the network.  The poor state of interface technology (Where are those brain plugs? Why are we still using stupid mice?) is obviously a factor as well.

200 Compiler Bugs

This morning I reported the 200th bug found by our compiler testing tool.  It is a new way to crash GCC.  The failure-inducing input is not pretty so I won’t give it here, but it can be found in GCC’s bugzilla.  Although the testing tool is now entirely developed by some excellent PhD students, I have remained the primary bug reporting person.  It is basically grunt work but it keeps me technically involved with the project.

It took us about two years of real time and maybe three person-years of effort to find these 200 bugs.  When will we reach 300?  It’s not clear.  On one hand, our testing tool is becoming extremely powerful and we are branching out into more target architectures.  On the other hand, both GCC and LLVM have developed a significant degree of immunity to our tests.  The co-evolution of the testing tool and the tested systems has been interesting to watch and I hope to write more about this at some point.  In any case, at this point our tester is the bottleneck for reporting bugs in LLVM and GCC (for x86/x64 and for the most common compiler flags, at least).  For most other compilers we have looked at, the bottleneck is either in creating good bug reports — this is time consuming — or in the compiler team’s ability (and sometimes, unfortunately, willingness) to fix the bugs that we report.

Our (generally) uncomplaining and silent partners in this effort are the compiler developers who fix the bugs that we report.  I often watch the progress of each bug report as it is confirmed, discussed, and eventually fixed.  Over the course of this project I’ve become very impressed with the kind of hacking talent the LLVM and GCC projects have attracted.  Hopefully their effort, and ours, is well-spent and in the long run we will end up with several highly reliable open-source compilers.

Finally, I should add that we are exceptionally grateful to DARPA for funding this effort.