Straight Man

Hank Devereaux, chair of the dysfunctional English department at a small university, is having a midlife crisis.  His wife, leaving town, fears he’ll be either in jail or the hospital before she returns — and she is not disappointed.  Straight Man is hilarious; I had to stop reading it in bed because it was too hard to giggle quietly enough to keep from waking Sarah up.  I don’t know anything about Russo, but it is clear that he has spent time in academia: his portrayal of the odd personalities and petty politics is too perfect to be guesswork.  This is one of the best books I’ve read in years, and I highly recommend it.

Plagiarism

A few years ago, I ran across an entire academic paper that had been plagiarized.  I was gratified when alerting the original author started a chain of events that eventually led to the pirated paper being retracted.  Another time, I was reviewing a paper and said to myself something like “that paragraph was really well written.”  Then I realized why I liked it: I had written the paragraph myself in an earlier paper.  Needless to say, this paper was rejected with extreme prejudice.  As these anecdotes indicate, as a researcher I’ve found plagiarism to be pretty easy to deal with.  Since every conference proceedings or journal has a well-known “owner” who is loosely responsible for the integrity of its contents, it suffices to let this person know about the potential problem.

Web-based plagiarism is more difficult.  For example, in an article about the C language’s volatile qualifier, Nigel Jones used a nice turn of phrase when he said that declaring everything as volatile is a bad idea “since it is essentially a substitute for thought.”  A few weeks ago, while looking for Nigel’s article, I did a Google search on this phrase and was surprised by how many matches turned up.  Clicking through the links, I found that most of them do not acknowledge Nigel as the original author; rather, the people who posted the material appear to be taking credit for it.  Since Nigel’s article is quite good, finding one or two copies of it wouldn’t be surprising, but ten copies does seem a bit much.  What can we do about web-based plagiarism?  When the plagiarized content lives on a third-party site (e.g. Wikipedia) it may help to complain.  When it lives on a site hosted by an individual, it is probably very difficult to deal with.

My worst experiences with plagiarism have been as an instructor.  The problem is that prosecuting instances of plagiarism, even when it is done properly, is time-consuming and very stressful.  It’s not that institutional support is totally lacking — the U’s Student Code outlines a reasonable implementation of academic sanctions — but rather that the actual process of telling students they’re going to fail a class because they cheated is hard.  One possible reaction is a tearful confession followed by pleas for mercy (and sometimes by pleas from parents and other relatives, especially if the failing grade delays the student’s graduation).  Another is the student who denies everything, which makes life difficult because it becomes a bit of a he-said/she-said game.  Fortunately, I’ve never been involved with a case where lawyers came into the picture, but I’ve heard this can happen.  In the end, the simplest course of action for overworked instructors is to not try very hard to find instances of plagiarism by students.  My guess is that this is the implicit choice made by most teachers because, unfortunately, it seems that if one looks for plagiarism very hard, one tends to find it.

Margin in Software Systems

Margin of safety is a fundamental engineering concept where a system is built to tolerate loads exceeding the maximum expected load by some factor.  For example, structural elements of buildings typically have a margin of safety of 100%: they can withstand twice the expected maximum load.  Pressure vessels have more margin, in the range 250%-300%, whereas the margin for airplane landing gear may be only 25%.  (All these examples are from the Wikipedia article.)

We can say that a software system has a margin of safety S with respect to some external load or threat L only when the expected maximum load Lmax can be quantified and the system can be shown to function properly when subjected to a load of (1+S)Lmax.  Software systems are notoriously low on margin: a single flaw will often compromise the entire system.  For example, a buffer overflow vulnerability in a networked application can permit an attacker to run arbitrary code at the same privilege level as the application, subverting its function and providing a vector of attack to the rest of the system.
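
As a quick illustration of the definition (a minimal sketch; the function name and the example numbers are made up, not drawn from any particular system):

/* Margin of safety S implied by a demonstrated tolerable load and the
   expected maximum load, i.e. tolerated = (1 + S) * Lmax. */
double margin_of_safety(double tolerated_load, double expected_max_load)
{
  return tolerated_load / expected_max_load - 1.0;
}

For example, a server shown to keep working at 1,500 requests per second, with an expected peak of 1,000 requests per second, has S = 0.5, or 50% margin.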

Software is often defined to be correct when, for every input, it produces the correct output.  Margin is an orthogonal concern.  For example, there exist systems that are formally verified to be correct, such as CompCert and seL4, that have no margin at all with respect to flaws not covered by the proof — a bug or trojan in the assembler, linker, or loader invalidates the safety argument of either system.  Similarly, there exist systems that are obviously not correct but that have considerable margin.  For example, my Windows laptop has enough RAM that it can tolerate memory leak bugs for quite a while before finally becoming unusable.

There are other examples of margin in software systems:

  • Many classes of real-time systems have a characteristic utilization bound: a fraction of total available CPU time that, if not exceeded, permits all sub-computations to meet their time constraints.  Real safety-critical systems are usually designed to use less CPU time than their theoretical utilization bounds, providing margin against spurious interrupts or other unforeseen demands on the processor (see the sketch after this list).
  • Forward error correction provides margin against data corruption.
  • n-version programming and replication provide margin respectively against software and hardware defects.
  • It is common to pick a cryptographic key larger than the smallest currently-unbreakable key, providing some margin against future improvements in processing power and cryptanalysis.
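
To make the first bullet concrete, here is a minimal sketch of a utilization-bound margin check, assuming rate-monotonic scheduling of independent periodic tasks (the task count and utilization figure are hypothetical):

#include <math.h>
#include <stdio.h>

/* Liu-Layland bound for rate-monotonic scheduling: n independent periodic
   tasks all meet their deadlines if total CPU utilization does not exceed
   n * (2^(1/n) - 1). */
static double rm_bound(int n)
{
  return n * (pow(2.0, 1.0 / n) - 1.0);
}

int main(void)
{
  int tasks = 4;                  /* hypothetical task count */
  double used = 0.45;             /* hypothetical measured utilization */
  double bound = rm_bound(tasks); /* about 0.757 for four tasks */
  printf("bound = %.3f, headroom = %.1f%% of current load\n",
         bound, 100.0 * (bound - used) / used);
  return 0;
}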

The piece that is missing, as far as I can tell, is a collection of broader results about how margin in software systems relates to overall system reliability, and how useful kinds of software margin can be obtained at acceptable cost.

What are some general ways to gain margin?  Overprovisioning a resource, as in the examples above, is very common.  Defense in depth is also important: many software systems have only two lines of defense against attack: safety checks at the application level, and safety checks in the OS kernel.  If both of these defenses fall — as is common — the entire system has been subverted.  Virtual machines, safe language runtimes, and similar mechanisms can add layers of defense, as can firewalls and other external barriers.

Something I want to see is “margin-aware formal methods”: that is, ways to reason about how software systems behave when subjected to loads or attacks beyond what they were designed for.  The result, ideally, would be a body of knowledge analogous to the well-understood principles of safety engineering.  Some examples already exist:

  • Symbolic robustness analysis is an early proof-of-concept technique for showing that small perturbations in the input to a control system result in only small changes to the control signal
  • The critical scaling factor in scheduling theory is the largest degree of slowdown computations can incur before deadlines start being missed
  • Byzantine fault tolerance is a body of theory showing that a distributed system can produce correct results if fewer than a third of its nodes are compromised
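
The Byzantine bound in the last bullet translates directly into a margin calculation; here is a minimal sketch (not tied to any particular protocol):

/* Classical Byzantine fault tolerance requires n >= 3f + 1 replicas to
   tolerate f arbitrarily faulty nodes, so n replicas have margin for
   floor((n - 1) / 3) compromised nodes. */
int bft_faults_tolerated(int replicas)
{
  return (replicas - 1) / 3;  /* e.g. 7 replicas tolerate 2 faulty nodes */
}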

An important obstacle to margin in software systems is the high degree of coupling between components.  Coupling can be obvious, as when multiple activities run in the same address space, or it can be insidiously indirect, including common failure modes in independent implementations, as Knight and Leveson observed in 1986.  There are many other kinds of coupling, including reliance on multiple copies of a flawed library, a single password used across multiple systems, etc.  It can be very hard to rule out all possible forms of coupling between components — for example, even if we use Byzantine fault tolerance and n-version programming, we can still be compromised if all chips come from the same (faulty or malicious) fab.

In summary: engineered physical systems almost always have margin with respect to failures.  Too often, software does not.  This should be fixed.  I want my OS and web browser to each come with a statement such as “We estimate that no more than 25 exploitable buffer overflows remain in this product; therefore, we have designed it to be secure in the presence of up to 70 such problems.”

Picking a Research Topic in Computer Systems

This post is a collection of observations and advice for people who want to choose a research topic in computer systems.  I’m not claiming to be some kind of genius in this area, but I have enough ideas that they seemed worth writing down. This advice is probably most useful for graduate students in CS, but may also be helpful for junior profs and for undergrads interested in doing research.

Picking a research area

By “research area” I mean a sub-area of computer systems, such as object-oriented operating systems, distributed hash tables, or whatever.  This often ends up being a pretty pragmatic decision, and it is understood that a good researcher can work concurrently in multiple areas or change areas as often as every few years.

The easiest way is to choose an area that a lot of other people are working on.  I’m quite serious, and this method has some important advantages.  First, all those people at MIT and Berkeley aren’t stupid; if they’re working on ultra-reliable byzantine transcoders, there’s probably something to it.  Second, if you work on the same thing others are working on, the odds are good they’ll be interested in your work and will be more willing to fund and publish it.  The potentially serious drawback is that you’ll likely miss the initial, exciting phase of research where lots of new things are being discovered and there’s not yet a lot of commonly accepted wisdom about what the best solutions are.  In the worst case, you’ll miss the entire main period of interest and arrive late when the best people have moved on to other topics.

The research area that you choose should have a chance of making a difference.  Systems research tends to be applied, and a great deal of it is engineering rather than science.  The focus, in most cases, is making something work better for someone.  The more people benefit, the better.

Fundamentally, the area you choose needs to be one that you have some natural affinity for.  You should enjoy working on it and your intuition should strongly suggest that all is not right: there is work that needs to be done.

If you’re a PhD student seeking a research assistantship, it would be practical to choose a research area that your advisor has a grant to work on.

Departing from the accepted wisdom

Every good research idea is a departure from the accepted wisdom, but it’s important to depart at the right level.  Consider these extremes:

  1. You reject the notion of binary computers.  Instead, ternary computation will sidestep most of the fundamental problems faced by computer science.  Everything from instruction sets to complexity theory must be thrown out and rebuilt from scratch.
  2. You reject the notion of the semicolon as a statement terminator in programming languages.  Instead, the dollar sign should be used.  A revolution in software productivity will ensue.

The first idea diverges from the status quo in a fundamental way, but it will be difficult to get buy-in from people.  The second departure is too minute to matter, and nobody will care even if you do a big user study showing that the dollar sign is better with p<0.05.  In both cases, the research idea does not feel like one that will change the world.  In contrast, some great examples of departing from the conventional wisdom in the right way can be found in David Patterson’s work: see RISC and RAID.

Focusing a skeptical personality

If the point is to challenge the accepted wisdom in some fashion, you can’t very well go believing everything people tell you.  Computer systems is not exactly a rigorous area of science and you will hear all manner of ridiculous explanations for things.

Good systems problems, in my experience, come from noticing something wrong, some discrepancy between how the world works and how it should work.  However, it doesn’t appear possible to do good work based on someone else’s observations.  For example, you can tell me that parallel programming is hard or software transactional memory is too slow, but the fact is that if I haven’t seen the problems for myself, if long and bitter experience hasn’t trained my intuition with a thousand little facts about what works and what does not, then I’m probably going to come up with an irrelevant or otherwise bad solution to the problem.

How does this work in practice?  You go about your business building kernels or web servers or whatever, and along the way there are a hundred irritations.  Most of them are incidental, but some are interesting enough that you start pulling at the thread.  Most of these threads die quickly when you discover that the problem was adequately solved elsewhere, is boring, or is too difficult to solve at present.  Every now and then it becomes clear that there’s a real problem to be solved, that nobody else is attacking it (at least in the right way), and that it would be fun to work on.  These are your potential research projects.  There are probably other ways to find research ideas in computer systems, but this is the only one I know of.

Here’s an anecdote.  As an instructor of embedded systems courses I’d long been annoyed by buggy cross compilers for microcontrollers.  Students who are struggling to write correct code do not need this extra level of hassle.  Finally, one day I was showing a small snippet of assembly code in lecture and a student noticed that it was wrong.  I assumed that it was a cut-and-paste error, but it turned out the compiler was mistranslating a 2-line function.  This seemed beyond the pale, so I wrote some more test cases and tested some more compilers and kept finding more and more bugs.  Even at this point, after I’d spent probably 40 hours on the problem, it was not at all clear that I was moving towards any kind of research result.  It was only after hundreds of additional hours of work that it became obvious the project had legs: everyone knows that embedded compilers contain bugs, but probably most people would be surprised that we were able to find (so far) 200 compiler bugs including major wrong-code bugs in every C compiler we’ve tested.  So in this case, the surprising result is quantitative.

A few more observations

  • The best research problems are often those that are not yet of major industrial interest, but that will be addressed by billion-dollar industries in 10-20 years.  Once a problem becomes the focus of intense industrial interest, it becomes a difficult target to attack from academia.  At the very least, you need a seriously new angle.
  • Don’t get into a research area late in its life cycle.
  • Thomas Edison said it’s 1% inspiration, 99% perspiration.  In computer systems it’s more like 99.9% perspiration: if you’re not careful you can build things for years without getting any research results.
  • It’s really not possible to figure out in advance which promising ideas are going to pan out and which ones are distractions.  Keep a number of different ideas in mind and work in directions that cut off as few options as possible.
  • Ignore sunk costs.  Always be ready to drop an idea, no matter how proud of it you are.
  • If you become aware of someone credible who is working on your exact idea, drop it.  First, at this point you have a 50% chance of winning the race to publication.  Second, duplicated work wastes time and tax dollars.  Life’s too short.
  • Often, problems and obstacles that seem insurmountable at first can be flipped around and turned into interesting features of your work or even advantages.
  • Periodic re-examination of assumptions is useful.  A recent example I like is Google Native Client.  Most efforts to isolate untrusted binary code just take whatever the compiler outputs.  The Google project gets good mileage by hacking the compiler so that the code doesn’t contain so many stupid things.  It’s a good idea — who said the compiler was set in stone?
  • If your project seems boring, think about dropping it.  If you’re not excited why would anyone else be?
  • Writing a couple of paragraphs about an idea has the effect of revealing bad ideas for what they are, and making good ideas better.  It’s almost comical how many of my ideas look silly when written down.  It’s definitely possible to read too much, but it’s probably impossible to write too much.
  • Smart people love to solve problems, regardless of their relevance.  Avoid the trap where you improve a result far beyond the point of diminishing returns.
  • Code tweaking is seductive; if you do it, consider it time spent on a hobby, not time spent doing research.
  • Once you’re invested in a research area, it’s tempting to stay there since you’re on top of the learning curve.  This can turn into a trap.  When it’s time to move on, just do it.  Many of the best researchers change areas several times during their careers.
  • Having a grand vision up-front is OK as long as it’s somewhat vague and doesn’t prevent you from seeing the actual situation.
  • Computer systems speak to you, in their own way.  Always listen to what they’re telling you.

Summary

Picking a good research problem is at least half the battle, especially for PhD students.  It’s worth studying why some ideas and approaches are great while others are boring.

The Compiler Doesn’t Care About Your Intent

A misunderstanding that I sometimes run into when teaching programming is that the compiler can and should guess what the programmer means.  This isn’t usually quite what people say, but it’s what they’re thinking.  A great example appeared in a message sent to the avr-gcc mailing list.  The poster had upgraded his version of GCC, causing a delay loop that previously worked to be completely optimized away.  (Here’s the original message in the thread and the message I’m quoting from.)  The poster said:

… the delay loop is just an example, of how simple, intuitive code can throw the compiler into a tizzy. I’ve used SDCC(for mcs51) where the compiler ‘recognises’ code patterns, and says “Oh, I know what this is – it’s a delay loop! – Let it pass.”(for example).

Another example comes from the pre-ANSI history of C, before the volatile qualifier existed.  The compiler would attempt to guess whether the programmer was accessing a hardware register, in order to avoid optimizing away the critical accesses.  I’m sure there are more examples out there (if you know of a good one, please post a comment or mail me).

The problem with this kind of thinking is that the correctness of code now depends on the compiler correctly guessing what the programmer means.  Since the heuristics used to guess aren’t specified or documented, they are free to change across compilers and compiler versions.  In the short term, these hacks are attractive, but in the long run they’re little disasters waiting to happen.
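
To see the issue concretely, consider a busy-wait delay.  The sketch below is not the code from the mailing-list thread, but it shows the general shape of the problem and one standard-sanctioned fix:

void delay_naive(void)
{
  unsigned long i;
  /* The loop has no observable effect, so an optimizing compiler is
     entitled to delete it entirely, and newer compilers often do. */
  for (i = 0; i < 100000UL; i++) { }
}

void delay_volatile(void)
{
  volatile unsigned long i;
  /* Declaring the counter volatile forces every access to i to be
     performed, so the loop, and hence the delay, survives optimization. */
  for (i = 0; i < 100000UL; i++) { }
}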

One of the great innovations of the past 50 years of programming language research was to separate the semantics of a language from its implementation.  This permits the correctness of an implementation to be judged solely by how closely it conforms to the standard, and it also permits programs to be reasoned about as mathematical objects.  C is not the language best suited to mathematical reasoning, but there are some excellent research projects that do exactly this.  For example, Michael Norrish’s PhD thesis formalized C in the HOL theorem prover, and Xavier Leroy’s CompCert compiler provably preserves the meaning of a C program as it is translated into PPC or ARM assembly.

Of course, the intent of the programmer does matter sometimes.  First, a well-designed programming language takes programmer intent into account and makes programs mean what they look like they mean.  Second, intent is important for readability and maintainability.  In other words, there are usually many ways to accomplish a given task, and good programmers choose one that permits subsequent readers of the code to easily grasp what is being done, and why.  But the compiler does not, and should not, care about intent.

C and C++ Make It Hard to Read a Register for Its Side Effects

[ This post was co-written with Nigel Jones, who maintains Stack Overflow, an excellent embedded blog.  Nigel and I share an interest in the pitfalls of volatile in embedded C/C++, and this post resulted from an email discussion we had.  Since we both have blogs, we decided to post it in both places.  However, since comments are not enabled here, all discussion should take place at Nigel’s post. ]

Once in a while one finds oneself having to read a device register without needing or caring what the value of the register is. A typical scenario is as follows. You have written some sort of asynchronous communications driver. The driver is set up to generate an interrupt upon receipt of a character. In the ISR, the code first examines a status register to see if the character was received correctly (e.g. no framing, parity, or overrun errors). If an error has occurred, what should the code do? Well, in just about every system we have worked on, it is necessary to read the register that contains the received character — even though the character is useless. If you don’t perform the read, then you will almost certainly get an overrun error on the next character. Thus you find yourself in the position of having to read a register even though its value is useless. The question then becomes: how does one do this in C? In the following examples, assume that SBUF is the register holding the data to be discarded and that SBUF is understood to be volatile. The exact semantics of the declaration of SBUF vary from compiler to compiler.
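
To make the scenario concrete, here is a rough sketch of such an ISR; the status register and error-flag names are hypothetical, and the interesting part is the error path:

void uart_rx_isr(void)
{
  if (SSTAT & (FRAME_ERR | PARITY_ERR | OVERRUN_ERR)) {
    /* A bad character: SBUF must still be read, and the value thrown
       away, or the next character will cause an overrun error.  How
       best to express that read is the question addressed below. */
  } else {
    buffer_char(SBUF);  /* hypothetical helper that stores the character */
  }
}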

If you are programming in C and if your compiler correctly supports the volatile qualifier, then this simple code suffices:

void cload_reg1 (void)
{
  SBUF;
}

This certainly looks a little strange, but it is completely legal C and should generate the requisite read, and nothing more. For example, at the -Os optimization level, the MSP430 port of GCC gives this code:

cload_reg1:
   mov &SBUF, r15
   ret

Unfortunately, there are two practical problems with this C code. First, quite a few C compilers incorrectly translate this code, although the C standard gives it an unambiguous meaning. We tested the code on a variety of general-purpose and embedded compilers, and present the results below. These results are a little depressing.

The second problem is even scarier. The problem is that the C++ standard is not 100% clear about what the code above means. On one hand, the standard says this:

In general, the semantics of volatile are intended to be the same in C++ as they are in C.

A number of C++ compilers, including GCC and LLVM, generate the same code for cload_reg1() when compiling in C++ mode as they do in C mode. On the other hand, several high-quality C++ compilers, such as those from ARM, Intel, and IAR, turn the function cload_reg1() into object code that does nothing. We discussed this issue with people from the compiler groups at Intel and IAR, and both gave essentially the same response. Here we quote (with permission) from the Intel folks:

The operation that turns into a load instruction in the executable code is what the C++ standard calls the lvalue-to-rvalue conversion; it converts an lvalue (which identifies an object, which resides in memory and has an address) into an rvalue (or just value; something whose address can’t be taken and can be in a register). The C++ standard is very clear and explicit about where the lvalue-to-rvalue conversion happens. Basically, it happens for most operands of most operators – but of course not for the left operand of assignment, or the operand of unary ampersand, for example. The top-level expression of an expression statement, which is of course not the operand of any operator, is not a context where the lvalue-to-rvalue conversion happens.

In the C standard, the situation is somewhat different. The C standard has a list of the contexts where the lvalue-to-rvalue conversion doesn’t happen, and that list doesn’t include appearing as the expression in an expression-statement.

So we’re doing exactly what the various standards say to do. It’s not a matter of the C++ standard allowing the volatile reference to be optimized away; in C++, the standard requires that it not happen in the first place.

We think the last sentence sums it up beautifully. How many readers were aware that the semantics of the volatile qualifier are significantly different between C and C++? The additional implication is that the compilers that do emit the load when compiling cload_reg1() as C++ (below, GCC and LLVM) are technically in error.

We asked about this on the GCC mailing list and received only one response, which was basically “Why should we change the semantics, since this will break working code?” This is a fair point. Frankly speaking, the semantics of volatile in C are a bit of a mess, and C++ makes the situation much worse by permitting reasonable people to interpret it in two totally different ways.

Experimental Results

To test C and C++ compilers, we compiled the following two functions to object code at a reasonably high level of optimization:

extern volatile unsigned char foo;
void cload_reg1 (void)
{
  foo;
}
void cload_reg2 (void)
{
  volatile unsigned char sink;
  sink = foo;
}

For embedded compilers that have built-in support for accessing hardware registers, we tested two additional functions, where, as above, SBUF is understood to be a hardware register defined by the semantics of the compiler under test:

void cload_reg3 (void)
{
  SBUF;
}

void cload_reg4 (void)
{
  volatile unsigned char sink;
  sink = SBUF;
}

The results were as follows.

GCC

We tested version 4.4.1, hosted on x86 Linux and also targeting x86 Linux, using optimization level -Os. The C compiler loads from foo in both cload_reg1() and cload_reg2(). No warnings are generated. The C++ compiler shows the same behavior as the C compiler.

Intel Compiler

We tested icc version 11.1, hosted on x86 Linux and also targeting x86 Linux, using optimization level -Os. The C compiler emits code loading from foo for both cload_reg1() and cload_reg2(), without giving any warnings. The C++ compiler emits a warning “expression has no effect” for cload_reg1() and this function does not load from foo. cload_reg2() does load from foo and gives no warnings.

Sun Compiler

We tested suncc version 5.10, hosted on x86 Linux and also targeting x86 Linux, using optimization level -O. The C compiler does not load from foo in cload_reg1(), nor does it emit any warning. It does load from foo in cload_reg2(). The C++ compiler has the same behavior as the C compiler.

x86-Open64

We tested opencc version 4.2.3, hosted on x86 Linux and also targeting x86 Linux, using optimization level -Os. The C compiler does not load from foo in cload_reg1(), nor does it emit any warning. It does load from foo in cload_reg2(). The C++ compiler has the same behavior as the C compiler.

LLVM / Clang

We tested subversion rev 98508, which is between versions 2.6 and 2.7, hosted on x86 Linux and also targeting x86 Linux, using optimization level -Os. The C compiler loads from foo in both cload_reg1() and cload_reg2(). A warning about the unused value is generated for cload_reg1(). The C++ compiler shows the same behavior as the C compiler.

CrossWorks for MSP430

We tested version 2.0.8.2009062500.4974, hosted on x86 Linux, using optimization level -O. This compiler supports only C. foo was not loaded in cload_reg1(), but it was loaded in cload_reg2().

IAR for AVR

We tested version 5.30.6.50191, hosted on Windows XP, using maximum speed optimization. The C compiler performed the load in all four cases. The C++ compiler did not perform the load for cload_reg1() or cload_reg3(), but did for cload_reg2() and cload_reg4().

Keil 8051

We tested version 8.01, hosted on Windows XP, using optimization level 8, configured to favor speed. The Keil compiler failed to generate the required load in cload_reg1() (but did at least give a warning), yet performed the load in all other cases, including cload_reg3(), suggesting that the Keil compiler treats its I/O register (SFR) semantics differently from its volatile variable semantics.

HI-TECH for PIC16

We tested version 9.70, hosted on Windows XP, using global optimization level 9, configured to favor speed. This was very interesting in that the results were almost a mirror image of the Keil compiler’s: the load was performed in all cases except cload_reg3(). Thus the HI-TECH semantics for I/O registers and volatile variables also appear to differ, just the opposite of Keil! No warnings were generated by the HI-TECH compiler when it failed to generate code.

Microchip Compiler for PIC18

We tested version 3.35, hosted on Windows XP, using the full optimization level. This rounded out the group of embedded compilers quite nicely in that it didn’t perform the load in either cload_reg1() or cload_reg3(), but did in the rest. It also failed to warn about the statements having no effect. This was the worst-performing compiler of all those we tested.

Summary

The level of non-conformance among the C compilers, together with the genuine uncertainty as to what the C++ compilers should do, presents a real quandary. If you need the most efficient code possible, then you have no option other than to investigate what your compiler does. If you are looking for a generally reliable and portable solution, then the approach in cload_reg2() is probably your best bet. However, it would be just that: a bet. Naturally, we (and the other readers of this blog) would be very interested to hear what your compiler does. So if you have a few minutes, please run the sample code through your compiler and let us know the results.

Acknowledgments

We’d like to thank Hans Boehm at HP, Arch Robison at Intel, and the compiler groups at both Intel and IAR for their valuable feedback that helped us construct this post. Any mistakes are, of course, ours.

The Dreaded LPU

In academic publishing, the “least publishable unit,” or LPU, refers to the smallest contribution to scientific knowledge that can be successfully sneaked past the reviewers at a conference or journal.  Most often the term is used derisively, though I have sometimes heard it used as a modest compliment, indicating that a researcher has astutely determined what can or cannot be published and is extracting the maximum benefit from his or her hard work.

So far in 2010 I’ve participated in the program committee meetings for EuroSys and the USENIX Annual Technical Conference.  Both meetings featured a lot of smart people evaluating a lot of very good research in computer systems.  However, at both meetings I noticed something interesting: for a significant fraction of the strong submissions (those that were real candidates for acceptance), one or more reviewers had concerns that the paper was too incremental compared to previous papers by the same authors.  The thing that struck me was that the authors in question were, more often than not, up-and-coming young researchers at respectable institutions and with strong pedigrees.  Another thing that struck me is that I almost never noticed these problems myself.  Rather, I tend to take papers at face value, trusting that the material which looks new actually is new.  Therefore, at least three times at each of these two meetings, I found myself in the awkward position of supporting a paper that I had liked while others were attacking it as less-than-LPU.  This isn’t fun, and it looks like I’ll have to change the way I review by looking much more critically at authors’ previous work instead of just trusting them.  I believe the reason this problem showed up so strongly at USENIX ATC and EuroSys is that neither conference used double-blind submission, which is becoming very common elsewhere.  I’m pretty sure that double-blind reviewing makes it a lot easier to get an LPU through a program committee.

I am annoyed.  The program committee shouldn’t have to police people’s publication strategies; it’s hard enough simply to evaluate the quality of the work.  Obviously people are under tremendous pressure to publish, but surely this doesn’t legitimize the LPU as a tenure strategy.