Computer Science Culture Clash


It’s not uncommon for an empirical CS researcher to get a review saying something like “Sure, these results look good, but we need to reject the paper since the authors never proved anything about the worst case.” Similarly, when I interviewed for faculty jobs ten years ago, a moderately famous professor spent a while grilling me about the worst-case performance of a static analysis tool that I had written. This was, to me, an extremely uninteresting topic, but luckily there’s an easy answer for that particular class of tool. I recall noticing that he did not seem particularly interested in what the tool did, or whether it was actually useful.

Problems like these are yet another consequence of the deep and wide divide between the math and engineering sides of computer science. In caricature form, the two approaches work like this:

  • An engineer sees that characterizing the worst-case performance of a lawnmower is a fool’s errand, and focuses instead on work that will enable the creation of more effective and less expensive mowers.
  • A mathematician also sees that there’s no way to make any guarantees about the worst-case performance of a lawnmower, and responds by creating an abstract model of mowing that is easier to do proofs about: “Assume an infinite supply of gasoline, a frictionless engine, and a circular lawn of unit radius…”

The problem occurs when these people need to evaluate each other’s work. Both are likely to turn up their noses.

My own view is that guarantees are a wonderful thing. For example, the purpose of the static analysis tool that I was grilled about while interviewing was to make guarantees about embedded systems (just to be clear: I am very interested in guarantees about embedded systems, and wholly uninterested in guarantees about tools that analyze embedded systems). However, we need to keep in mind that guarantees (and their friends, impossibility results) are mathematical statements that abstract away a lot of real-world detail. Of course, abstraction is at the core of computer science and even the most hard-nosed engineers among us must abstract away details in order to accomplish anything. But some abstractions are good — they capture important details while dropping irrelevant clutter — whereas others drop something essential.

One of my favorite stories where the mathematical side of CS rescued the engineers comes from the early development of optimizing compilers. The basic idea behind optimization is to repeatedly replace part of the program with a faster piece of code that is functionally equivalent. The problem is that without a good definition of “functionally equivalent,” some optimizations are going to break some programs, and when this happens we have no good basis for determining whether the program or the compiler is at fault. As an example, we might ask if it is OK for an optimizing compiler to remove a delay loop from a program. Clearly this will break programs that rely on the delay. Anyhow, as I understand the history, throughout the 1950s and 1960s this basic ambiguity did not deter the engineers from creating some nice optimizing compilers, but there were persistent low-grade problems in figuring out which transformations were legal. Subsequently, people were able to develop theories — dataflow analysis and abstract interpretation — relating program semantics and compiler optimizations in such a way that it became possible to figure out whether any particular optimization is valid or not. These theories are concrete and pervasive enough that, for example, the string “lattice” appears about 250 times in the LLVM source code.
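
To make the delay loop example concrete, here is a minimal C sketch of my own (not taken from any particular compiler or codebase). The first function is the kind of busy-wait that old code relied on; because it has no observable effect, a modern compiler is entitled to delete it. The second shows how the resolution ultimately came from semantics: marking the counter volatile makes its accesses part of the program’s observable behavior, so the loop must be preserved.

    #include <stdint.h>

    /* A naive delay loop. It has no observable side effects, so under the
       C standard an optimizing compiler may treat it as equivalent to doing
       nothing at all and delete it, breaking code that relied on the delay. */
    void delay_naive(void) {
        for (uint32_t i = 0; i < 1000000; i++) {
            /* empty */
        }
    }

    /* Declaring the counter volatile makes every read and write of it an
       observable effect, so the compiler must keep the loop. The language
       semantics, not an informal notion of "equivalent," now decide whether
       the transformation is legal. */
    void delay_volatile(void) {
        for (volatile uint32_t i = 0; i < 1000000; i++) {
            /* empty */
        }
    }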

The point is that both the engineering and math sides can benefit from keeping the dialog open. Thus, I’ve tried to come up with a few recommendations for different situations we might find ourselves in.

When submitting papers:

  • Is there anything to worry about? If you’re an engineer submitting to SOSP, or a mathematician submitting to STOC, all reviews will be written by your own kind of people, so there’s no problem.
  • Engineers: Your first impression will likely be that the mathematical side has nothing to say about your work because you are solving real-world problems. This may be true or it may not be, but in any case don’t be ignorant. Read and understand and cite relevant work and describe how it relates to your research.
  • Mathematicians: The engineers want something that is, or could be, useful. If your work advances the state of the art in a way that could be transitioned into practice, say so and provide some evidence. In other words, help the reviewers understand why your work is of interest to people who are not mathematicians.
  • Make your paper’s contributions extremely clear, and moreover make it clear what kind of contributions they are. The second part is difficult and important. Then, regardless of how clear you have been, be ready for the engineers / mathematicians to fail to see the value in the work.
  • Be honest about your contributions. If you’re really an engineer, you’re probably on very shaky ground when proposing a fundamental new model of computation. If you’re really a mathematician, don’t start your paper with a page about making life better for users and then spend the remaining nine pages proving lemmas.
  • Know when to admit defeat. There are some communities that are simply not going to accept your papers reliably enough that you can make a home there. If you don’t want to change your style of work, you’ll need to take your business elsewhere. It’s their loss.

When reviewing papers:

  • Don’t dismiss the work out of hand just because it fails to attack any problem that you care about. Try to understand what the authors are trying to accomplish, and try to evaluate the paper on those terms. This is hard because they will often not state the goals or contributions in a way that makes sense or seems interesting to you.
  • Mathematicians: If your criticism is going to be something like “but you failed to prove a lower bound,” think carefully about whether the authors’ stated contributions would be strengthened by a lower bound, or rather if it is simply you who would like to work on the lower bound.
  • Engineers: If your criticism is going to be something like “you failed to evaluate your work on 10,000 machines,” think carefully about whether this makes sense to say. If the authors’ model has rough edges, can they be fixed or are they fundamentally at odds with the real world? Are there any insights in this paper that you or another engineer could use in your own work?
  • Mathematicians: If your criticism is going to be something like “this is a trivial application of known techniques,” be careful because the gap between knowing something and making it work is often wide. Also, it is a success, rather than a failure, if the engineers were able to readily adapt a result from the mathematical side. Of course, if the adaptation really is trivial then call them on it.
  • Engineers: If your criticism is going to be something like “this seems needlessly complicated” try to keep in mind that the penalty for complexity in math is less (or at least it affects us differently) than in engineering. Often the added complexity is used to get at a result that would be otherwise unattainable. Of course, if the complexity really is useless, call them on it.
  • If you are truly not qualified to review the paper, try to wiggle out of it. Often an editor or PC chair will be able to swap out the paper — but only if they are asked early enough. At the very least, clearly indicate that your review is low-confidence.

I hope that I’ve made it clear that close interaction between the mathematicians and engineers is an opportunity rather than a problem, provided that each side respects and pays attention to the other. I should add that I’ve made mistakes where a piece of theoretical CS research that I thought was useless turned out not to be. Since then I’ve tried to be a little more humble about this sort of thing.

In this piece I’ve sort of pretended that engineers and mathematicians are separate people. Of course this is not true: most of the best people can operate in either mode, at least to a limited extent.


15 responses to “Computer Science Culture Clash”

  1. You left out designers that can even befuddle the engineers, let alone the mathematicians. Check out Crista Lopes’s excellent post on PL design research if you haven’t already:

    http://tagide.com/blog/2012/03/research-in-programming-languages/

    Unfortunately, there are no conferences for designers in our field; somehow we have to sneak papers into technical conferences with some amount of theory and/or performance numbers to coax the PC into accepting interesting papers.

  2. I did my OS and compiler without reading any papers. I took a course on compilers and a course on operating systems. I worked at Ticketmaster on their VAX operating system. My boss wrote the PASCAL compiler. I took two x86 assembly courses, plus architecture, digital design, three courses in embedded design, graphics, and numerical methods. When I made my OS and compiler, I did it without consulting anything — there isn’t anything tricky.

  3. Hi Terry, indeed most practicing programmers do not seem to read research papers. There are many possible explanations, but surely part of it is that not enough CS papers contain useful information or results. In contrast, from what I can tell, many MDs do read the literature.

    Sean, yeah. Besides designers, engineers, and mathematicians there are managers, philosophers, scientists, and probably a few more. I know what you mean about sneaking papers into the technical conferences; I have had a lot of practice with that (and get some snarky reviews anyway).

  4. BTW, I just noticed that our live programming paper was put into the Types sessions at PLDI. Let’s hope the session chair goes easy on us 🙂

  5. I’m not sure practicing programmers not reading the literature is that unusual. My experience is that in other engineering fields, most non-academic, non-research engineers don’t read much in the literature, either.

    It might be cultural: do most medical papers actually contain much information that, say, a GP can put into use? Specialists are typically expected to be more like researchers, and I suspect the equivalent of “specialists” in CS do read papers fairly often, too, even if they aren’t doing “research” per se.

    On the other hand, engineers in all these fields constantly use tools and methods that derive, with a time delay, from the academic literature.

  6. This phenomenon is not limited to computer science. I had an instructor whose engineering PhD dissertation was rejected: he was able to resolve signals in high-noise environments with something like 1/10 of the computing overhead of the state of the art. The quibble was that he had successfully applied and elaborated on existing theory, even if no one had made it work before.

    So this is the engineering viewpoint vs. the academic viewpoint, even within engineering.

  7. John, what do you think of the rise of StrangeLoop as a forum for communicating cutting-edge ideas to a non-academic audience? I know Matthew went last year, and this year many academics I know have gotten their sessions accepted. It makes me wonder if there are deeper disconnects in our conference community, especially when trying to disseminate results to broader audiences.

  8. Man, I am just a lowly self-taught game programmer (in that, long long ago when I was but a lad, like so many others I felt called to create a great video game, then an OK video game, then decided it would probably be wise to start by learning what the hell I was doing), but I love reading research papers. I read everything that comes across my various channels, and I have noticed that by continuously reading about the things people are doing, I am now understanding and learning to apply a pretty sizable percentage of it. I really, really want to learn from all of what is available to me, if I can. I think that probably goes a long way.

  9. From the “practicing programmer” side of the fence working on an open source project, I sometimes get frustrated at the divergence in goals between the maintainers and some of the academic researchers doing things based on it. Essentially the primary goal of the research seems to be a paper, not working code, so often only enough of a prototype is produced to run some benchmarks for the results section, but nobody ever does the work of actually making it usable or acceptable for upstream. (One example I ran across was testing the improvement in performance from making something multithreaded; it apparently crashed about one time in 10 due to insufficient locking, but since you could just use a benchmark result from the other 9/10 runs this wasn’t a problem for getting the paper out…)

    Section 4 of this Columbia University tech report (on KVM/ARM) has some good advice for people hoping to get their research ideas widely adopted by open source communities:
    https://mice.cs.columbia.edu/getTechreport.php?techreportID=1535&format=pdf&
    (I’d be interested to know if there’s anything you’d add/disagree with based on your experience getting the integer sanitizer into clang trunk.)

  10. pm215, we academics, in order to survive, have to prototype, implement, measure, and write up systems fairly quickly… because, to be honest, most ideas aren’t that great (i.e., not great enough)! So the artifact of a researcher is the paper that explains and argues for an idea, not the shoddy prototype they used during the evaluation.

    What the PhDs at Google have been doing very well is productising a lot of research for real. On the other hand, there is an opportunity cost to this.

  11. Sean, I only know what Matthew told me about Strange Loop, but it sounds great. Doing work designed to impress a specific CS community is one of the main mistakes researchers make, and reaching out more broadly is extremely important. Of course that is one of the main reasons I have this blog: as a sanity check of my stuff against people who aren’t, e.g., PLDI reviewers.

    Dan, great! I hope you are finding some useful stuff in there.

    pm215, thanks for the link, I hadn’t seen this and will read it. Basically, as Sean indicates, maintaining code is hard in an environment where grants are usually 3 years and students leave just as soon as they become really useful. Also, the open source world is merciless with respect to code changes: you either maintain stuff vigilantly or else it’s worthless within a year. I’m not trying to excuse the problem you mention of course.

    And frankly there are worse problems with most papers, such as not composing with other things you might want to do, or giving good results only on a subset of benchmarks, only on small problems, or only in narrow situations that do not occur frequently in practice.

  12. Sean, John: yes, I certainly see why the academic goal/artifact is the paper; I guess I’m just grousing about the gap between the two communities. Not that on our side we exactly make it easy — as John says, there is a definite tendency toward code churn that invalidates non-upstreamed forks, and toward requiring code submitters to do all the work of getting code upstreamed.

  13. I do a lot of empirical research under all three categories Tichy gave (i.e., Design, Empirical, and Hypothesis testing; the first two correspond more closely to engineering and the latter to general science <— note that this doesn't mean the latter is More The Awesome). The two biggest problems I get are 1) inane and incorrect dismissal of the details (e.g., people lecturing me on sample sizes based on, well, nothing; recent review paraphrase: "Well this paper is clearly off topic because you don't explain what a Student's t-test is; I'm a typical member of the community and *I* don't know what it is; thus it's off topic" <– Yes, it was this incoherent) and 2) inane and incorrect dismissal of the work (recent review paraphrase: "Where's the contribution? All you did was an experiment!").

    It's going to be a long hard slog to change these attitudes. Compare Tichy et al. 1993 and Wainer et al.’s replication in 2005.

    Not hugely heartening.

  14. Hi Bijan, I’ve seen similar things in reviews. One reasonable response would be to post the offending review(s). If editors and program chairs cannot get their reviewers to do a good job, perhaps a bit of shaming can help. Of course I’ve been a PC chair and know how difficult it is to get people to do a good job, and of course I’ve written my share of weak reviews. But still.

  15. A few years ago I complained (twice) to the PC Chair (same paper, different conferences). For the first conference, we had one reviewer who claimed it was off topic. During rebuttal we pasted the part of the call for papers that made it clear that it was on topic. The reviewer raised their relevance score and then lowered everything else to get the *same* overall score (amongst other shenanigans).

    When I pointed this out (again) to the PC Chair, they claimed that the discussion was adequate in spite of these markers. (Oh! The other two reviewers lowered their relevance scores (and nothing else).) Sigh.

    The second PC Chair ended up overriding the reviewers and so the paper got presented (eventually).