Vigorous Public Debates in Academic Computer Science


The other day a non-CS friend remarked to me that since computer science is a quantitative, technical discipline, most issues probably have an obvious objective truth. Of course this is not at all the case, and it is not uncommon to find major disagreements even when all parties are apparently reasonable and acting in good faith. Sometimes these disagreements spill over into the public space.

The purpose of this post is to list a collection of public debates in academic computer science where there is genuine and heartfelt disagreement among intelligent and accomplished researchers. I sometimes assign these as reading in class: they are a valuable resource for a couple of reasons. First, they show an important part of science that often gets swept under the rug. Second, they put discussions out into the open where they are widely accessible. In contrast, I’ve heard of papers that are known to be worthless by all of the experts in the area, but only privately — and this private knowledge is of no help to outsiders who might be led astray by the bad research. For whatever reason (see this tweet by Brendan Dolan-Gavitt), the culture in CS does not seem to encourage retracting papers.

I’d like to fill any holes in this list, so please leave a comment if you know of a debate that I’ve left out!

Here are some more debates pointed out by readers:


26 responses to “Vigorous Public Debates in Academic Computer Science”

  1. A recent one from the security world: “Code Pointer Integrity” (OSDI ’14) [1] was attacked by “Missing the Point(er)” (Oakland ’15) [2]; the authors of the former thought the latter was a bit of a cheap shot*, and published “Getting The Point(er)” as a poster [3].

    Secretly, I assume they did it all so they could make those pointer jokes in the titles.

    * Note: I’m putting words in their mouths here for dramatic effect.

    [1] http://dslab.epfl.ch/pubs/cpi.pdf

    [2] https://people.csail.mit.edu/rinard/paper/oakland15.pdf

    [3] http://dslab.epfl.ch/pubs/cpi-getting-the-pointer.pdf

  2. “Program repair,” introduced with GenProg, takes off in 2011: https://qosbox.cs.virginia.edu/~weimer/p/weimer-tse2011-genprog-preprint.pdf

    In 2015, Rinard’s lab lists (somewhat shocking) methodological deficiencies in the whole area GenProg begat: http://people.csail.mit.edu/rinard/paper/issta15.pdf

    Another contemporaneous claim that the whole thing may have been broken all along: https://people.cs.umass.edu/~brun/pubs/pubs/Smith15fse.pdf

    I haven’t seen a rebuttal from Forrest, Weimer, et al., but I’d love to read one!
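
    To make the methodological worry concrete, here is a toy sketch (my own; the class and method names are made up for illustration, and nothing here is taken from GenProg or the papers above): when the test oracle is weak, a patch can be “plausible” in the sense of passing every test without actually being correct.

    ```java
    // Toy illustration only; not from any real repair system.
    class RepairToy {
        // Original, buggy code: crashes when n == 0.
        static int average(int sum, int n) {
            return sum / n;                  // ArithmeticException when n == 0
        }

        // Suppose the test suite is:
        //   (1) average(6, 3) == 2              (real oracle)
        //   (2) average(0, 0) "does not crash"  (weak oracle on the failing input)
        // A hypothetical generated patch that passes both tests:
        static int averagePatched(int sum, int n) {
            if (n == 0) return 0;            // silences the crash; is this the intended fix?
            return sum / n;
        }
        // Because test (2) accepts any non-crashing result, returning -1, sum, or
        // anything else here would be equally "plausible". Separating plausible
        // patches from correct ones is the kind of issue the ISSTA '15 paper presses on.
    }
    ```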

  3. Work on the Java Memory Model amounted to a decade-long ongoing public… discussion. Unlike many of the other debates on this page, the root cause wasn’t so much philosophical differences as the sheer (and often subtle) complexity of the problem being tackled.

    Steele et al. published the original JMM spec in ’96 as part of the Java Language Specification.

    Pugh notes, in 2000, that “The Java memory model described in Chapter 17 of the Java Language Specification […] is hard to interpret and poorly understood; […] Guy Steele (one of the authors of [GJS96]) was unaware that the memory model prohibited common compiler optimizations, but after several days of discussion at OOPSLA98 agrees that it does.”
    http://www.cs.tufts.edu/comp/150IPL/papers/pugh00java.pdf

    Manson, Pugh, and Adve published a revised JMM spec at POPL 2005. They write “the new model […] clearly defines the boundaries of legal transformations.”
    http://rsim.cs.uiuc.edu/Pubs/popl05.pdf

    In 2007, Cenciarelli et al. publish a paper with a counterexample to Theorem 1 from the POPL 2005 paper, demonstrating that reordering of independent statements is not permissible:
    http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.140.6162&rep=rep1&type=pdf

    Aspinall and Ševčík published a paper in 2007 with example programs highlighting “good, bad and ugly” behavior properties of the JMM and follow up in 2008 with another paper: “we find that commonly used optimisations, such as common subexpression elimination, can introduce new behaviours and so are invalid for Java.”
    http://groups.inf.ed.ac.uk/request/jmmexamples.pdf
    http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.112.1790&rep=rep1&type=pdf
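
    To give a flavor of what is at stake, here is a minimal litmus-test sketch (my own, not taken from any of the papers above; the class name is made up) of the kind of reordering question the JMM has to answer:

    ```java
    // Toy example in the style of the classic JMM litmus tests. Initially x == y == 0.
    class ReorderingLitmus {
        int x = 0, y = 0;   // plain (non-volatile) fields, so the code below is racy
        int r1, r2;

        void thread1() {
            r1 = x;         // independent of the write below, so a compiler or CPU
            y = 1;          // may effectively perform the write first
        }

        void thread2() {
            r2 = y;
            x = 1;
        }
        // If the two methods run concurrently, the JMM allows the outcome
        // r1 == 1 && r2 == 1, which no interleaving of the statements as written
        // can produce. Declaring x and y volatile rules it out. Exactly which
        // transformations may introduce such outcomes is what the papers above
        // argue over.
    }
    ```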

  4. This might be too broad/vague for your list here, but I find it intriguing that in programming languages the dynamic versus static typing debate has continued for decades. To me, the fact that such a disagreement has gone on for so long with smart people on both sides almost certainly means that neither approach Pareto-dominates the other. But there are smart people willing to argue that such dominance is in fact the case. Bob Harper’s blog has some interesting discussion threads on the topic.

  5. Not exactly the same thing, but there are a number of intersections of CS and public policy (particularly around security) that see a lot of active debate. However, the opinions there tend to align with whether the speaker is coming at it from CS or from public policy.

  6. While not exactly a debate — more of a fundamental difference in outlook — these are interesting, completely opposite claims:

    Bob Harper[1]:

    > There is an alternative… without… reference to an underlying machine… [W]e adopt a linguistic model of computation, rather than a machine model, and life gets better! There is a wider range of options for expressing algorithms, and we simplify the story of how algorithms are to be analyzed.

    Leslie Lamport[2]:

    > Thinking is not the ability to manipulate language; it’s the ability to manipulate concepts. Computer science should be about concepts, not languages. … State machines… provide a uniform way to describe computation with simple mathematics. The obsession with language is a strong obstacle to any attempt at unifying different parts of computer science.

    [1]: https://existentialtype.wordpress.com/2011/03/16/languages-and-machines/

    [2]: http://research.microsoft.com/en-us/um/people/lamport/pubs/state-machine.pdf
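
    One toy way to see the contrast (my own illustration, not from either author; the names are made up): the same function written so that it reads naturally as an expression to be evaluated versus as a state machine to be stepped.

    ```java
    // Illustrative sketch only: factorial in the two styles the quotes contrast.
    class TwoViews {
        // "Linguistic" view: the program's meaning and cost are explained by rules
        // for evaluating the expression itself.
        static long factorialRecursive(long n) {
            return n <= 1 ? 1 : n * factorialRecursive(n - 1);
        }

        // "Machine" view: the program is a state machine; each loop iteration is a
        // transition on the explicit state (i, acc).
        static long factorialMachine(long n) {
            long i = n, acc = 1;    // the state
            while (i > 1) {         // the transition relation
                acc *= i;
                i -= 1;
            }
            return acc;             // the answer, read off a final state
        }
    }
    ```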

  7. I think Moshe would not mind if I said that he was one to run, as quickly as is possible while maintaining dignity and decorum, to vigorous debate!

    Despite coming from the CMU Ed Clarke camp, I have no dog in this fight: the model checkers I’ve implemented have mostly supported LTL, but in practice I’ve almost always model checked systems for safety properties — in fact, just assertion violations.
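
    For concreteness, here is a minimal sketch (a toy of my own, not any real model checker; all names are made up) of what “checking a safety property is just looking for a reachable assertion violation” means: breadth-first search over the reachable states, stopping at the first one that violates the assertion.

    ```java
    import java.util.ArrayDeque;
    import java.util.HashSet;
    import java.util.List;
    import java.util.Set;

    // Toy explicit-state safety check over a deliberately tiny transition system.
    class SafetyCheck {
        record State(int counter) {}                      // the whole state: one counter

        static List<State> successors(State s) {          // two possible "moves"
            return List.of(new State((s.counter() + 1) % 5),
                           new State((s.counter() + 2) % 5));
        }

        static boolean assertionHolds(State s) {          // the safety property
            return s.counter() != 3;
        }

        public static void main(String[] args) {
            Set<State> seen = new HashSet<>();
            ArrayDeque<State> frontier = new ArrayDeque<>();
            State init = new State(0);
            seen.add(init);
            frontier.add(init);
            while (!frontier.isEmpty()) {
                State s = frontier.poll();
                if (!assertionHolds(s)) {
                    System.out.println("assertion violated in reachable state " + s);
                    return;
                }
                for (State t : successors(s)) {
                    if (seen.add(t)) frontier.add(t);     // explore unseen states
                }
            }
            System.out.println("assertion holds in every reachable state");
        }
    }
    ```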

  8. Re: GenProg/automatic program repair: speaking only for myself (not having consulted with my coauthors before posting), I go back and forth between thinking a specific response to the ISSTA ’15 paper in particular might be warranted, and thinking (as I do right now as I type) that the ongoing, vigorous, peer-reviewed, academic conversation between multiple groups and authors (not just Rinard and the original GenProg team) allows the science of patch generation to advance as all good science does: via replication, further investigation, and the advancement/proposal of both new techniques and methods to evaluate them (as in, for example, FSE ’15).

    (All hallmarks of a good and interesting scientific debate, in my opinion!)

  9. The debate around whether causally and totally ordered communication support (CATOCS) violates the end-to-end principle was a very public one. This timeless argument about the role of the network in distributed system design is continuously re-evaluated by the community.

    Understanding the limitations of causally and totally ordered communication
    http://dl.acm.org/citation.cfm?id=168623

    A response to Cheriton and Skeen’s criticism of causal and totally ordered communication
    http://dl.acm.org/citation.cfm?id=164858

    End-to-end arguments in system design
    http://dl.acm.org/citation.cfm?id=357402
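
    For readers who haven’t met CATOCS, here is a minimal sketch (my own, not from the papers above; class and field names are made up) of the standard causal-delivery rule, in the style of vector-clock protocols, that such a communication layer provides. Whether this guarantee belongs in the communication substrate or at the application endpoints is what the debate is about.

    ```java
    // Toy sketch of a causal-delivery check using vector clocks; illustration only.
    class CausalDelivery {
        final int n;            // number of processes
        final int[] delivered;  // delivered[k] = messages from process k delivered here so far

        CausalDelivery(int n) {
            this.n = n;
            this.delivered = new int[n];
        }

        // A message carries its sender and the sender's vector clock at send time.
        record Msg(int sender, int[] clock) {}

        // Deliverable iff it is the next message from its sender and everything the
        // sender had already seen has also been delivered here.
        boolean canDeliver(Msg m) {
            if (m.clock()[m.sender()] != delivered[m.sender()] + 1) return false;
            for (int k = 0; k < n; k++) {
                if (k != m.sender() && m.clock()[k] > delivered[k]) return false;
            }
            return true;
        }

        void deliver(Msg m) {
            delivered[m.sender()]++;   // buffered messages would be re-checked after this
        }
    }
    ```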

  10. “The other day a non-CS friend remarked to me that since computer science is a quantitative, technical discipline, most issues probably have an obvious objective truth.”

    This suddenly struck me as a somewhat bizarre statement in that a considerable chunk of even the academic CS literature is basically flamewars over which programming languages are good/bad/the-spawn-of-the-dark-one.

  11. (Not a bizarre statement for a non-CS person to make; it’s just rather funny in that the feeling that other people are horribly wrong, without a great way to prove it, has been a key part of CS since the early days).