The Citation Telephone Game


My kids often come home from school spouting crazy “facts” they’ve learned from classmates. It seems fundamentally human to repeat stories and, in the repeating, alter them—often unintentionally. Researchers do the same thing, and just this morning I was irritated to read an entirely inaccurate citation of one of my own papers. No doubt others have had similar feelings while reading my work.

The Leprechauns of Software Engineering, Laurent Bossavit’s in-progress ebook (or e-screed, perhaps), contains wonderfully detailed examples of how some well-known facts in the software engineering field, such as “bugs get more expensive to fix as time passes,” have pretty dubious origins. The pattern is that we start out with an original source that makes certain claims, hopefully based on empirical evidence. Subsequent papers, however, tend to drop details or qualifications, using citations to support claims that, over time, diverge more and more from those in the original paper. In science, these details and qualifications matter: just because a fact is true under certain circumstances does not mean that it generalizes. Worse, the fact may not even be true in its original form due to statistical issues, flaws in experiment design, and the like. Complicating matters further, Bossavit seems to be finding cases where the slant introduced during citation is self-serving.

One story in Leprechauns made me laugh out loud: Bossavit was lecturing his class on a particular piece of well-known software engineering lore and realized halfway through that he wasn’t sure if or why what he was saying was true, or if he was making sense at all. Something similar has happened to me many times.

Although Leprechauns takes all of its examples from the software engineering field, I have no doubt that something similar could be written about any research area where empirical results are important. Bossavit’s overall message is that the standards for science need to be set higher. In particular, authors must read and understand a paper before citing it. Of course this should be done, but it’s not a total solution to the telephone game. As I think I’ve pointed out here before, the actual contribution of a research paper is often different from the claimed contribution. Or, to put it another way, we first need to understand what the authors intended to say (often this is not easy) and then we also need to understand what was left unsaid. A subtle reading of a paper may require a lot of background work, including reading books and papers that were not cited in the original.


8 responses to “The Citation Telephone Game”

  1. I have not read Leprechauns yet, so I am just going off your article, which sort of makes anything I say at this point fairly meaningless.

    Does the burden of unravelling the context fall exclusively to the current author doing the citation?

    While that would be nice, it does not seem very practical. We should expect a higher standard for research papers than for blog comments, but I assert that’s an issue of degree, not of kind. At some point there has to be trust that somebody is actually saying what they meant, right?

  2. Hi Nathan, you’re right, there’s no way people can be responsible for verifying results they cite. I think the basic issue here is that (for many reasons) people like to take complex stories, or collections of facts that do not amount to a story, and turn them into simple stories. This comes to mind:

    http://xkcd.com/904/

  3. Regarding people explaining your work in ways that you feel are not faithful to it: the natural idea is “well, you should comment on that to set the record straight¹”. While you can do this on a blog post (or a mailing list), it is awkward to do when the explanation is the “related work” section of a paper. Where do you publicly react to a paper? It often happens that the original author is one of the reviewers, and can comment on this privately during the review phase. But this is a situation where, in my opinion, private debate alone is less useful to the research community than public debate².

    I think we need a platform that leaves more room for comments than conference proceedings do. Blogs may be one of the answers. I think that in the long term research may shift from papers to some other form of reference presentation (one that may be more blog-like, not printed in dead-tree form by default and, please, legal to freely distribute and share). In the short term, we can use blogs (or reddit or other web discussion platforms) to react to articles and say “I’m not sure about the presentation of my own work in this article, for this and this reason”, as long as the authors participate in those new platforms.

    ¹: or at least give your opinion. I suppose in some cases the researchers have a point about their reading of a paper, even if the author disagrees with it. In any case, we want the contestable comment and the author’s answer to be close to each other.

    ²: there is still a place for private debate before things are expressed in public, or as a way to smooth things over if the discussion becomes less civil. But while it is considered good practice to email people privately to tell them “hey, I cite your work in my new paper, please check that you’re happy with it”, it would be disastrous for research articles if the “related work” section were only sent privately to the concerned authors and the reviewers. Similarly, I think there would be a better place for non-private (or post-private) “reviews”.

  4. While it might be less common, it still happens fairly often in theory that people mis-cite results, let folklore claims about algorithms persist, or even repeat incorrect attributions. There are famous examples of theorems being attributed to the wrong person.

  5. Jeffrey, thanks. I think xkcd is nearing the Far Side in its near-universal applicability to everyday situations.

    Hi gasche, I agree about the need for public forums to debate papers. My understanding is that although lots of people want this, there aren’t yet (even close to) any winners emerging for CS. Something like arXiv + forums would seem to be a good start.

    Thanks for the link, Derek; I had not seen this.

    Hi Suresh, I agree and have seen similar things in theory-oriented papers. I guess maybe the way to think about it is that empirical results (in particular, those derived from human studies) might add a bit of extra potential for misinterpretation or misrepresentation.