What Peers?

[This post was motivated by, and includes things I learned from, a discussion with some of my blogging and non-blogging colleagues.]

Ideally the peer review process benefits not only the conference or journal that does the reviewing, but also the authors, who receive valuable feedback on their submission. And usually it does work like that: I’ve received a lot of great feedback on my work through paper reviews. However, part of the process isn’t working very well. On the reviewer side, I keep getting papers to review that are far enough from my areas of expertise that the reviews I write are superficial (over the weekend I reviewed a paper on mapping temperatures in small bodies of water using sonar — what am I supposed to say?). On the author side, at least 50% of the reviews I get are superficial in the same way: they don’t tell me anything new or useful.

This is confusing to me because I would have expected the massive proliferation of conferences, and of tracks within conferences, in computer science to facilitate good matching of reviewers to papers. Somehow this doesn’t seem to be happening. In contrast, when I need feedback on a paper I typically send it off to a few acquaintances and friends, and usually get tremendously useful comments. Also, when I send people feedback on their work I often get a reply along the lines of “thanks — that was way more helpful than the reviews we got from conference X.”

A few random guesses about what’s happening:

  • The peer review system is groaning under the overload of papers produced by all of us who are trying to get a higher h-index, trying to get a faculty position, trying to get tenure, etc.
  • Overworked program chairs are resorting to algorithmic methods for assigning papers to reviewers. This never, ever works as well as manual assignment, though it may be useful as a starting point (see the sketch after this list).
  • There may be an increasing amount of work that crosses community and sub-community boundaries, making it harder to effectively pigeonhole a paper. Conferences and journals do not keep up well as boundaries and interests shift.
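
To make the second guess concrete, here is a minimal sketch of the kind of keyword-overlap matching an automatic assignment tool might perform. Everything below is my own illustration (the function names, the scoring, and the stopword list are all hypothetical); real tools such as the Toronto Paper Matching System build much richer models from reviewers’ past publications, but the basic shape, and the failure mode, are similar.

```python
# Hypothetical sketch of naive paper-to-reviewer matching.
# Not what any particular conference actually runs.
from collections import Counter

STOPWORDS = {"a", "an", "and", "the", "of", "for", "in", "on", "using", "with"}

def profile(text):
    """Crude bag-of-words profile: lowercase, strip punctuation, drop stopwords."""
    words = (w.strip(".,;:()\"") for w in text.lower().split())
    return Counter(w for w in words if w and w not in STOPWORDS)

def affinity(paper_abstract, reviewer_bio):
    """Score a (paper, reviewer) pair by how much vocabulary they share."""
    p, r = profile(paper_abstract), profile(reviewer_bio)
    return sum(min(p[w], r[w]) for w in p)

def assign(papers, reviewers, per_paper=3):
    """Greedily give each paper its top-scoring reviewers, ignoring
    reviewer load and conflicts of interest entirely."""
    return {
        title: sorted(reviewers,
                      key=lambda name: affinity(abstract, reviewers[name]),
                      reverse=True)[:per_paper]
        for title, abstract in papers.items()
    }
```

A scheme like this produces exactly the mismatch in the sonar anecdote above: a reviewer whose profile happens to share generic words like “measurement” or “systems” with a paper’s abstract can outscore the actual domain expert, which is why the algorithmic output still needs a manual pass.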

The interesting question is: how can we use blogs, online journals, and other modern mechanisms to better match the people producing research with the people best able to evaluate it? Some of the obvious answers, such as adding a reviewing facility to arXiv, suffer from a gaming problem: I submit a paper and then get my ten best friends to submit reviews saying how great it is.

Update from Jan 18, evening: Because it’s so easy to throw stones, I wanted to add a couple of positive anecdotes:

  • A student and I submitted a paper to POPL 2011. Although it was rejected, the reviews were super useful and totaled almost 25 KB of text (a bit over 4000 words) after deleting all boilerplate. This is especially amazing considering that POPL received a lot of submissions, or so I heard.
  • I was recently invited to be on the “external review committee” for ISMM, the International Symposium on Memory Management. The invitation said “ISMM reviews are often on the order of one typewritten page,” which is again pretty amazing. I’ve submitted only one paper to ISMM, and the reviews (again after removing all boilerplate) came out to nearly 21 KB (not quite 3400 words).

ISMM is an interesting conference because it is of unusually high quality for the relatively small community it serves. I’m not sure how they have accomplished that, but my guess is that it’s a fairly tight-knit community where there are good incentives to provide people with useful feedback.

9 thoughts on “What Peers?”

  1. Your first guess is closest to what I think is going on. But there also seems to be a deeper problem. The incentive structure is completely ass-backwards: there’s no concrete reward for a good review. Rational actors simply spend their time elsewhere, typically writing more papers. And it compounds on itself: the moment you realize reviews are mostly shitty, you’re rationally inclined to fall back on maximizing your number of submissions.

    Your comment about using your personal network of friends and acquaintances to get effective reviews goes to the heart of this incentive structure thing. It is rational for me to give a good honest review of a friend’s paper, since I’m implicitly counting on him to give me a good honest review back. Such reciprocity is nowhere to be found in the current structure of peer review.

  2. Following the point Carlos made, resorting to friends may work for getting a “working review,” i.e., finding small flaws or parts that are hard to understand, but to get accepted you need to be reviewed by whomever the editor chooses… If that’s a bad choice, you have to cope with it (or switch journals). I have only one published paper, and luckily for me this first one came with a very good set of comments and things to add, which made the article slightly more interesting. OTOH, a friend of mine is still waiting for his reviews (after more than a year; mine took less than 5 months).

    Cheers,

    Ruben

  3. There are several alternatives to the typical peer-review system that are currently being used in other areas. For example, in the BMC journals, all pre-publication history is public. You can go to the paper webpage:

    http://www.biomedcentral.com/1471-2334/10/190/prepub

    and download all manuscript versions, referee reports (including referee names), editor comments, etc…

    On one hand, this addresses some of your criticisms. When refereeing, you know that the authors know who you are, so you have more of an incentive to provide usable feedback. Also, the discussion around the paper may contain valuable insights for readers.

    On the other hand, this also puts a lot of pressure on the referee. How likely are you, as a post-doc or a tenure-seeking professor, to be honest about a bad paper coming from one of the big names in your field? Especially when that same person may end up having a decisive role in your career at some point in the future?

    I think the BMC system is a good step in the right direction, but I’m doubtful it is the final answer.

    However, the current system at CS conferences (and I have had the opportunity to publish at a couple of them) seems significantly worse. First, since it is a conference and not a journal, there is a hard deadline for submission. If you miss it, you might end up waiting up to a year for another opportunity (if there are no other suitable conferences). In the rush to get everything done in time, you may end up committing mistakes that will cost you that long-sought publication. And of course, any back-and-forth between authors and referees is completely impossible, since decisions (due to the time constraint) are always final.

    The overall system is definitely broken, and overwhelmed, but solutions will not be easy to find.

  4. I disagree about there being a lack of incentive (at least for conferences). I’d be embarrassed if I were on a PC and didn’t provide reviews as detailed as my peers’. If I don’t earn my PC invitation, I expect the PC invites will eventually dry up.

    A couple of solutions… PC chairs should keep track of the PC members who don’t provide adequate reviews. The steering committee can aggregate such a blacklist and ensure that those folks are invited onto future PCs only sparingly.

    I have also noticed that folks who are more junior in their careers tend to write more detailed reviews (more incentive, less busy, less jaded). I strongly feel that PCs should include more junior folks. I realize that senior members bring a great perspective as well and should of course be included, but there is room to move the dial further toward the youth. For example, on this year’s ISCA PC (my community), of the 50 members only 15 got their Ph.D.s in the last 10 years, and fewer than a handful in the last 5.

  5. It goes back to incentives. Even if some people are embarrassed into writing good reviews, that only suggests we need more scrutiny of reviewers and of the reviewing system. Second, all good intentions get thrown out when you’re in a time crunch and trying to prioritize.

    Fundamentally, the system disincentivizes good reviewing, so any good reviewing is a function of the community and happens despite the system, not because of it. So you have to foster a good community, or change the system, or both.
