Quality Not Quantity?


The perverse incentives for academics to maximize publication and citation counts, as opposed to maximizing the quality and impact of the underlying research, are well-known. Stan Trimble’s recent letter to Nature suggests a partial solution: academic institutions should limit the number of publications that are part of a tenure or promotion case. This is simple enough that it might actually work. I’d suggest a three-publication limit for faculty applications, five for tenure, and perhaps seven for full professor cases. A bit of gaming would be required to navigate certain situations:

  • Do I list a mediocre paper that appeared at a top conference, or a stronger one from a minor venue?
  • Do I list a major paper where I played a minor role, as opposed to a less impressive single-author paper?
  • If I’m switching areas, do I focus on the previous area (which is what my letter writers are likely to discuss) or the new one?

There’s plenty of gamesmanship in the process already, so these shouldn’t be a big deal.

A more extreme version (attributed to Matthias Felleisen, though I don’t know if he originated it) is: faculty are hired as full professors. Upon publishing their first paper, they are demoted to associate professor, then assistant professor, and then after the third publication they are required to leave academia.

The idea behind these various proposals, of course, is to cause people to view publication as a precious resource, rather than as a metric to be maximized. There would be pushback from researchers who act as paper machines, but I’d expect it could be overcome.


12 responses to “Quality Not Quantity?”

  1. I do like the 3-5-7 idea, although one hopes that committees and letter-writers are already implicitly doing that. In fact, I do see committees working hard to identify top-tier venues for each candidate. Oversimplifications don’t usually work for evaluation processes. For example, in addition to seeing the impact of one’s top 5 papers, I’d like to see how a candidate cultivates strong publication CVs for their students. That includes “salvaging” some work via second-tier pubs.

    Since evaluation committees are savvy enough to recognize venue quality, one is less likely to publish at lesser venues early in one’s career. Late in one’s career, the urge may be higher, when one is more concerned with getting work out efficiently than with battling selective program committees. So I’m not sure these suggestions fix a large part of the problem.

  2. Wait, I don’t see the connection in this context. But it is uncanny that you read my mind correctly even when we were talking about something else… 🙂

    Btw, I might have missed the background for this post. What is the problem that you’re trying to address? I don’t necessarily buy the premise that people are chasing publication counts. Surely, every junior faculty member knows that a few strong pubs is better than many mediocre pubs.

    I also don’t see a problem with people chasing citation counts (whatever that means). Citation counts are just a quantitative substitute measure for impact. If I am evaluating someone in a different field, I’d trust citation count over my own opinion of their work.

  3. @Daniel:
    “Somehow, bloggers and novelists are not rated based on quantity…”

    Unfortunately, they are!!! Most of the blog ranking algorithms take into account the number of “recent” posts, and most of the (novel) publishers pressure their writers to write as much as possible [Some novelists are well known for that, like Amelie Nothomb in France, writing one new novel each year…].

  4. I’ll save my citation defense for your citation post. 🙂

    The article talks about problems that are largely absent in CS. Is there really an increase in the number of relevant venues that you must pay attention to? Are papers at these venues receiving zero citations in five years?

    There are lots of little things in the article that just don’t apply to CS. And then there are things mentioned that we already do. For example, my tenure CV did include citation counts.

    As a field, I think CS is far ahead of many other disciplines. In short, I don’t see a significant problem in either the way people are evaluated or the way junior faculty go about optimizing their CVs. I’d be very surprised if junior faculty are shelving their best ideas to instead work on little things that boost pub count at lesser venues.

  5. “I’d be very surprised if junior faculty are shelving their best ideas to instead work on little things that boost pub count at lesser venues.”

    ok my head just officially exploded.

  6. But what about cases where a publication seems ordinary in the present but becomes valuable in the future? Maybe that’s the reason why everyone tries to publish so much?

  7. http://citeseerx.ist.psu.edu/stats/venues
    Rankings are generated using Garfield’s impact factor: http://en.wikipedia.org/wiki/Impact_factor

    For arch research the generally accepted “top tier” venues are ISCA, MICRO, ASPLOS, HPCA (probably in that order for most people). Secondary conferences like PACT, ISPASS, ICS, SC, HiPC generally are lumped together.

    According to CiteSeer the rankings should be ASPLOS (8), HPCA (38), ICS (39), ISCA (81), MICRO (115), PACT (217), HiPC (444), SC (566), ISPASS (unranked). While there is noise in any ranking system, if you’re trying to play the game and get citation counts up, it looks like the conferences that should be targeted are not what people generally think.

  8. It is interesting that you mention this 3/5 threshold.

    In France, for applications for junior professorships (maître de conférence, which you would translate into assistant professor) or junior research positions (chargé de recherche), we tend to ask for a list of the 3 most significant publications; for senior professorships (professeur des universités, which you would translate as associate or full professor) or senior research positions (directeur de recherche), we tend to ask for 5.

    (In both cases, a complete list of publications is requested in an appendix.)