Peer review, reviewed

Partners in crime

Revise and resubmit.

Rebecca Schuman, who has almost single-handedly turned Slate into one of the best big websites for coverage of the many trials and tribulations of academia, turns to peer review for scholarly journals, in which an author’s academic peers volunteer to weigh in on whether a manuscript is worthy of publication. Schuman discusses the problems of both how long the process takes—routinely more than a year, especially with the back-and-forth of revisions—and tone:

Think of your meanest high school mean girl at her most gleefully, underminingly vicious. Now give her a doctorate in your discipline, and a modicum of power over your future. That’s peer review.

And she suggests something that might sound familiar to those of us who hang out in the evolutionary ecology blog-o-verse: enforced reviewing reciprocity.

… what if in order to be eligible to submit an academic article to a journal, a scholar had first to volunteer to review someone else’s article for that same journal? … You want to publish and not perish? First you have to earn that right by making a punctual, non-petty investment into the publishing enterprise.

Paging Jeremy Fox:

A few years ago, Owen Petchey and I proposed a reform known as PubCreds, the purpose of which was to oblige authors to review in appropriate proportion to how much they submit. For instance, if each paper you submit receives two reviews, then arguably you ought to perform two reviews for every paper you submit.

Fox, Petchey, and Lindsay Haddon actually tested the hypothesis that the average ecologist receives more reviewing service than they provide, using data on papers sent out for review by the journals of the British Ecological Society. They found (with caveats and follow-up discussion in the post linked above) that the relationship between reviewing and being reviewed was actually pretty well balanced: while a number of authors contributed fewer reviews than their own papers required, another large group reviewed more than required.

Figure 1 from Petchey et al. (2014).

Depending on different assumptions about the relationship between reviewing and the need for review, from 12 to 44% of authors for BES journals did at least twice as much reviewing as they “needed” to, and from 20 to 52% did less than half as much.

Of course, things may be quite different in the humanities, which are Schuman’s academic homeland. That may also explain why Schuman doesn’t bring up the other problem of peer review that occupies most scientists, of finding a journal at the right “level” for a paper—i.e., determining whether the paper is novel or important enough to match a particular journal’s (self-image of) prestige. In my experience, which has mostly involved faster reviewing time frames than Schuman describes, the real time-suck in publishing comes when you submit to a prestigious journal, go through review, and end up with a decision along the lines of “this is nice, but it’s not a big enough deal for us.” Then you have to reformat, and often extensively re-write, for a different venue a little farther down the prestige ladder. Review at each journal might be timely and constructive, but by the time the paper appears in print you’ve burned 18 months.

This problem is the motivation for Axios Review, a pre-submission review service (co-managed by our own Tim Vines) in which authors submit manuscripts with a list of possible target journals, and reviewers select the best match—then forward their reviews and the manuscript, revised accordingly, to that journal. It’s a workaround, but as the number of participating journals increases, it promises to substantially reduce the time to publication.

Apart from the prestige-level-finding problem, I have generally had pretty good peer review experiences—and in the one or two cases where a reviewer behaved badly, the journal’s editor treated the bad review appropriately. (There is, of course, the caveat that any peer review process that ends with a published paper feels like it went all right in the end.) Everyone would like reviews to happen faster, but I think most of us put them on the calendar just before the official deadline, and it’s never difficult to think of other things on one’s to-do list that should happen before writing a review.

As for the differences between science and the humanities—I am currently awaiting reviews of my first truly interdisciplinary manuscript. Maybe I’ll feel quite differently in a week or two, or ten.

Reference

Petchey O.L., Fox J.W. & Haddon L. (2014). Imbalance in individual researcher’s peer review activities quantified for four British Ecological Society journals, 2003–2010. PLoS ONE 9(3): e92896. DOI: https://doi.org/10.1371/journal.pone.0092896


About Jeremy Yoder

Jeremy Yoder is a postdoctoral associate in the Department of Plant Biology at the University of Minnesota. He also blogs at Denim and Tweed and Nothing in Biology Makes Sense!, and tweets under the handle @jbyoder.
  • Marty Kardos

    Perhaps some of the bad components of the review process (e.g., petty, mean comments; taking months to respond; killing a paper because of undisclosed competing work) would largely disappear if reviewers routinely signed their reviews.

    Perhaps most of us would tend to be more helpful, fair and timely, and less offensive if we were not hidden behind anonymity.

    • Jeremy Yoder (http://www.denimandtweed.com/)

      We’ve actually had an extensive discussion of anonymity in peer review here at The Molecular Ecologist, and I probably should’ve brought it into this post as well: I conducted a small survey of readers’ opinions, then collected comments for and against anonymity.

      I’m actually not inclined to think that reviewer anonymity, as it’s usually practiced at scientific journals, contributes much to bad behavior. That’s because, although the authors don’t know who is reviewing their work, the editor does, and I (for one) want to be on my best behavior when I’m interacting with a journal editor in any capacity. Also, the worst reviews I’ve gotten were actually cases where it was pretty easy to figure out who the bad reviewer was!

      If anything, there’s a strong case to be made that more anonymity in the review process would make it better—meaning double-blind review. When reviewers don’t know the names of authors, we see, for instance, more similar outcomes for male and female authors.


  • Tim Vines

    I like this take on the Schuman piece: http://prosedoctor.blogspot.ca/2014/07/peer-review.html
