Do we sign our peer reviews? Mostly, no.

Update, 24 November 2014: There’s been a renewed interest in this post, so now is as good a time as any to note that, in addition to this survey, I also posted written responses from folks who choose to sign their reviews and those who remain anonymous. I recommend reading them all!

Last week, inspired by discussions with my co-bloggers and a post by Terry McGlynn, I asked our readers to tell me whether they do peer review anonymously, and why. A total of 87 folks responded to a brief online survey, and here’s what they said: most of us review anonymously, and a lot of us do it to protect ourselves in interactions with senior colleagues.

First, the headline result: how many Molecular Ecologist readers review anonymously? Of the 87 survey participants, 82% (71) said that generally they do not sign their peer reviews.


But I also asked participants how many reviews they’d done in the last year, and how many of those were anonymous—and this revealed that those general statements aren’t ironclad.


The 16 participants who said they generally sign their reviews actually signed a median of 79% of the reviews they performed in the last year. The 71 who generally don’t sign their reviews were most likely to have stuck to anonymity the whole time—but over 10% of them (8) said they’d signed more than a quarter of their reviews.

Participants’ career stages had a marginally significant effect on the proportion of reviews they signed (ANOVA p = 0.052), and the differences among career stages don’t line up cleanly with seniority.


Grad students, the least senior group, reported a wide range of practices—but this probably reflects the fact that they performed fewer reviews. (There is, not surprisingly, a strongly significant effect of seniority on the number of reviews participants conducted in the last year; ANOVA p = 0.007. Grad student participants reviewed a median of 3 papers, compared to 7 for postdocs, 9 for tenure-track faculty, and 12 for tenured faculty.) Postdocs and non-tenure-track faculty mostly review anonymously, consistent with concerns about their interactions with colleagues who still have a lot of control over career advancement. Tenure-track faculty were most likely to sign their reviews; but tenured faculty were among the participants least likely to sign.

What reasons did participants give for their decisions to sign or not sign reviews?

Percent of participants citing each reason, among those who generally do sign versus those who generally don't:

    Reason                                                 Do sign   Don't sign
    It's what I always do                                    13%        48%
    It promotes better peer review                           75%        45%
    Concern for reputation/interactions with colleagues     25%        54%
    Journal policies                                         13%        21%

Participants who said they generally review anonymously were more likely to cite habit, and to say that they were concerned for their reputation or their interactions with colleagues. Folks who said they generally sign their reviews were more likely to say that they thought this practice promotes better peer review.

Finally, it’s also worth noting that participants with different reviewing practices did not differ in the amount of reviewing they did in the past year.


That’s anonymity in peer review by the numbers—at least for folks who read The Molecular Ecologist. Some of the recent discussion of peer review has pointed up the differences among even relatively similar scientific fields, and our small sample here is probably mostly folks who would call themselves evolutionary ecologists or population geneticists. I also asked participants whether they’d be interested in discussing their practices at greater length, and a number of folks agreed to answer some questions by e-mail—I’ll be posting those responses next week!


About Jeremy Yoder

Jeremy Yoder is an Assistant Professor of Biology at California State University, Northridge. He also blogs at Denim and Tweed, and tweets under the handle @jbyoder.

This entry was posted in community, peer review, science publishing.
  • I had no idea I’m a double outlier. First, for being a tenured faculty member reading this. And second, for being (it looks like) the only one of those four tenured faculty with a nonzero signing rate.

  • Arianne Albert

    Interesting discussion Jeremy. Just a couple of questions. 1. From the second figure it looks like one of the “no” people actually signed 100% of their reviews. Perhaps they should be relabelled as a “yes”? 2. Why an ANOVA for proportions? A logistic regression would do a better job there, but you’ll probably need to pool some categories. Not to be nitpicky, just curious if the results hold when the analysis is better suited to the data.

    • 1. Because the survey first asked people what they do generally, then asked how many reviews they’d done in the last year, and how many of those reviews were anonymous. So this is a person who says s/he generally reviews anonymously, but nevertheless signed all the reviews s/he did in the past year. People are funny.

      2. Mostly because this is what you get while I’m running numbers for a blog post through R first thing in the morning? I also didn’t apply for IRB approval for research with human subjects. More seriously: I don’t think that logistic regression at the cost of pooling categories is an appropriate way to ask whether the apparent among-category differences are greater than expected by chance … but you’re right, straight-up ANOVA isn’t appropriate. A redo with the data arc-sine transformed returns p = 0.059.

      • Arianne Albert

        Cool, thanks!
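For readers curious about the mechanics of that reanalysis, here's a minimal sketch of an arcsine-square-root transform followed by a one-way ANOVA. This is a hypothetical illustration in Python (the original numbers were run in R), and the data values are invented for demonstration, not taken from the survey:

```python
import numpy as np
from scipy import stats

# Invented example data: proportion of reviews signed in the past year,
# grouped by career stage (NOT the actual survey responses).
grad    = np.array([0.0, 0.0, 0.33, 1.0, 0.0])
postdoc = np.array([0.0, 0.14, 0.0, 0.0, 0.25])
tenured = np.array([0.0, 0.0, 0.08, 0.0])

def arcsine_sqrt(p):
    """Arcsine-square-root transform: maps proportions in [0, 1]
    to [0, pi/2], stabilizing the variance before ANOVA."""
    return np.arcsin(np.sqrt(p))

# One-way ANOVA on the transformed proportions.
groups = [arcsine_sqrt(g) for g in (grad, postdoc, tenured)]
f_stat, p_value = stats.f_oneway(*groups)
print(f"F = {f_stat:.3f}, p = {p_value:.3f}")
```

As noted in the exchange above, this transform-then-ANOVA approach is a quick fix for bounded proportion data; with larger samples and the raw counts in hand, a logistic regression would be the more principled choice.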

  • Hope Jahren

    Okay, from the perspective of a long-time editor, here goes:

    When a reviewer signs a review (good, bad, indifferent), here’s what I immediately get back from the author:

    “Dr. X is an idiot and a notorious unethical meanie. S/he’s had it out for me from day one and this is yet another unjust example of her/his gleefully vindictive nature, which a higher power will no doubt punish one fine day. Blah, blah, blah, unproductive blah and more blah.”

    When a reviewer remains anonymous on her/his review (again good, bad, or indifferent), here’s what I immediately get back from the author:

    “Reviewer #2 suggests our result 2+2 = 4.0 is better framed as 2+2 = 4. This constitutes a massive reframing of our argument and we resist this for the following 87 reasons: Blah, blah, blah, productive blah and more blah.”

    My point here is that signing your review turns the subsequent revisions into a personality war among three (or so) people within the author’s mind, and derails focus from the content of the manuscript. I also note that this conversation about whether to sign your reviews is overwhelmingly focused upon what this might/might not do to one’s advancement, reputation, and (essentially) power position. No one that I’ve seen has mentioned the effect on the *science* within the manuscript. This troubles me.

  • Ellen Simms

    I never sign. Once in a blue moon, I’ll forward a copy of my review privately to the author with a very specific, detailed comment that I hope is a helpful suggestion they can use in the revision, and for which I’d like to be acknowledged (without actually saying so).
