This September, physicist Sergey Kravchenko of Northeastern University in Boston did something that scientists do hundreds of times over the course of their careers: He and his colleagues submitted their latest research findings to a scientific journal. The researchers had performed a study that they say experimentally verifies a theory on how electrons interact in semiconductors. They submitted their report to the prestigious journal Nature.
Getting findings published, either on paper or on the Web, is the final step in entering a researcher’s work into the scientific record. To make sure that the research is worthy of joining this hallowed collection, most journals use a method called peer review.
Put simply, peer review subjects scientists’ work to the scrutiny of other scientists in the same field. These typically anonymous reviewers weed out research with flawed methods or conclusions and work that isn’t a good fit for a particular journal. They also provide feedback to improve the scientists’ work and validate outstanding research.
The peer review process has changed little over the centuries and varies little from journal to journal, and Kravchenko was expecting that his most recent paper would get vetted through the same process that put his 80-odd earlier papers into print. This time, however, an e-mail from Nature invited Kravchenko to participate in a peer review experiment. Besides sending research articles to two or three anonymous reviewers, the journal was posting some papers on a Web site where any interested scientist could voice his or her opinions about the research, as long as the commenters revealed their identities.
“I said, ‘Of course we’ll participate.’ The more people know about our work, the more feedback we’d get, and the better our research would be,” Kravchenko says.
Opening the peer review process to the larger scientific community could have multiple benefits—scrutiny by a broader audience may give study authors more ideas for improving their research and catch more low-quality and fraudulent papers before they enter the scientific record. But posting papers before they’re published might also open scientists’ work to plagiarism or the possibility of being scooped by competing labs.
Nature’s peer review experiment is over, and the journal is now analyzing the results. But if the trial pays off, the traditional method for communicating research could get its biggest facelift in hundreds of years.
Peering at review
The peer review process that most journals use got its start in the early 1700s. Since then, journals have streamlined the process. Nature’s version of traditional peer review is a typical one, explains Linda J. Miller, the journal’s executive editor.
When scientists send a research article to Nature, a group of editors on the journal’s staff makes a first-pass decision on its quality and suitability for the journal. If the paper makes that initial cut, the editors select a few researchers considered authorities in the appropriate field of science and ask them to evaluate the manuscript. These reviewers, also called referees, read the report and privately voice their opinions. Most journals don’t reveal the identities of reviewers. The guarantee of confidentiality allows reviewers to be candid without fearing backlash from study authors.
These reviewers check the paper against a set of criteria: Is the experiment’s setup sound? Do the results make sense? Are the conclusions plausible, and are they novel and significant enough to make a contribution to the scientific record?
If the referees answer yes to these questions, journal editors usually accept the article for publication. Any negative feedback could send the paper back to its authors with a rejection notice or suggestions for improvement and resubmission.
Even though this traditional method works well, says Miller, “it’s a double-edged sword.” Serving as a referee can take a significant amount of a scientist’s time, so journals such as Nature usually choose only two or three reviewers for each paper. This small number of reviewers can miss significant shortcomings or instances of outright deception. Furthermore, while a referee’s anonymity provides protection from a rejected author’s wrath and enables the reviewer to openly offer constructive criticism to a known colleague, it also means that referees with biases or grudges against authors can reject a paper, sting a rival, and yet face no negative consequences.
Study authors often complain about the secretive nature of traditional peer review, Miller notes. While new technologies have encouraged science and society to become more open, the workings of this old-school system have remained shrouded. With Web bulletin boards, listservs, and Web diaries, or blogs, cranking out scientists’ thoughts continuously, “people are more open about things that previous generations were private about, and now they expect other people to be more open too,” says Miller.
Commenting central
Miller explains that the editors at Nature wondered whether this boom in openness might benefit peer review. In June, they launched a 3-month trial to find out.
The new, experimental form of peer review was to be identical to the old system, with one important tweak: During the several weeks that referees were considering a paper’s merits, study authors had the option of having their research article posted on a Web site so that anyone could read it. Like a blog, each paper posted on the site had a place for comments, though only scientists associated with a research facility were permitted to post them. All comments—their authors identified—were also publicly available for anyone to read.
When referees turned in their anonymous comments on a submitted report, it was removed from the Web site and the comment period closed. Nature’s editors could then base their decision to accept or reject a manuscript on the comments from both the referees and all the scientists who had posted to the site.
“The idea was we’d go from two or three people’s advice to the advice of as many who would care,” says Miller.
Gathering such unofficial comments and suggestions isn’t a new idea to researchers in the physical sciences, says physicist Neil Mathur of the University of Cambridge in England, who, like Kravchenko, chose to post his team’s paper on Nature’s Web site. Since the early days of the Internet, researchers in various fields related to math, physics, and astronomy have posted their research reports on preprint servers—Web sites where scientists can gather comments and suggestions from their peers on how to improve their papers before submitting them to mainstream journals.
For such scientists, says Mathur, “putting [papers] on Nature’s server is no big deal.”
In contrast, says Miller, researchers in the biological sciences rarely release study results before they’re submitted for publication or presented at meetings. This split between the disciplines probably played a strong role in researchers’ decisions about whether to participate in Nature’s experiment. Most papers submitted to the peer review Web site were in the physical sciences, with just a handful of biology papers posted, says Miller.
Regardless of the field in which a paper was classified, the potential payoff for joining the experiment was clear to Kravchenko. If a study is of high quality, he notes, the positive comments it’s likely to attract will help it get accepted into Nature. And if a paper could use some fine-tuning, then perhaps the open review process could garner helpful suggestions for the research.
The incentive for commenting on others’ papers was cloudier, Mathur notes. Although he would have liked to have some comments on his own paper—by the end of the experiment, it had none—he points out that reading papers and voicing opinions could be burdensome for scientists who are already strapped for time.
“It’s work,” says Mathur. “I haven’t been motivated to see if there’s a paper I’d like to comment on because I have plenty of other things to do.”
Final decision
At the end of September, Nature ended its experiment. Did it pay off?
In total, 72 research teams contributed their articles to the experiment, and 95 commenters posted their opinions. Most of the comments were given to just a few papers, and almost half of the papers received no comments at all.
Miller notes that these comments weren’t nearly as thorough as those that official reviewers are expected to make. “They’re generally not the kind of comments that editors can make a decision on,” she says.
While Miller isn’t sure why more study authors didn’t participate in the experiment, she speculates that researchers may have feared that putting unpublished work out in the open could leave them vulnerable to theft of their ideas. “People are afraid of being scooped,” she says.
Giving her initial take on the experiment, Miller adds, “I am not convinced that it was a value to the editors—enough value to change our processes permanently.” She wouldn’t predict whether Nature will repeat the experiment or use it as a guide to change its current peer review system.
But even with the experiment’s somewhat disappointing results, other high-profile journals are taking notice.
“We’re following the experiment with interest,” says Monica Bradford, executive editor of Science. “Peer review is central to scientific communication, and it’s important that we’re open to examining the peer review process to ensure that it remains a reliable means of vetting research.”
Diane Sullenberger, executive editor of the Proceedings of the National Academy of Sciences, agrees. “I thought [Nature’s experiment] was a gutsy move that seemed long overdue,” she says. “I’m not sure if open peer review in and of itself would solve the problems with the peer review system now, but the only way we can tell is by conducting experiments such as these.”
Nature is still crunching numbers and plans to survey the experiment’s participants before declaring the trial a success or failure. Regardless of the final outcome, Kravchenko notes that he and his team have been pleased to participate—their paper ended up with 10 comments, more than any other paper in Nature’s trial. “It’s definitely been a good experience for us,” he says.
It’s hard to say whether the Web comments helped or hurt Kravchenko’s attempt to publish in Nature. The paper was rejected even though the public comments were generally positive. The team plans to revise its paper—on the basis of the official comments—and to resubmit it to Nature soon.
Experiments in Progress
Journals customize peer review
Though most journals use a review system much like Nature’s traditional one, a growing number of science publications are choosing alternative methods that give readers a chance to offer their opinions.
For example, Atmospheric Chemistry and Physics, which launched 5 years ago, initially publishes online, as “discussion papers,” the submissions that make a first cut, explains executive editor Ulrich Pöschl. During an 8-week period, both official referees and any interested scientists can post comments on these papers in a public forum, choosing to stay anonymous or to sign the remarks. Afterward, the journal’s editors use the entire discussion to decide whether to publish the article in printed form.
“What we offer with discussion papers is strong papers and free speech very quickly,” Pöschl says.
Philica, an online interdisciplinary journal that’s still in testing, offers a new twist for incorporating readers’ opinions. Every report that’s submitted to the journal is posted online without initial review. Then, readers with academic credentials can leave comments and numerically rate papers on scales that cover such factors as originality and importance.
“It’s been described to us as eBay for academics,” says Nigel Holt, a psychologist at the University of Bath in England and one of Philica’s founders.
Just as eBay’s users rely on ratings to judge the quality of buyers and sellers, Philica’s users can lean on ratings to judge the quality of scientific papers.
“You wouldn’t want to buy something from someone rated extremely low,” Holt says.