Peer review in a changing world – preliminary findings of a global study



Introduction
In recent years, the peer-review process has attracted criticism from individuals involved in the process and from the press, who have questioned its validity1,2. What is the role of peer review? What does peer review do for science, and what does the scientific community want it to do? Does it illuminate good ideas or shut them down? Should peer review detect fraud and misconduct? Should reviewers remain anonymous? These questions were asked of 4,037 researchers in one of the largest ever international surveys of researchers and reviewers.
Peer review now results in approximately 1.3 million3 learned articles published every year. It is fundamental to the integration of new research findings in hundreds of fields of enquiry. It is the front line in the critical review of research, enabling other researchers to analyse or use findings and, in turn, society at large to sift research claims. It is growing year on year with the expansion of the global research community, and with it has come a corresponding expansion of concerns about involving the next generation of researchers in peer review in sufficient numbers. Can the peer-reviewing effort be sustained? What has been the impact of electronic technologies, and will alternative metrics play a greater role?
This survey builds upon earlier research conducted by the Publishing Research Consortium (PRC) as part of the Peer Review Survey 20074. Some of the original questions are repeated for comparison, and a set of new questions about future improvements, public awareness and new pressures on the system has been added in consultation with editors and publishers.

Methodology
Sample
40,000 researchers were invited to complete the survey. They were randomly selected from the Thomson Reuters author database, which contains published researchers from over 10,000 journals. Because the authors were randomly selected, it is reasonable to expect that they come from a variety of sources and represent a mixture of different publishers.

Approach
The researchers were contacted via e-mail and requested to complete the survey. The online survey was conducted between 28 July 2009 and 11 August 2009. Each researcher who had not completed the survey after a week was sent a reminder.

Research tool
The researchers were asked to complete a short survey, which took approximately 15 minutes. They were asked to specify their level of agreement with a number of statements on a five-point Likert5 scale. To avoid the 'halo' effect, a form of bias in which answers are influenced by responses to earlier statements, the order in which the statements appeared was rotated across respondents. Respondents were also given the option to say how peer review could be improved via an open-ended question.
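The per-respondent rotation of statement order can be sketched as follows. This is an illustrative reconstruction, not the survey's actual software; the statement texts and respondent IDs are placeholders.

```python
import random

# Placeholder statements; the actual survey wording is not reproduced here.
STATEMENTS = [
    "Peer review improves the quality of published papers.",
    "Peer review should detect fraud and misconduct.",
    "Reviewers should remain anonymous.",
    "The current system is the best that can be achieved.",
]

def rotated_statements(respondent_id: int) -> list:
    """Return the statements in a per-respondent order, so that
    position effects ('halo' bias) average out across the sample."""
    order = STATEMENTS[:]
    # Seeding with the respondent ID keeps each respondent's order
    # reproducible while varying the order across respondents.
    random.Random(respondent_id).shuffle(order)
    return order
```

Seeding by respondent ID is one simple way to make the rotation deterministic per respondent; a systematic Latin-square rotation would serve the same purpose.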

Representation
Altogether, 4,037 researchers completed our survey. This represents a margin of error of ±1.5% at the 95% confidence level. Reviewers answered a subset of questions aimed specifically at reviewers (3,597, a subset of the base), and the error margin for this group was ±1.6% at the 95% confidence level.
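The quoted error margins follow from the standard worst-case formula for a sample proportion. A quick check, assuming simple random sampling:

```python
import math

def margin_of_error(n: int, z: float = 1.96, p: float = 0.5) -> float:
    """Worst-case margin of error (p = 0.5) for a proportion estimated
    from a simple random sample of size n, at the confidence level
    implied by z (1.96 corresponds to 95%)."""
    return z * math.sqrt(p * (1 - p) / n)

print(round(margin_of_error(4037) * 100, 1))  # all respondents -> 1.5
print(round(margin_of_error(3597) * 100, 1))  # reviewer subset -> 1.6
```

Using p = 0.5 gives the largest possible margin, so the reported figures hold for every question in the survey.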
The distribution of respondents is representative of the research community: it is broadly in line with the geographical, subject and organization-type distribution one might expect, and reflects the distribution seen in the PRC study6. The distribution according to subject, geography, age, gender, organization and position can be seen in Figure 11 (reproduced at the end).

Response rate
It is difficult to be precise about response rates, as a number of e-mails would have been stopped by spam filters, but the response rate was good, at just over 10%.

Results
While some in the media, and indeed in science itself, may be questioning the purpose of peer review, it is clear that researchers want to improve peer review, not replace it. Most researchers (69%) are satisfied with the current system of peer review. This response is even more emphatic when one considers that it represents an increase of five percentage points since the same question was asked in the PRC study in 2007 (Figure 1). Most (84%) believe that without peer review there would be no control in scientific communication (the same as in 2007). However, it is clear the system is far from perfect, with just under a third (32%) thinking that the current system is the best that can be achieved (Figure 2).
It is seemingly an unrewarding job with a few fringe benefits, so why do it? Reviewers indicate it is mainly because they believe they are playing an active role in the community (90%), and, quite simply, many (85%) just enjoy being able to improve papers (Figure 3). Reviewers tend to be driven by altruistic reasons. Only 16% of respondents said they agree to review because they feel it will increase their chances of having future papers accepted.
When it comes to the purpose of peer review, researchers have high expectations. Of those responding, 79% or more think that peer review should identify the best papers, determine their originality and importance, as well as improve those papers. Improving papers is where peer review is most successful. Almost all researchers (91%) believe that their last paper was improved as a result of peer review (Figure 4 and Figure 5); and the biggest area of improvement was in the discussion.
Given the nature of science, and the need to repeat and build upon previous work, it is interesting to see that the vast majority of authors and reviewers think peer review should detect fraud (79%), but only a third (33%) think it is capable of this (Figure 4). It is the practicalities involved that make it difficult: researchers point out that examining all raw data would mean peer review grinding to a halt. When asked how peer review could be improved, very few mention fraud, clearly indicating that it is neither widespread nor a pressing issue in the minds of researchers. Most researchers (81%) think peer review should ensure previous research is acknowledged. However, just over half (54%) think it is capable of doing this (Figure 4). This reflects current discussions in the research community about the need for new studies to be set in the context of existing evidence7.
As might be expected, researchers agree that peer review, as a concept, is well understood by the scientific community. However, this level of understanding is in sharp contrast to the research community's perception of the public's awareness of peer review: just 30% believe the public understands the term (Figure 2).
Several types of peer review exist: single blind, the most common form in science, where the author's identity is known but the reviewers' identities are hidden; double blind, where both identities are hidden from one another; and open peer review, where the author's and reviewers' identities are known to one another8. Most reviewers want anonymity; more than half (58%) of researchers say they would be less likely to review if their signed report was published alongside the paper reviewed. Similarly, 51% would be discouraged from reviewing if their name was disclosed just to the author, and 45% would be discouraged if their name was published alongside the paper as a reviewer (Figure 6). These results support previous research at the 'British Medical Journal' which suggested that open peer review significantly increases the likelihood of reviewers declining to review9. Over three quarters (76%) favour the double-blind system, where just the editor knows who the reviewers are, but some researchers questioned whether an author's identity can be truly anonymized (Figure 7).
When it comes to incentivizing reviewers, just over half (51%) of reviewers thought receiving a payment in kind (e.g. a subscription, or a waiver of their own publishing costs) would make them more likely to review. A large minority (41%) wanted payment for reviewing, but this drops to just 2.5% if the author had to cover the cost. Acknowledgement in the journal was popular with many, with 39% stating they would be in favour (Figure 6). While some researchers are undoubtedly more likely to review if they are incentivized in some way, the majority of respondents enjoy reviewing and will continue to review (86%) (Figure 8), regardless of incentive.
Technology has enabled a great deal of change in the peer-review process over recent years. Much of the process is now managed online for the majority of journals, and 73% of reviewers (a subgroup in the study) believe that technological advances have made it easier to do a more thorough reviewing job now than five years ago (Figure 8). The advent of technology has also made it feasible to consider replacing peer review with alternatives. For many this is a step too far; just 15% of respondents felt that 'formal' peer review could be replaced by usage statistics (Figure 7). However, a number (47%) believe supplementing peer review with some form of online commentary or user rating would be advantageous (Figure 7).
In terms of improving the peer-review process, knowing why reviewers reject requests to review should give us an indication of where there might be weaknesses. Over the course of a year, on average, a reviewer turns down two papers (Figure 9), and 61% of reviewers have rejected an invitation to review an article within the last year, citing lack of expertise as the main reason; this suggests that journals could better identify suitable reviewers (Figure 10). Many think that more could be done to support reviewers: 56% believe there is a lack of guidance on how to review, while 68% agree that formal training would improve the quality of reviews (Figure 8).

Discussion
This study shows that, in spite of increased pressures on peer review in this changing world, the process remains critical to effective scholarly communication. Peer review continues to perform its critical functions: filtering and improving manuscripts. It is clear that there is no desire to replace it with the 'wisdom of the crowd' via metrics such as usage statistics, but instead to augment it or to subtly change its approach. Publishers, who are taking an increasingly active role in peer review, have systems available and in development that will deal with some of the weaknesses identified in this study. Plagiarism can be more easily identified via systems such as CrossCheck10. The number of inappropriate manuscripts being sent to reviewers is likely to fall as electronic systems grow and become more efficient at matching records and individuals according to key words.
Double-blind peer review is perceived as the most effective form of peer review. Comments suggest this is because it is considered the most objective and will thus help eliminate reviewer bias. But is it likely to do this? A number of researchers identified what seem like insurmountable obstacles: authors, though their names are hidden, may reveal their identity through their field of study (especially in niche areas), their citation pattern or their style of writing. A study by Justice et al.11 tends to support this position: 'blinding' was not successful in 32% of cases, and well-known authors were far more difficult to blind. More research is needed to establish whether or not double-blind peer review is more effective than other methods.
Fraud continues to attract attention in the media, but within the community it is not perceived as a critical issue. Nonetheless, there is a desire within the community for preventive measures to be taken, but exactly what those measures should be is unclear. It is difficult to develop a system that guarantees fraudulent papers are never published; such a guarantee, it could be argued, is the wider role of science. Repeating the experiment is perhaps the most effective check, but experimental outcomes may genuinely vary, especially in the life sciences. Reviewers can only do what they do best: identify whether research is new, interesting, correctly conducted, acknowledges previous work and is appropriately summarized. Preventing fraud is most likely to be successful when done by the institute at which the research is conducted, since it is the institute that has access to the laboratory notes and the raw research files.
Peer review, though far from perfect, is still manifestly a key service for the research community and the public at large. It is not sweeping, large-scale changes that are required to improve peer review, but rather incremental improvements: better training of reviewers, clearer review guidance, better matching of manuscripts to reviewers and supplementing peer review with post-publication commentary. It is these small-scale changes, across science, that will make for an improved peer-review system.
As noted above, the distribution of respondents was representative of the research community and is detailed in Figure 11 below.