Ana Marusic ( Department of Research in Biomedicine and Health, University of Split School of Medicine, Split, Croatia. Editor-in-Chief, Journal of Global Health. )
The science behind reviewing is, in fact, the science that has been presented over more than 20 years at the International Congresses on Peer Review and Biomedical Publication.1 Since their inception in 1989, these scientific meetings have stimulated research into peer review and other topics in research publishing, leading to significant growth in the body of research as judged by the number of abstracts presented and articles published.2 Peer review also emerged as a new research field and a new concept in medical research, so that in 1994 the National Library of Medicine introduced "Peer Review, Research" as a term in its vocabulary thesaurus for indexing articles for PubMed, the Medical Subject Headings (MeSH). The MeSH definition of peer review research reads:3
"The evaluation by experts of the quality and pertinence of research or research proposals of other experts in the same field. Peer review is used by editors in deciding which submissions warrant publication, by granting agencies to determine which proposals should be funded, and by academic institutions in tenure decisions."
However, after 20 years of peer review research in medical journals, we are still not sure about best practices, and editors and researchers have no way of ensuring that peer review is a fair and objective process. In this essay, I provide a brief insight into what we already know from research into peer review and suggest possible future approaches to peer review.
The best summary of peer review as a way to improve the quality of research articles in biomedicine is provided by the most recent Cochrane systematic review on this topic, by Jefferson and colleagues.4 They searched 10 biomedical databases up to June 2004 and performed a descriptive review of 28 studies with sufficient methodological rigour (prospective or retrospective comparative original research studies with two or more comparison groups, generated by random or other appropriate methods).
They could not confirm any benefit of different types of reviewer or author concealment on the quality of peer review or its outcome. It seems that journals should be free to choose any of the three common methods of review: double blind, where neither the author nor the reviewer knows the identity of the other party (the most common type of review in the social sciences); blind or anonymous, where only the author is blinded to the identity of the reviewer (the most common practice in medical journals); and open, where the process is fully transparent (the practice in some medical journals, such as the BMJ and BioMed Central journals).
The systematic review also found no evidence that training reviewers improves the quality of their reviews, and neither the methods of editorial communication with reviewers nor the means of disseminating material to them appeared to affect the quality of the peer review process.
The use of checklists, such as those for the statistical evaluation of submitted manuscripts, had a beneficial effect on the quality of published articles. Editorial peer review also seemed to increase the quality of published reports, although the authors of the systematic review cautioned that the two studies on this topic could not be fully generalized to all journals.
The review could also draw no conclusion about bias in peer review, as only two studies examined whether peer reviewers were biased in their assessment of conventional versus unconventional interventions: one study found evidence of bias against an unconventional intervention, while the other did not.
As the systematic review addressed only the biomedical field, it could not draw on evidence from other fields, such as psychology, where publication bias was addressed experimentally as early as the 1970s. In his classic study, Michael Mahoney confirmed that reviewers had a strong confirmatory bias, i.e. they were strongly biased against manuscripts reporting results that contradicted the prevailing view of the field.5 His proposal for dealing with such prejudice against negative results was to ask reviewers "to evaluate the relevance and methodology of an experiment without seeing either its results or their interpretation".5
To see whether there was any new evidence on the effectiveness of peer review, I made a quick, non-systematic search of PubMed for randomized trials in peer review and came up with a few new studies. Their results were again rather inconclusive, and we are probably no clearer about peer review than we were in 2004.
Four randomized controlled studies from the BMJ showed that short training packages for reviewers had only a slight impact on the quality of their reviews,6 that author-suggested but not editor-suggested reviewers made more favourable recommendations on the manuscript yet produced reviews of the same quality,7 that reviewers are not very efficient at spotting errors in manuscripts and that training has only a slight impact on this ability,8 and that telling reviewers that their signed reviews would be posted online had no important impact on the quality of their reviews.9 Another randomized study, from Annals of Emergency Medicine, looked at the effect of partnering new and senior reviewers in a mentoring programme; such mentoring did not improve the quality of the reviews.10
Two studies looked into blinding as a method to increase the objectivity of the review process. A study performed in a small, national, non-English general medical journal, Ugeskrift for Læger, showed that open and blinded peer review did not differ in quality, but that authors preferred identifiable reviewers of their manuscripts, whereas reviewers preferred a double- or single-blinded review process.11 In a specialist dermatology journal, the British Journal of Dermatology, blinding or opening the peer review did not affect the reviewers' recommendations.12
A study on the effect of including a dedicated statistical review in addition to the standard review of a manuscript was performed in a Spanish general medical journal, Medicina Clinica.13 In a randomized trial, introducing a statistical review alongside a field specialty review had a significant positive effect on the quality of the published article, compared with merely suggesting the use of a reporting guideline to a specialty reviewer. The same research group also used a randomized design to evaluate the benefit of asking reviewers to use reporting guidelines for specific study designs, such as CONSORT for randomized controlled trials or STROBE for observational studies, in comparison with conventional peer review.14 They found that the use of additional reporting guidelines improved manuscript quality. However, the authors reported that the observed effect was not definitively demonstrated and that authors had problems satisfying the high methodological standards of the reporting guidelines.
Finally, two randomized studies looked at biases in peer review. A study from two orthopaedic journals, The Journal of Bone and Joint Surgery (American Edition) and Clinical Orthopaedics and Related Research, looked at how reviewers assessed a manuscript with a positive result versus the same manuscript with the results altered to show no difference.15 Reviewers were more likely to give higher methodological scores and make a positive recommendation for the manuscript with positive results, whereas they detected more errors in the manuscript that showed no difference between the two arms of a randomized trial. Similar findings were reported for a pharmacology journal, the Journal of Population Therapeutics and Clinical Pharmacology, where manuscripts with a negative finding received lower scores and more frequent rejection recommendations.16
What is the message of this brief overview of the research that provides the basis for practice in medical journals? We still do not know well what works and what is good for journals, reviewers and authors. We need more research, and we should be as rigorous about ensuring the quality of our own work as we are, as editors, in demanding methodological excellence from journal authors. Until the next update of the systematic review on peer review, we can only echo the most important recommendation of its authors: "A large, well-funded programme of research on the effects of editorial peer review should be urgently launched."
1. Rennie D. Integrity in scientific publishing. Health Serv Res 2010; 45: 885-96.
2. Rennie D. Fourth International Congress on Peer Review in Biomedical Publication. JAMA 2002; 287: 2759-60.
3. National Library of Medicine. Medical Subject Headings. (Online) (Cited 2013 February 4). Available from URL: http://www.ncbi.nlm.nih.gov/mesh?term=peer%20review%2C%20research.
4. Jefferson T, Rudin M, Brodney Folse S, Davidoff F. Editorial peer review for improving the quality of reports of biomedical studies. Cochrane Database Syst Rev 2007: MR000016.
5. Mahoney MJ. Publication prejudices: an experimental study of confirmatory bias in the peer review system. Cognitive Therapy and Research 1977; 1: 161-75.
6. Schroter S, Black N, Evans S, Carpenter J, Godlee F, Smith R. Effects of training on quality of peer review: randomised controlled trial. BMJ 2004; 328: 673.
7. Schroter S, Tite L, Hutchings A, Black N. Differences in review quality and recommendations for publication between peer reviewers suggested by authors or by editors. JAMA 2006; 295: 314-7.
8. Schroter S, Black N, Evans S, Godlee F, Osorio L, Smith R. What errors do peer reviewers detect, and does training improve their ability to detect them? J R Soc Med 2008; 101: 507-14.
9. van Rooyen S, Delamothe T, Evans SJ. Effect on peer review of telling reviewers that their signed reviews might be posted on the web: randomised controlled trial. BMJ 2010; 341: c5729. doi: 10.1136/bmj.c5729.
10. Houry D, Green S, Callaham M. Does mentoring new peer reviewers improve review quality? A randomized trial. BMC Med Educ 2012; 12: 83.
11. Vinther S, Nielsen OH, Rosenberg J, Keiding N, Schroeder TV. Same review quality in open versus blinded peer review in "Ugeskrift for Læger". Dan Med J 2012; 59: A4479.
12. Alam M, Kim NA, Havey J, Rademaker A, Ratner D, Tregre B, et al. Blinded vs. unblinded peer review of manuscripts submitted to a dermatology journal: a randomized multi-rater study. Br J Dermatol 2011; 165: 563-7.
13. Cobo E, Selva-O'Callaghan A, Ribera JM, Cardellach F, Dominguez R, Vilardell M. Statistical reviewers improve reporting in biomedical articles: a randomized trial. PLoS One 2007; 2: e332.
14. Cobo E, Cortés J, Ribera JM, Cardellach F, Selva-O'Callaghan A, Kostov B, et al. Effect of using reporting guidelines during peer review on quality of final manuscripts submitted to a biomedical journal: masked randomised trial. BMJ 2011; 343: d6783. doi: 10.1136/bmj.d6783.
15. Emerson GB, Warme WJ, Wolf FM, Heckman JD, Brand RA, Leopold SS. Testing for the presence of positive-outcome bias in peer review: a randomized controlled trial. Arch Intern Med 2010; 170: 1934-9.
16. Koren G, Fernandes A. Reviewers\\\' bias against the null hypothesis: the reproductive hazard of binge drinking. J Popul Ther Clin Pharmacol 2010; 17: e281-3.