Peer Review ~ Scientific Process or Faith?
Scholarly journals have existed for nearly 350 years, and peer review is perceived as one of the mainstays of scientific publishing. While peer review was less common among early journals, the majority of scientific and scholarly journals implement some level of it today. Peer review is at the heart not just of medical journals but of all of science: it is the method by which grants are allocated, papers published, academics promoted, and Nobel prizes won. Amazingly, it is a flawed process, full of easily identified defects, with little evidence that it works. Papers that undergo peer review are generally considered to be of high quality because they are scrutinized by experts before publication. The process, we are told, exists to “guarantee the quality of research and the expansion of true science.” Despite its long history and firm establishment in scholarly communities, peer review seems to be failing us.
A case in point: Cyril Labbé, of Joseph Fourier University in Grenoble, discovered that 120 conference proceedings and papers were computer-generated gibberish. These papers went through the peer review process undetected. How is this possible if the system is full of safeguards that promise us true peer analysis?
Let us consider the problems we have observed in the peer review system:
- Reviewers are often not of the discipline they are reviewing. This lets a paper that merely looks good pass scrutiny and get published. Consider the counterfeiter: if his homemade money is well made, no one questions its authenticity; if it is poorly made, everyone sees it for a fake immediately. Scientific papers are no exception. I myself had a paper published in the British Open Journal of Biological Research. Not only did I make up the scientific names, I also made up the university that did the research. Just one bit of actual fact checking would have revealed my paper as a fraud.
- The publish-or-perish mentality is a major contributing factor. Scientists cannot get research grants unless they produce, and publication is proof of production, even if said publication is full of fraudulent data. Is the “publish or perish” culture responsible for such scandals? Unethical researchers are pushed to publish fake papers. The problem is known to exist, but a lack of good checks and balances allows it to persist. Most interesting of all, there are no negative consequences for such unethical activity.
- Some editors do not even take peer review seriously. Former editor of the Lancet, Robbie Fox, once joked that “…the Lancet had a system of throwing a pile of papers down the stairs and publishing those that reached the bottom.” No admirer of peer review, he wondered “would anybody notice if I were to swap the piles marked `publish’ and `reject’.”
- Subscription vs. open journals. The old boys’ network would have us believe that subscription journals are better and provide a clearer sense of which manuscripts deserve publication. The problem is that the vast majority of journal articles later found to be flawed have come from these very same prestigious journals. Open journals are still new, and only time will tell whether they prove to be a better choice, but so far many excellent manuscripts have appeared in them. David J. Solomon said, “A number of highly respected journals have begun experimenting with innovative peer-review models. The British Medical Journal did away with blinding in their peer-review process as early as 1999 (Smith 1999) and many of the BioMed Central journals provide open access to the complete review record. For a three-month period starting in June 2006, Nature experimented with posting preprints for public comment in parallel with traditional peer review (Campbell 2006) and the Public Library of Science (PLoS) is in the process of launching a new journal, PLoS One, that will publish articles almost immediately with minimal screening and allow for public comment.” It is this public comment that makes the case for open journals so appealing.
- So let’s look at how a paper passes peer review. According to Dr. Richard Smith, writing in the Journal of the Royal Society of Medicine, the process goes a lot like this: “The editor looks at the title of the paper and sends it to two friends whom the editor thinks know something about the subject. If both advise publication the editor sends it to the printers. If both advise against publication the editor rejects the paper. If the reviewers disagree the editor sends it to a third reviewer and does whatever he or she advises. This pastiche—which is not far from systems I have seen used—is little better than tossing a coin, because the level of agreement between reviewers on whether a paper should be published is little better than you’d expect by chance.” Many journals do not have the expert reviewers needed to properly assess a paper, which makes the system very problematic.
- Smith said, “At the BMJ we did several studies where we inserted major errors into papers that we then sent to many reviewers. Nobody ever spotted all of the errors. Some reviewers did not spot any, and most reviewers spotted only about a quarter. Peer review sometimes picks up fraud by chance, but generally it is not a reliable method for detecting fraud because it works on trust.”
- To publish or not to publish? Reviewers face this question daily. They must weigh the strengths and weaknesses of the manuscripts presented to them. Bias then creeps in, and we get a wide variety of responses and opinions. Since most journals decide on publication after allowing only one or two reviewers to look over a manuscript, many errors are made. The BMJ gives an example of two reviewers of the same paper. Reviewer A: “I found this paper an extremely muddled paper with a large number of deficits.” Reviewer B: “It is written in a clear style and would be understood by any reader.” The same paper received very different reviews, and we the public are left with an editor’s choice to publish it or not.
- Bias is rampant in journals: authors from prestigious universities get published much more easily than those from small or unknown ones. Add in name recognition and we get a whole other level of bias. If you are famous you get published, even if your work is crap. Men get published more easily than women, and whites more easily than blacks, when the race of the author is known. A famous piece of evidence comes from Dr. D. P. Peters and Dr. S. J. Ceci. They took a dozen studies from prestigious universities that had already been published in well-respected journals. They renamed the studies, changed the authors, made minor changes to the abstracts, and even went so far as to replace the universities with fictitious ones. Then they resubmitted the papers to the journals that had originally published them. Three journals caught the prank. Of the other nine, eight rejected the papers due to “poor quality.” Remember, all twelve papers had been lauded as good, solid science when originally submitted. As the authors concluded, this shows immense bias. It is the Matthew effect: “To those who have, shall be given; to those who have not shall be taken away even the little that they have.” Journals are caught in this trap.
- Is it interesting? If a manuscript covers subject matter that is boring or uninteresting, it tends not to get published. This pushes authors to doctor up their work, adding artistic flair or outright fabrication to make it come to life. Some journals are actively trying to address this problem, with little to no success.
Journalists and scientists alike seem to treat any peer-reviewed article as blessed or sacred: it has undergone the process and is therefore proven true. Scientists take it on faith that a peer-reviewed article is solid and good, even though the evidence overwhelmingly shows otherwise. Since there is no obvious alternative, scientists will continue to believe in peer review, putting science in the awkward position of having its checks and balances rooted in belief. One could argue that this makes science no different from a religion, which also has a peer review process of so-called experts. The whole peer review question needs to be reevaluated and alternatives explored, but for now it is what we have and we are stuck with it. The joy of it all is that we get to continue quoting peer-reviewed articles as if they were the word of God. Oh joy.
Altman, D. G. 2002 Poor-quality medical research: What can journals do? JAMA 287;21:2765-2767. [doi: 10.1001/jama.287.21.2765]
Baxt, W. G., J. F. Waeckerle, J. Berlin, and M. L. Callaham. 1998. Who reviews the reviewers? Feasibility of using a fictitious manuscript to evaluate peer reviewer performance. Annals of Emergency Medicine 32:310-317. [doi: 10.1016/S0196-0644(98)70006-X]
Bloom, T. 2006. Systems: Online frontiers of the peer-reviewed literature. In Nature. http://www.nature.com/nature/peerreview/debate/index.html.
Campbell, P. 2006. Nature Peer Review Trial and Debate. In Nature. http://www.nature.com/nature/peerreview/index.html.
Chang, A. 2006. Online journals challenge scientific peer review. Retrieved on November 19, 2006 from http://www.mercurynews.com/mld/mercurynews/news/breaking_news/15655422.htm.
Debate. 2006. “Peer Review” In Nature. http://www.nature.com/nature/peerreview/debate/index.html
Editorial. 2005. “Revolutionized peer review?”. Nature Neuroscience 8; 4:397.
Faxon Institute. 1991. An Examination of Work-related Information Acquisition and Usage among Scientific, Medical and Technical Fields Westwood, Mass. Faxon Company.
Godlee, F., C. R. Gale, and C. N. Martyn. 1998. Effect on the quality of peer review of blinding reviewers and asking them to sign their reports: A randomized controlled trial. JAMA 280:237-240.
Godlee, F. 2002. Making reviewers visible. Openness, accountability and credit. JAMA287; 21:2762-2765. [doi: 10.1001/jama.287.21.2762]
Goldbeck-Wood, S. 1999. Evidence on peer review: scientific quality control or smokescreen? BMJ 318:44-45 (2 January). http://www.bmj.com/cgi/reprint/318/7175/44.
Gøtzsche, P. C. 1989. Methodology and overt and hidden bias in reports of 196 double-blind trials of non-steroidal anti-inflammatory drugs in rheumatoid arthritis. Controlled Clinical Trials 10:31-56.
Guédon, J. 2001. In Oldenburg’s long shadow: Librarians, research scientists, publishers, and the control of scientific publishing. Presentation to the May 2001 meeting of the Association of Research Libraries (ARL), at http://www.arl.org/arl/proceedings/138/guedon.html.
Horton, R. 1997. Pardonable revisions and protocol reviews. Lancet 349:6.
Jefferson, T, E. Wager, and F. Davidoff. 2002b. Measuring quality of editorial peer review. JAMA June 5, 287; 21:2786-2790.
Jefferson, T. 2006. Quality and value: Models of quality control for scientific research. In Nature . http://www.nature.com/nature/peerreview/debate/nature05031.html.
Jefferson, T., P. Alderson, E. Wager, and F. Davidoff. 2002a. Effects of editorial peer review. JAMA June 5, 287; 21:2784-2786.
Justice, A. C., M. K. Cho, M. A. Winker, J. A. Berlin, D. Rennie, and the PEER Investigators. 1998. Does masking author identity improve peer review quality? A randomized controlled trial. JAMA 280:240-242.
Koop T., and U. Pöschl. 2006. An open, two-stage peer-review journal. In Nature . http://www.nature.com/nature/peerreview/debate/nature04988.html.
Kumashiro, K. K. 2005. Thinking Collaboratively about the Peer-Review Process for Journal-Article Publication. Harvard Educational Review 75; 3:257-287.
Lock, S. 1985. A Difficult Balance: Editorial Peer Review in Medicine. London: Nuffield Provincial Hospitals Trust.
McNutt, R. A., A. T. Evans, R. H. Fletcher, and S. W. Fletcher. 1990. The effects of blinding on the quality of peer review: A randomized trial. JAMA 263:1371-1376.
Peters, D. P., and S. J. Ceci. 1982. Peer-review practices of psychological journals: The fate of published articles, submitted again. Behavioral and Brain Sciences 5:187-255.
Pocock, S. J., M. D. Hughes, and R. J. Lee. 1987. Statistical problems in the reporting of clinical trials. A survey of three medical journals. New England Journal of Medicine 317:426-432.
Regehr, G., and G. Bordage. 2006. To blind or not to blind? What authors and reviewers prefer. Medical Education 40:832-839. [doi: 10.1111/j.1365-2929.2006.02539.x]
Rennie, D. 2003. Misconduct and journal peer review. In Peer Review in Health Sciences, 2nd ed., eds. F. Godlee and T. Jefferson, 118-129. London: BMJ Books.
Sandewall, E. 2006. A hybrid system of peer review. In Nature. http://www.nature.com/nature/peerreview/debate/nature04994.html.
Schaffner, A. C. 1994. The future of scientific journals: Lessons from the past. Information Technology and Libraries 13:239-247.
Smith, R. 1999. Opening up BMJ peer review. BMJ 318:4-5 (2 January). http://www.bmj.com/cgi/content/full/318/7175/4.
van Rooyen, S., F. Godlee, S. Evans, R. Smith, and N. Black. 1998. Effect of blinding and unmasking on the quality of peer review: A randomized trial. JAMA 280:234-237.
van Rooyen, S., F. Godlee, S. Evans, N. Black, and R. Smith. 1999. Effect of open peer review on quality of reviews and on reviewers’ recommendations: A randomized trial. BMJ 318:23-27 (2 January). http://www.bmj.com/cgi/reprint/318/7175/23.
Wennerås, C., and A. Wold. 1997. Nepotism and sexism in peer-review. Nature 387:341-343.