The New York Times recently reviewed a brief selection of notable scientific papers that have been retracted since 1980. From 1977 to 2010, retractions increased nearly 10-fold. Our own Seth Robey highlighted some of these instances of scientific “tomfoolery” in our April issue. The dramatic surge in retractions in recent years is not just an artifact of growth in total publications, since the number of publications increased only 44% over the last decade. In other words, more and more erroneous science is slipping through the cracks of peer review.
The most disturbing part of this upswing is that 67% of retractions (in 2013) were due to misconduct, such as fraud, duplication, or plagiarism, compared to just 21% attributable to error (primarily contamination of biological samples). Some observers see this trend as a positive one, indicating an increased ability to detect and rescind false data. Is that true? Or are these retractions indicative of increased fraud and/or a flawed review system in desperate need of an overhaul?
Are we becoming better detectives or is scientific fraud increasing?
Some argue that we are now better at identifying fraudulent data. Advances in plagiarism-detection technology (e.g., Déjà Vu) have allowed us to detect bad actors more easily. And indeed, the increase in retraction notices around 2009 correlates with more widespread use of these tools. Additionally, the length of time between publication and retraction has decreased significantly, especially in higher-impact journals, suggesting that we are identifying errors more quickly.
Moreover, while the US Office of Research Integrity, the governmental body in charge of monitoring research fraud, is hearing more allegations of misconduct, fewer and fewer of those allegations end in findings of guilt. This trend suggests that the rise in retractions is due to greater vigilance rather than increased crime. Therefore, it seems that our ability to detect misconduct, not misconduct itself, is on the rise.
But there are some contradictory and troubling statistics.
On average, 2% of scientists admit to having fabricated, falsified, or modified data or results at least once. As many as 33% admit to other problematic practices such as “dropping data points based on a gut feeling.” You might hope these figures would surprise me, but many of us know mentors who have suggested ignoring “outliers” or depicting “representative” figures that were ideal rather than typical. The rate at which scientists admit to fraud far outpaces the rate at which publications are retracted.
Something is amiss.
Lack of transparency hinders accurate estimates of fraud
One significant hurdle to understanding whether fraudulent publications are increasing is the obscurity with which retraction notices are written. Some journals will merely state, “This article has been withdrawn at the request of the authors in order to eliminate incorrect information,” without identifying the source of the incorrect data. Some notices have been even less forthcoming: when L. Henry Edmunds, editor of the Annals of Thoracic Surgery, was asked why a paper was withdrawn from his journal, he told reporters, “IT'S NONE OF YOUR DAMN BUSINESS!”
Lack of transparency not only impairs our ability to accurately assess the incidence of fraud but may also give the false impression of wrongdoing. In instances where papers are retracted due to honest error, a vague retraction notice may do more to discredit the authors than a thorough explanation would. Some proponents of veiled retraction notices argue that retractions are already far rarer than they should be because authors are (understandably) reluctant to retract their work, and that authors would be even less inclined to do so if detailed descriptions of their errors were published alongside their names.
In contrast, UK-based medical writer Elizabeth Wager hypothesized that authors and journals keep retraction notices deliberately ambiguous to save face, to avoid defamation lawsuits, or simply out of laziness. To characterize a retraction comprehensively, especially in instances of misconduct, journal editors must sift through all possible sources of error, even demanding raw data if necessary. Not all editors are that scrupulous or dedicated; an opaque retraction notice spares them significant hassle.
Moreover, retraction practices vary significantly from journal to journal. Top-tier journals such as the New England Journal of Medicine, Cell, Science, and Nature tend to retract more papers, likely because they attract more scrutinizing readers and because the pressure to publish in high-profile journals lends itself to airbrushing data. Surprisingly, not even the world's best journals have clearly articulated retraction policies. At last count, 78% of journals polled had no retraction policy or relevant information about retractions posted on their websites. It is difficult to conclude whether fraud is increasing among retracted publications when retraction policies are inconsistent and hidden.
A flawed system
The frequency of retractions is tightly tied to scientific review. When a manuscript is submitted to a journal, it is first reviewed by an editor for content, innovation, and relevance to the journal and its audience. Upon approval, the editor will send the manuscript to multiple reviewers, supposedly experts in their field. Scientists can tell the editor whom they would prefer (or prefer not) to review their work. Reviewers then determine if the manuscript is suitable for publication, needs more data, or should be rejected. In theory, this review process should also capture writing that has been plagiarized, data that has been duplicated, or figures that have been Photoshopped.
How, then, are fraudulent publications slipping through the cracks?
Richard Smith summarizes in a scathing critique: “In reality…[peer review] is ineffective, largely a lottery, anti-innovatory, slow, expensive, wasteful of scientific time, inefficient, easily abused, prone to bias, unable to detect fraud and irrelevant.” Smith, a former editor of the BMJ, illustrated the inefficiency of peer review by sending a 600-word paper containing 8 errors to 300 reviewers. Their success rate was dismal: most found only 2 of the mistakes.
Other studies have shown that reviewers agree with one another about whether a manuscript should be published little more than would be expected by chance, indicating that peer review does not provide a reliable consensus on the quality of a publication.
That’s if a manuscript gets sent to a real, live reviewer at all! In a mind-boggling scam, authors sent manuscripts to their friends or to fictitious reviewers, resulting in the quick and easy approval (and now retraction) of over 170 publications across multiple journals.
A new sentinel
While the majority of reviewing scientists appear to be asleep at the wheel, Adam Marcus and Ivan Oransky started RetractionWatch, a blog detailing which papers are being retracted and why. In 2014, they received a grant from the MacArthur Foundation to establish an online database of retracted publications. The development of this database is an important step in clarifying retractions and allowing scientists to learn from retracted data.
Why it matters
Retraction notices have risen sharply in the last six years, and according to RetractionWatch, the numbers show no signs of decline. The scientific review system is not sensitive, accurate, or efficient, and it isn’t evolving quickly enough. Incorrect data can take four years or longer to be retracted, at which point it may have already been re-circulated and accepted as fact. Even retracted papers linger on and continue to get cited.
Science is a pillar of modern society. It influences policy, public opinion, decisions in health and medicine, and technological advances. Falsified data erodes not only the public’s trust in science and scientists, but also human progress itself. The peer review and publication system needs an immediate overhaul.