
Are retractions on the rise? Yes—but that does not necessarily mean science is rife with error

The bottom line: Faked data is not the same as honest mistakes

A 2008 paper in the journal EMBO Reports made a startling, unsettling claim: the retraction rate in scientific journals was on the rise.

Using the Medline database, the authors, led by Murat Cokol from the Department of Biological Chemistry and Molecular Pharmacology at Harvard Medical School, found that more than 17 million articles had been recorded in Medline since 1950, and that, as of the fall of 2007, 871 of them had been retracted.
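
For a sense of scale, here is a quick back-of-envelope sketch using only the two figures quoted above (the per-100,000 framing is ours, not the paper's): 871 retractions against roughly 17 million indexed articles works out to about 5 retractions per 100,000 articles.

```python
# Back-of-envelope: overall Medline retraction rate implied by the figures above.
articles_indexed = 17_000_000   # Medline articles recorded since 1950 (approx.)
articles_retracted = 871        # retractions on record as of fall 2007

rate = articles_retracted / articles_indexed
print(f"Overall retraction rate: {rate:.4%}")                            # ~0.0051%
print(f"Roughly {rate * 100_000:.1f} retractions per 100,000 articles")  # ~5.1
```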

The first retractions were seen in the 1970s and “raised awareness of the problem of scientific misconduct and triggered the establishment of the US Office of Research Integrity,” Cokol, et al., reported. From there, the percentage of retracted articles rose steadily, from roughly 0.25% in the late 1980s to more than 1% by the early 2000s.

Retractions are usually ugly. When they happen, it means the publisher is stating that the article is not a valid source of knowledge. Retractions can ruin reputations, derail funding, stall professional advancement, and invite long-term public shaming. According to a 2015 study from the National Bureau of Economic Research, retractions result in researchers taking a 10% penalty in future citations, with that figure climbing to 20% in the case of retractions due to misconduct. (Plot twist: a 2019 article in Science and Engineering Ethics found that authors who retract papers due to “honest mistakes” are praised by peers and actually see a reputational bump.)

Cokol, et al., offered two possible explanations for the rise in retractions over the years: either the pressure to publish meant flawed articles were being produced at a higher rate, or easier identification of flawed manuscripts meant that self-correction was improving.

Since then, reported Science in 2018, the number of annual retractions has continued to grow, but the rate of increase has slowed. At the same time, journals have tightened oversight, aided by more attentive peer reviewers and by software such as iThenticate, which plumbs the depths of scholarly publishing hunting for plagiarism.

Why are articles retracted? Enago Academy lays out the two main reasons:

  • Human error: Researchers made mistakes when collecting or classifying data, applied problematic statistical analyses, or otherwise submitted information that peer review was unable to verify.
  • Intentional misconduct: Gnarlier than human error, this involves fabricated data, manipulated research, plagiarism, failure to comply with research protocols, forged signatures, fake reviews, or salami slicing.

FWIW, Wikipedia adds a third category:

  • Public outrage: References in the paper, usually religious in nature, which spark a PR disaster and force the journal to backpedal.

While the stats suggest the number of retractions in academic publishing continues to rise, a deeper dive shows that not all retractions are created equal. A 2018 analysis found that of the 30,000 authors named in the Retraction Watch Database, just 500 accounted for one-quarter of the 10,500 analyzed retractions, and 100 of those authors had 13 or more retractions each.
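
As a rough illustration of how lopsided that distribution is, here is a small sketch using only the counts quoted above; treating "retractions per named author" as a simple ratio is our own simplification, since a single retraction typically names several authors.

```python
# Quick sketch of how concentrated retractions are among repeat offenders,
# using only the counts quoted from the 2018 analysis above.
total_retractions = 10_500    # retractions analyzed
named_authors = 30_000        # authors named in the Retraction Watch Database
top_authors = 500             # authors accounting for one-quarter of retractions
top_share = 0.25

top_retractions = total_retractions * top_share
print(f"Retractions tied to the top {top_authors} authors: {top_retractions:.0f}")   # 2625
print(f"Average per top author: {top_retractions / top_authors:.2f}")                # 5.25
print(f"Average per named author overall: {total_retractions / named_authors:.2f}")  # 0.35
```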

Further, the Science analysis found that 40% of all retractions in the database were from one publisher—the Institute of Electrical and Electronics Engineers, which pulled thousands of abstracts published between 2009 and 2011, generally from Chinese authors and on a diverse range of topics. Those abstracts, the publisher said, failed to meet guidelines.

What are researchers, and those who depend on scientific publications, to make of all this? The stigma associated with retractions makes them harder to confront, notes Science, and many journals prefer simply to correct papers rather than yank them. The Committee on Publication Ethics has issued guidelines that attempt to draw the line between the two actions and to spell out what journals need to say publicly when they make a decision. Science suggests that standardizing the difference between the two might be a step in the right direction.

“Reserving the fraught term ‘retraction’ for papers involving intentional misconduct and devising alternatives for other problems,” the journal notes, “might also prompt more authors to step forward and flag their papers that contain errors.”
