
How does bad data slip through? Allegations of research fraud raise questions about ‘peer review.’

The troubles arise amid pressures to publish, too many journals, and too few reviewers.

Dana-Farber Cancer Institute is moving to retract six research papers and correct 31. (Craig F. Walker/Globe Staff)

When a blogger posted allegations that researchers at the Dana-Farber Cancer Institute had manipulated data in published studies, the reports were shocking — and yet also familiar. Such cases are appearing with growing frequency, raising concerns about the integrity of scientific research and how carefully papers are vetted at even prestigious journals.

The matter is still under investigation, although the hospital is moving to retract six papers and correct 31. But the allegations come on the heels of the discovery of data manipulation at Stanford, which prompted the university’s president, Marc Tessier-Lavigne, to resign last year (although an investigation found that he did not personally engage in research misconduct). Many of these studies had been published in peer-reviewed journals, supposedly the most reliable. So the questions remain: Why didn’t the peer reviewers catch the errors? How does bad data slip through?

“Peer review has never been the gold standard or Good Housekeeping seal of approval … that journals and scientists and universities and federal agencies want us to think it is,” said Ivan Oransky, a founder and editor of Retraction Watch, a website that tracks articles that are retracted from scientific journals. “It’s a system that can help when it works.”

But currently the system is strained by too many journals and papers, and too few reviewers. “The volume of requests for peer review is just way, way too high,” said Lisa Bero, professor of medicine and public health at the University of Colorado Anschutz Medical Campus.

When a medical journal receives a submission it considers potentially worth publishing, the editors send the work to two or more experts in the same field for their opinion. These “peer reviewers” are typically faculty members who are tenured or in tenure-track positions.

They don’t get paid for this work, but it’s considered “part of the expectation of labor on behalf of your salaried position,” said Lisa M. Rasmussen, a philosophy professor at the University of North Carolina at Charlotte and editor-in-chief of the journal Accountability in Research. Many also consider peer review their contribution to the scientific process, one they expect to be reciprocated when they have their own research to publish.

But as the number of faculty positions has diminished, the pool of people available to review papers has likewise shrunk, Rasmussen said.

Peer reviewers are not necessarily capable of analyzing the data and may not even be asked to, she added. Much of the material “cannot be evaluated by an automated tool… The raw data, even if submitted, is not something a reviewer can assess.” The solution is not to ask reviewers to vet the data, Rasmussen said. “The obvious solution is to find more tools that we would implement in house before sending out to a reviewer.”

Reviewers’ work “varies widely from very cursory to very thorough,” Bero said. The directions they receive are often vague, and rarely include scrutinizing the data, she said.

Instead of having more peer reviewers, Bero recommends having reviewers who specialize in certain aspects of research.

The freelance data detectives, such as Sholto David, the scientist who drew public attention to the Dana-Farber studies, don’t employ a single method that could be instantly adopted by journals or peer reviewers, she noted. They use a variety of methods, and none is simple. “It takes time to do this, and it takes resources. Some of these methods are quite sophisticated,” Bero said.

Elisabeth Bik, a microbiologist and science integrity consultant who raised concerns about altered images in papers co-authored by the former Stanford president, agreed that journals need to hire more staff to better scrutinize manuscripts. They have the money, she argued: They charge exorbitant prices for subscriptions, and in some cases they require authors to pay to publish their work. (For example, the journal Nature charges $12,000 for authors to publish articles that are “open access,” or free to readers.)

“Where does all that money go? It doesn’t really cost $10,000 to publish a paper, right? It’s all online, it’s not even in print anymore,” she said.

She also recommends that more papers be posted online in popular pre-print databases like bioRxiv.org before peer review even begins, to allow wider scrutiny. Honest researchers “are very happy to have multiple eyes on their papers and are happy to receive feedback,” she said.

Claudio Aspesi, a former senior investment analyst at Bernstein Research who covered the academic publishing market, said that publishers often have few incentives to invest in resources like staff to better vet articles.

“Yes, these companies are extremely profitable. They could spend more, and they could do better,” Aspesi said. “But the pressure from the market and from investors is not necessarily thinking that way. The pressure is on improving profitability in the near term, to have a clear path to increase profitability over time.”

But too many retractions could ultimately harm a journal’s bottom line if its reputation is tarnished. “There is a point where if you do things badly enough, it really does matter,” he said.

How often research fraud happens, and whether it’s increasing, isn’t known. But the problem has certainly attracted more attention lately, and as a result more papers are being retracted, Oransky said.

“There were more than 10,000 retractions last year from the scientific literature, and even that doesn’t represent all of the fraud and misconduct. That number has been rising quite steadily,” he said.

Two decades ago, about 0.02 percent of scientific papers were retracted. Now, it’s up to 0.2 percent, Oransky said. “It’s still a pretty small number, but that’s a tenfold increase in 20 years. So people are doing something about it. They’re not doing enough and they’re doing it quite slowly. But things have changed.”

The august New England Journal of Medicine has retracted two papers in recent years. One involved an image stolen from another publication, not a study. The other relied on a database that turned out to have been fabricated. The editors and reviewers were not familiar with the database, and learned their lesson, said Dr. Eric J. Rubin, editor-in-chief. Afterward, he said, “We were much more careful about vetting databases.”

The journal often makes corrections to articles — but typically these are not errors that change the study’s main findings, Rubin said. If an error has bearing on a study’s conclusions, the journal will retract it, or ask for it to be rewritten and resubmitted. But these are rare events, he said.

Before an article is published, it is run through plagiarism-checking software, and increasingly the editors take a close look at images for evidence of manipulation, Rubin said. After peer review, the journal has in-house statisticians who check the data and editors who further review the work. “We have full-time editors who are MDs, a whole bunch of them,” he said.

While there are incentives to publish quickly, Rubin noted, “The disincentive for cheating is so substantial that it will end your career if you’re caught doing this. We all know of individuals to whom that’s happened.”

Holden Thorp, editor-in-chief of another prestigious journal, Science, said that the journal published four or five retractions a year over the past four years, more than previously. He attributed the uptick to the growing number of academic “sleuths” such as Bik.

“We’ve tried to be more forthcoming than most of our peers,” said Thorp, who retracted two papers that Science had published from Tessier-Lavigne.

Science also uses software that detects plagiarism in words and images.

Thorp doesn’t blame peer reviewers for missing data problems. “That’s not really what we’re asking them to do,” he said. “Peer reviewers are not detectives. They’re people who mostly take it at face value that the experiments that have been done in the paper were done properly.”

Would it help to pay peer reviewers?

Oransky is in favor of that, provided it’s accompanied by other measures, such as reducing the volume of papers and investing in more expertise at the journals.

Fraud and errors in scientific research emerge from a complex — and some say troubled — system that involves more than just peer reviewers. “Our academic environment really drives this. There is a very, very strong requirement to publish. That helps you get grants, promotions,” Bero said. “I don’t lay blame on any particular part of the system. We do need to change academic incentives. We need to have fewer journals. … We need to incentivize and support really, really good peer review by narrowing it down.”

To prevent fraud, academic leaders need to establish a culture that rewards diligence and honesty above producing spectacular results, Rasmussen said. In a highly competitive environment, the temptation to put a thumb on the metaphorical scale can be hard to resist, she said.

“So you could, for example, create a lab environment in which failure and transparency are rewarded. ‘Great job coming forward and showing me that this is a mistake.’”



Felice J. Freyer can be reached at felice.freyer@globe.com. Follow her @felicejfreyer. Aidan Ryan can be reached at aidan.ryan@globe.com. Follow him @aidanfitzryan.