Science In Mind

Pressure grows to improve research validity

Drug treatments that appear to work in mice often do not garner equally significant results in human studies. (iStockphoto)

It is a story that unfolds in biomedical research more often than anyone would like: a promising research result sparks excitement about treating or understanding a disease. Then, efforts to repeat the experiment in other laboratories fail.

In the latest example that has been quietly brewing in scientific circles, a Cambridge-based nonprofit reported Wednesday in the journal Nature that more than 100 potential drugs for the lethal neurodegenerative disease amyotrophic lateral sclerosis failed to show benefit in carefully designed mouse studies. Nearly a fifth of those compounds had been previously reported to slow the disease in mice, and eight of them were ultimately tested and failed to work in thousands of patients.


The report from a scientist at the ALS Therapy Development Institute is just the latest to raise deep questions about how to improve the validity of research that is largely funded by taxpayer dollars. Results that turn out not to be repeatable for easily preventable reasons, such as an insufficient understanding of how well an animal version of a disease mimics the human illness, not only waste money but also squander precious time for patients.

“The problem is definitely not isolated to ALS. . . . It’s pandemic across drug discovery,” said Steven Perrin, chief scientific officer of the ALS Therapy Development Institute. “A patient in a disease like ALS, or a very aggressive oncology indication like pancreatic cancer, possibly has one shot on goal to find a treatment that might slow down their disease. . . . It’s our responsibility as scientists and doctors and clinicians to push the best opportunities forward for patients.”

Holly Ladd of Newton has ALS and has participated in clinical trials. She is paralyzed from the neck down and depends on a feeding tube for nutrition.

“I’m optimistic when reading that things work in mice,” Ladd wrote using an eye-controlled computer. “But it is a long way between mice and humans. I’ve learned not to get excited.”


In science, the gold standard for any new result is whether it can be reproduced by other laboratories. In very rare cases, results cannot be repeated because of scientific misconduct. Far more often, experiments fail when other laboratories try to repeat them because the sample in the original study was too small, the description of the method was imprecise, or the original scientists failed to properly guard against bias.

Two years ago, a cancer scientist who formerly worked at the pharmaceutical company Amgen disclosed that his team was able to successfully repeat only six of 53 landmark research studies that had been cited hundreds of times. The scientists contacted some of the original teams that produced the results and asked them to repeat the experiments, with one simple change: the scientists conducting the experiment would be blinded to which group received the intervention and which had not. With that change in place, the original teams could not reproduce their own results.

A year earlier, scientists at Bayer HealthCare reported that among 67 basic research studies they reviewed — the laboratory experiments that precede human trials — the reported data were successfully replicated in only about a quarter of cases.

“What we’re saying is in preclinical research, the majority — perhaps the vast majority — are not able to be reproduced,” said C. Glenn Begley, the chief scientific officer of biotechnology company TetraLogic Pharmaceuticals in Malvern, Pa., who formerly worked at Amgen. “I don’t understand how we can squander precious research dollars to generate results where we know maybe 70, 80, 90 percent are not correct.”


Increasing reproducibility has become a central topic for the National Institutes of Health, the nation's largest funder of biomedical research, which has launched a number of pilot programs to see what changes could improve the process. Efforts range from a training module that teaches young scientists to rigorously design experiments to changes in how grant applications are reviewed, with increased scrutiny of the preliminary work on which a grant is premised.

“In order to ensure the very best use of research funds, we need to begin to ferret out what the underlying causes are and begin to address these different components with interventions,” said Lawrence Tabak, principal deputy director of the NIH.

Tabak said that correcting the problem will take a team approach. Scientific journals will need to enforce more rigorous standards about which studies get published and require more convincing evidence to support findings. Institutions will need to recognize the pressures on scientists to publish in high-profile journals, an incentive that may inadvertently increase the number of false results.

Why awareness of the issue has surged now is less clear. Leonard Freedman, chief executive of the nonprofit Global Biological Standards Institute, which is trying to find ways to improve the quality of biomedical research, said that the problem may be getting greater scrutiny because a recent push for translational research has brought basic science closer to the lives of real people.


“The public is much more aware of what’s going on,” Freedman said. When the studies are “so much closer to a clinical candidate or a clinical trial, the stakes are really much higher.”

Carolyn Y. Johnson can be reached at cjohnson@globe.com. Follow her on Twitter @carolynyjohnson.