
In the debate about the impact of drugs on health care costs, most people focus on the end of the spectrum that hits closest to home: rising drug prices. But we must also think about where it all begins, and about the hundreds of millions of dollars wasted on scientific research that is unreliable.
The National Institutes of Health invests $31 billion a year in medical research, a significant portion of which goes to funding preclinical studies at academic centers across the nation. That work is supposed to be the cornerstone of drug discovery, since it is used by biotechnology and pharmaceutical companies to design the drugs of the future.
But a lot of what is published by academia is not reliable. Several investigations over the last few years have found that an unacceptably large percentage of preclinical studies, many of which are published in very prestigious journals, can’t be replicated.
Researchers from Bayer Healthcare and Amgen have recently reported on this issue. At Amgen, scientists were able to confirm only 11 percent of results from seminal papers in hematology and oncology that the company had deemed promising, a result the authors of the report described as “shocking.” At Bayer, almost two-thirds of early-stage projects surveyed were delayed or eventually terminated because published results could not be repeated. The authors of the Amgen piece expressed concern that some of that research had led to unnecessary clinical studies, “suggesting that many patients had subjected themselves to a trial of a regimen or agent that probably wouldn’t work.”
This isn’t just a skewed industry view. In a study published this May by the University of Texas MD Anderson Cancer Center, surveying its own scientists, 50 percent of respondents said they had tried to reproduce a published finding and had not been able to do so. A National Institutes of Health attempt to replicate spinal cord injury studies failed to reproduce the majority of published findings.
What’s emerged is a system in which those wanting to move discoveries forward (usually in industry) spend money and resources repeating experiments that have already been published, partly out of fear that what’s out there is not trustworthy. Even some venture funds have started to set money aside to replicate published results before they become the basis for new biotechnology companies. In an era of constrained budgets, this extra spending takes money away from valuable new discoveries.
To be sure, trial and error are part of the process of discovery. Some disparate results are simply a reflection of how nature works: Variations in cell types and organisms can lead to differing conclusions as knowledge progresses. But the biggest factors that can be controlled are scientific misconduct and a lack of rigorous scientific practices, and those two should be urgently addressed. Both problems thrive in the high-pressure environment of academia, which is not optimized to reward high-quality, highly reproducible work.
It’s no secret that the peer-review and grant-giving systems encourage scientists to publish a lot, publish what’s novel, and publish first. About 30 percent of those surveyed in the MD Anderson study said they had felt pressure to publish results to prove their mentor’s hypothesis, even if they were not sure of the data. Some 18 percent said they had been pressured to publish findings about which they had doubts.
If academic labs consistently followed practices known to yield reproducible data — such as repeating the same experiment multiple times and not discarding “ugly” results, or running experiments with proper controls and in a blinded fashion — many irreproducible studies would disappear. This is not done systematically today. It is encouraging, then, that high-profile journals are starting to revamp author guidelines and use checklists to better scrutinize submissions, and that the NIH has formed a committee to address concerns about data replication.
One novel program worth highlighting is the Reproducibility Initiative, which was launched last year and offers academics the chance to volunteer their studies for verification by independent vendors. If the work is replicated, it can be published in a journal and becomes part of a reproducibility collection. Over 1,800 studies have been submitted so far, and the initiative is seeking funding to launch the verification studies. Disease foundations should jump at the chance to find out whether the researchers they want to support are producing verifiable science.
We can no longer afford to publish research that spawns hundreds of secondary studies, and may even lead to clinical trials, if it hasn’t first proved to be repeatable. If academia is to be the crucial first step in creating the drugs of tomorrow, it should be held to quality standards similar to those of industry.
Sylvia Pagán Westphal is content editor for the Boston Biotech Conference series.