When a group of scientists announced in 2011 that they had measured particles traveling faster than the speed of light, it shook the world of modern physics. The announcement drew widespread media attention and seemed to upend one of the bedrock theories of 20th-century science, special relativity. There was just one problem: It wasn’t true. It had all been a result of mismeasurement, in part from a loose cable.
This may be a particularly glaring example of research gone wrong, but it’s not the only one. In fact, there are good scientific reasons to think that lots of published research is actually false. In 2005, a research professor named John Ioannidis published a much-cited paper titled “Why Most Published Research Findings Are False,” in which he showed how the pressures of academic life, the small size of many scientific studies, and the preference for unexpected findings mean that even premier journals are surprisingly likely to publish findings that just aren’t true.
Here’s a breakdown of why, exactly, so much research turns out to be wrong and how we should treat new findings when they come out.
How do we know studies are false?
One way we know studies are false is that they get disproved by larger or better studies. Early studies on a B vitamin called niacin, for instance, suggested that it could help reduce heart attacks by raising so-called “good” cholesterol. Years later, a more comprehensive experiment found that it actually had no impact on heart attacks.
Drug companies face this problem all the time. They read about cutting-edge discoveries being made in academic labs, but when they try to reproduce the experiments, they can’t. Scientists at a German pharmaceutical company who tried to reproduce the results in 67 published studies reported in Nature that they succeeded only about a quarter of the time. Likewise, the American company Amgen found it could replicate the results for only 6 of 53 published cancer studies.
Is it because of fraud?
Not really. To be sure, there have been some colorful and high-profile cases of fraud in recent years, including a Harvard animal researcher who monkeyed with his data, and a Korean scientist who tricked editors into letting him review his own paper. Ultimately, though, these kinds of incidents aren’t a big reason that so many studies prove to be false.
Then why are they wrong?
Imagine you’re a budding young cancer researcher trying to make your mark by investigating whether watching movies about cancer actually causes cancer. And let’s say, for the sake of argument, that it doesn’t. People who watch cancer movies are, in fact, no more likely to get cancer than anybody else.
Does this mean your research will all come up negative? Not necessarily. You may pick people for your study who aren’t good representatives of the population at large. Just by chance, you could end up with an unusual number of folks who like cancer movies and get cancer. You might get a false positive, meaning your study would show a link between cancer and cancer movies even though they’re not really connected.
Of course, if you take all the appropriate precautions, it’s extremely unlikely that you’ll get this kind of false positive. But it isn’t impossible. Remember there are thousands of cancer researchers conducting similar studies. And with thousands of studies, the likelihood that one of them will produce a false positive increases dramatically.
If you’re the researcher who gets that false positive, you have no reason to doubt your findings. In fact, if you check your math, you can prove that your experiment, considered on its own, was extremely unlikely to lead to false results. What is more, with such surprising and original results, you have a great pitch for the journals, even if it’s entirely based on a fluke.
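The arithmetic behind this is easy to check. Here’s a minimal sketch, assuming (hypothetically) that each well-run study carries the conventional 5 percent chance of a false positive:

```python
# Chance that at least one study among many produces a false positive,
# assuming each independent, well-run study has a 5% chance of one
# (the conventional significance threshold). Numbers are illustrative.
alpha = 0.05  # false-positive rate for a single study

for n_studies in (1, 10, 100, 1000):
    p_any = 1 - (1 - alpha) ** n_studies
    print(f"{n_studies:>4} studies -> {p_any:.1%} chance of at least one false positive")
```

With a single study the risk stays at 5 percent, but across a thousand similar studies a false positive somewhere becomes a near certainty.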
Does this mean I should stop trusting scientific findings?
Not at all. But it does mean you need to treat even very high-quality research with a certain skepticism. Here are some tips:
• Pay attention to the size of the study. The larger the study, the harder it is to get false positives. It’s the difference between flipping a coin 10 times, where you might easily get 70 percent heads by chance, and flipping it 1,000 times, where the odds of that are vanishingly small.
• Be especially cautious if it’s the first such finding. Later corroborations and broader literature reviews are less likely to be misleading.
• Trust your prior beliefs. The more unlikely something seems, the less you should trust it. If you find yourself thinking, “I guess I was totally wrong about that,” you might want to say, instead, “I guess I should be a bit less sure than I was before.”
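A small simulation can make the coin-flip point concrete. This is just a sketch with made-up numbers: a 70 percent-heads cutoff stands in for a “surprising” finding:

```python
import random

random.seed(0)  # make runs reproducible

def extreme_share(n_flips, trials=100_000, threshold=0.7):
    """Fraction of simulated experiments in which at least `threshold`
    of the fair-coin flips come up heads."""
    hits = 0
    for _ in range(trials):
        # bin(...).count("1") counts the heads among n_flips fair flips
        heads = bin(random.getrandbits(n_flips)).count("1")
        if heads / n_flips >= threshold:
            hits += 1
    return hits / trials

for n in (10, 1000):
    print(f"{n:>4} flips: {extreme_share(n):.2%} of runs reach 70% heads")
```

In 10 flips, a 70 percent-heads run happens fairly often by luck alone; in 1,000 flips it essentially never does. Small studies leave far more room for chance to masquerade as a finding.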
Ultimately, though, we all get taken in by new and promising findings, even the editors at top journals. But early results are just the first phase in a research process that involves future studies and broader efforts at replication. For that reason, many of the breakthroughs and discoveries we hear about will end up being disproved, and what looked like an important step forward will often turn out to be a misstep.
More from Evan Horowitz:
Evan Horowitz digs through data to find information that illuminates the policy issues facing Massachusetts and the United States. He can be reached at email@example.com. Follow him on Twitter @GlobeHorowitz