A few years ago, scores of middle-aged men took part in a clinical trial of a new drug, intended to ease the symptoms of their enlarged prostates. But this wasn’t just a study of a drug’s power to help. It was also a study of the mind’s power to hurt.
Before getting their prescriptions, half of the men were told about a small risk of erectile dysfunction and other sexual side effects; none of the men had a history of such troubles. Of the men who got the warning, about 44 percent developed these symptoms. But among those kept in the dark, the rate was far lower: only about 15 percent.
It’s known as the nocebo effect: When patients are warned of possible pain and unpleasant side effects, the warning itself increases the likelihood that they’ll experience them. If the placebo effect—in which positive patient expectations can be therapeutic—is relatively famous and well explored, the nocebo effect is its little-known evil twin.
In the past two decades, however, evidence for the nocebo effect has accumulated through comparisons of side effects in the placebo groups of clinical trials—side effects, in other words, that also show up in people getting no drugs at all. More recently, a handful of studies have investigated nocebos more directly, and have suggested that patients’ negative expectations may indeed increase their suffering.
If the nocebo effect works as it seems to, it means that doctors could almost certainly reduce suffering by downplaying or even never mentioning certain side effects. But this places doctors in an ethical bind. The implication of nocebo is that the Hippocratic Oath—do no harm—may sometimes run into direct conflict with the powerful doctrine of informed consent, which obligates doctors to tell patients all the risks of a potential treatment. Doctors and bioethicists are now seriously asking an unexpected question: whether, if the truth can literally hurt, it might sometimes be acceptable, or even in the patients’ interest, to hide the truth instead.
For most of medicine’s history, doctors didn’t think patients needed to know much about their own treatment. The doctor knew best; the patient kept quiet. This started to change in the early 20th century, after a number of doctors who performed surgery without a patient’s permission were successfully sued for battery. After World War II came the Nuremberg Trials, and the horrifying accounts of Nazi doctors brutalizing and often killing prisoners in “medical” experiments. And in 1972, an investigation by the Associated Press exposed the lack of informed consent in the 40-year Tuskegee Syphilis Study, in which poor, black sharecroppers with the disease were not told their true diagnosis and were denied proper treatment.
By the 1970s, “informed consent”—the consent of patients to treatment after learning all possible risks and benefits—had become an issue of human rights as much as legal liability. Doctors also found that clinician honesty fostered trust, which made patients more likely to follow medical advice and speak up about important symptoms. So, by the 1980s, federal health agencies and the American Medical Association formally adopted ethical standards of informed consent. Most states passed laws requiring practitioners to tell patients what a “reasonable person” would consider important—say, that triptans, a class of drug prescribed for migraine headaches, might cause hot or cold sensations, drowsiness, difficulty swallowing, dizziness, chest pain, weakness, dry mouth, upset stomach, nausea, numbness, or tingling. The FDA required lists of potential “adverse reactions” to be printed on drug labels and listed in those rapid-fire voiceovers in TV drug ads.
“There was a push in the field to tell patients every possible risk,” says Dr. Frank Chessa, associate director of the ethics and professionalism course at Tufts Medical School. “What do you need to tell the patient? You need to tell the patient everything.”
But what doctors say, it turns out, can have unintended health consequences. Even subjects who get a placebo in drug trials routinely experience precisely those side effects listed on the informed consent documents.
Empirical studies focused directly on nocebos are rare, in part because of the double ethical hurdle of both deceiving research subjects and potentially causing them suffering. A 2012 review turned up only about 30 nocebo studies in total. But in these early studies, researchers have triggered asthma attacks using inhalers filled with a sham broncho-constrictor. Post-operative patients in a pain study who agreed to have their morphine drip interrupted for four hours felt twice as much pain if a doctor openly cut off the drip, compared to patients whose drip was stopped covertly.
“As more and more studies come out, the evidence starts to get pretty persuasive that the nocebo effect is real,” says Frank Miller, a bioethicist at the National Institutes of Health. “The much trickier issue is what to do about it.”
Can doctors simply withhold or fudge the truth whenever they think it will help their patients? Both ethically and legally, that’s a nonstarter. But some are trying to plot more dexterous routes through the minefield.
In the March 2012 issue of the American Journal of Bioethics, two Harvard researchers proposed “contextualized informed consent” as one possible way out of the puzzle. While informed consent obligates doctors to tell patients the truth about a treatment’s risks, the authors argue that medical truth is elusive. After all, as Dr. Rebecca Wells, a neurologist and the paper’s first author, said in an interview, “There is no black-and-white truth about what side effects exist for every medicine. [A doctor’s] words don’t simply describe the likelihood of a side effect but can actually change that likelihood.” In the paper, Wells and her coauthor, renowned placebo expert Ted Kaptchuk, write that while doctors must disclose dangerous side effects to their patients, they should choose their words much more carefully when the side effects are minor, especially when the patient has a history of experiencing minor side effects and if the potential upside of treatment is great.
What would this mean for patient care? Wells and Kaptchuk note several strategies have been suggested in theory, and to some extent practiced by doctors in the field.
One idea is to frame costs and benefits in a way that accentuates the positive, says Dr. Luana Colloca, a research fellow at the National Institutes of Health who is studying the potential impact of placebos and nocebos in clinical practice. “It’s different if we say, 2 percent of subjects experienced this nasty side effect rather than saying 98 percent did not,” she notes. In one study, for instance, people told the small chances of developing a fever or muscle aches due to a flu shot reported significantly more of these side effects than people told the much larger chances of not getting them. The wording made no difference in whether a person chose to get the shot—it just changed whether they suffered the side effect.
Another idea is for doctors to discuss nocebos with patients and ask if they would rather not know about minor side effects, while encouraging them to report any problems that do arise. “Informed consent is about respect for patient autonomy,” says Dr. Howard Brody, a bioethicist at the University of Texas, Galveston. “If, as a patient, I can make my own choices, then I can choose not to be informed about certain things.”
Still, as patients, how many of us would be willing to give up knowing about all potential side effects even if we knew this ignorance might save us some pain? It would mean ceding some control and, at a certain point, letting doctors decide what we should know about a proposed treatment. For many of us, that’s an uncomfortable thought.
Both positive framing and getting a patient’s consent not to be informed can be taken too far, Kaptchuk admits. “There’s a fine line between exuberant positivity and dishonesty,” he says. And, as one response to the “contextualized informed consent” article points out, side effects do occur independent of a nocebo effect, and many are “mild” only “as long as the patient stays in bed.” Even a small risk of drug-induced drowsiness is serious for a school bus driver, for instance.
From a legal perspective, then, telling a patient about every possible side effect is safer. There are other pragmatic concerns as well: Making nuanced decisions about contextual informed consent rather than routinely telling every patient everything takes time and more personal attention, in an era when doctors’ time per patient is on the decline. “It requires a collaborative relationship with the patient, to explain what the nocebo effect is,” says Dr. Wayne Altman, the director of student education in the department of family medicine at Tufts Medical School, who teaches the importance of encouraging positive thinking in conversations with patients. “Because if they don’t trust what I’m telling them, they can just go home and look it up online in two minutes.”
Many of those trying to draw attention to the power of nocebos say that the path forward depends on more empirical studies of their benefits and risks. Only then is a concern about nocebos really likely to take root in medicine. “We should educate future clinicians that this is not just an issue of being empathic to their patients. The words they use can have a neurobiological effect,” says Colloca, who’s working on a study comparing informed consent strategies for corticosteroid injections to treat shoulder pain.
Altman says such research would be “fascinating,” but he doesn’t think it’s needed for physicians to “help patients utilize the power of their minds” to foster healing and avoid side effects.
“We come across a lot of things in our day-to-day work that we don’t have good evidence for to guide us,” he says. “At the same time, my intuition is so strong here that it’s not something I question. I feel pretty confident that I’m helping my patients by collaborating with them in this way.”
Chris Berdik is a journalist in Boston. His book, “Mind Over Mind,” will be published in October by Current, an imprint of Penguin.
Correction: An earlier version of this story had the incorrect title of Chris Berdik’s book in the authorline. The book is “Mind Over Mind.”