TO DEMONSTRATE HOW a computer can identify cancer — or the earliest suggestion of abnormal cell growth — Regina Barzilay points to a screen showing three images of her own mammograms. The first image is from 2014, the year she was diagnosed. Then she points to her tests from 2013 and 2012, in which a small white mass is clearly visible.
“It was already there,” she says, looking at me incredulously. A mammogram might be the best tool available to screen for breast cancer, but it failed to catch hers. The computer scientist, whose cancer is in remission thanks to surgery and radiation, hopes that for future patients, breast cancers will be spotted earlier. In her research at MIT’s Computer Science and Artificial Intelligence Lab, she is developing software that aims to teach a computer to analyze mammogram images more effectively than the human eye can and to catch signs of cancer in its earliest phases — maybe even before abnormal cell growth turns into the disease. In other words, she’s trying to reinvent the mammogram through artificial intelligence.
Experts disagree on whether regular mammography screenings lead to lower breast cancer mortality. While a standard mammogram can detect a tumor as small as 1 mm, it is limited both by the sensitivity of the machine and by the ability of the human eye reviewing the images, which is why Barzilay’s cancer wasn’t detected through a standard mammogram. Barzilay, a 48-year-old born in Moldova, won a MacArthur Foundation “genius grant” in 2017 for her research in natural language processing. In 2015, her lab began analyzing 71,000 mammograms from 31,000 patients who either did or did not receive a cancer diagnosis within five years. The goal was to train the computer to identify patterns, invisible to the human eye, that indicate the earliest stages of cell mutation.
“Our next step is to use the model to look at the image and tell me that in two years or five years this woman is likely to get cancer,” she says. Her partner in this effort is Dr. Constance Lehman, the chief of breast imaging at Massachusetts General Hospital, which plans to implement the AI risk assessment process in January. Barzilay’s team has used data from the Henry Ford Hospital in Detroit to ensure that the program works effectively across different demographic groups.
“We’ve made great progress in medical oncology in the past five years,” says Dr. Lecia Sequist, a lung cancer specialist at MGH and the head of its new Center for Innovation in Early Cancer Detection. “But once a cancer spreads, we can try to keep it under control or convert it to a chronic disease, but we can’t really eradicate it.” She ticks off screening technologies such as mammograms, colonoscopies, and a new low-dose CT scan used in lung cancer screening. But existing tests can over- or under-diagnose in ways that make our health care system more costly. And dozens of other types of cancer have no screening technologies at all. Sequist aims to change that at the new center, which brings together specialists from different medical disciplines and beyond, including those developing new screening tools such as molecular analysis of blood samples.
IN LATE 2014, I WENT to the doctor for a routine mammogram. “No sign of malignancy,” my report read. Genetic testing had already revealed that I carried a gene mutation called BRCA2 that put my risk of developing breast cancer by age 80 at 69 percent. That information had bumped me into the high-risk category, so my doctor had also ordered annual MRIs, which is how, in spring 2015, an almost matching set of invasive ductal carcinomas was detected. By the time of my double mastectomy, the cancer had already spread to one lymph node. Would artificial intelligence have caught it earlier?
The idea of early detection in medicine is relatively new. For centuries, doctors treated disease only after a patient began showing symptoms. Medical advances in the 20th century often involved new treatments, including innovative surgeries, radiation, or new drugs. But we are seeing the beginnings of a new era in which people may be diagnosed — and treated — before the disease gets ahead of the doctors. Patients could be monitored not for cancer, but for suspicious signals that are likely to lead to cancer. An annual blood test could find proteins that suggest a patient is likely to have a heart attack or develop diabetes in 10 years. And neurological differences such as autism or dyslexia could be detected earlier, widening the window for intervention. The term “early detection” is nebulous: What constitutes early? And if a doctor identifies a condition earlier, can medical interventions make a difference — or even save or prolong a life?
On a drizzly morning in late November, I visit Dr. Mehmet Toner’s lab in the Charlestown Navy Yard to see the seemingly impossible: a chip his lab developed that can find a single tumor cell in a test tube containing 10 billion cells. The approach, sometimes called a “liquid biopsy,” was once dismissed by experts as unworkable. Toner, an expert in microfluidics, runs MGH’s BioMEMS Resource Center, which develops tiny electromechanical diagnostics. (MEMS stands for microelectromechanical systems.)
“If you think about it, blood has every piece of information you need to know about almost any disease, whether it’s Alzheimer’s, Parkinson’s, prenatal diagnoses, cancer, stroke,” he says. “We just don’t know how to read the blood.”
The blood test is the world’s most prescribed diagnostic, mostly because it offers an easy window into our general health. A tube of blood, holding 5 to 10 milliliters, contains white and red blood cells and platelets, but it also contains lipoproteins that carry cholesterol and chemicals such as glucose. In the future, Toner envisions a high-octane blood test that can detect any particles or cells circulating in the blood that shouldn’t be there.
Toner’s lab developed its chip as part of a project to detect tumor cells in lung cancer patients before they metastasize. But he envisions using the test to find abnormal cells before they become cancerous. Two hundred million people in China have cirrhosis, and of those, 2 to 8 percent might develop liver cancer within a year or two. “If you could diagnose early, you could ablate the tumor, remove the lobe,” he says. “You can cure cancer before it spreads.”
TorpedoDx, a Cambridge startup that Toner helped found, is about to start a large multistage clinical trial for early detection of liver cancer, but Toner says his lab has seen good results in detecting skin, breast, and prostate cancers as well. If a tube of blood can tell us that cancer might soon develop, or has already begun to develop, in a patient, what else can it tell us?
Quite a bit, says Dr. Robert Gerszten, chief of cardiovascular medicine at Beth Israel Deaconess Medical Center and an expert in proteomics, the study of proteins in the body. We can learn a lot from our genes, but proteins determine what happens in our body minute by minute — and that information can be found in the blood if we look for it.
“You go to your doctor and you get a blood test. You get your total cholesterol level. You might get your thyroid checked, or blood sugar levels. It’s what we often call a chem 20 — that’s your routine yearly physical,” he says. “But now we could do a chem 2,000 or almost a chem 10,000, so that the window into what’s transpiring in our body is just so much greater.”
If someone has high markers of certain proteins, for instance, that person’s risk of developing heart disease is high whether or not he or she has high cholesterol. And those markers are detectable in a blood test 10 years before someone might develop heart disease or diabetes. For now, these comprehensive blood tests cost an order of magnitude more than a standard panel. Gerszten shows me the $500,000 machine his lab uses to analyze the proteins in a blood sample. “At some point the prices should be the same,” he says. “You should be able to measure 2,000 things for what it costs to measure 20 today.”
Next-generation blood tests could also help doctors to predict which patients will respond to a specific therapy, Gerszten says. The goal is to know, based on a patient’s proteomic profile, whether putting a healthy person on a statin to reduce cholesterol or a drug to lower blood pressure will stop heart disease before it starts. Gerszten is also exploring the possibility of measuring proteins in the blood to detect mental illnesses like schizophrenia, for which “there really are no guideposts,” he says. Down the line, he envisions testing children to identify diseases they might not develop for decades.
COLETTE BELLIVEAU’S FEET DANGLE, her silver high-top sneakers swinging beneath the table in a Boston Children’s Hospital testing room. The 5-year-old is looking at an app about an animated bird named Pip who wants to reach a pond to sail his toy boat. The game prompts Colette to touch the appropriate symbols as each letter or letter sound is named, then moves on to identifying initial sounds in words and to naming speed, measuring how quickly Colette can name a series of objects. It’s testing whether she is at risk for a learning disability such as dyslexia.
On average, children are diagnosed with dyslexia in third grade, yet it’s well known that the most effective window for early intervention is in first grade, kindergarten, or perhaps even earlier. Dr. Nadine Gaab, a professor of pediatrics at Boston Children’s Hospital and Harvard Medical School, calls this “the dyslexia paradox.” Which raises the question: Why aren’t children screened earlier?
The five- or six-hour battery of tests currently used to diagnose dyslexia is designed for older children, must be administered by a specially trained professional, and costs thousands of dollars. Gaab’s app offers a less expensive, more accessible, and generally easier way to identify kids at risk of missing the state’s literacy milestones well before they fail. Such early screening will soon be required in Massachusetts: A bill signed into law in October requires public school districts to identify at-risk students beginning in kindergarten. The Andover Public School System has signed on to test Gaab’s app starting this month.
“The ability to screen children at a much younger age, before they have encountered reading failure, is very exciting,” says Sara Stetson, Andover’s assistant superintendent of student services. Ultimately, Gaab hopes to make the test available in schools nationwide, as well as in public libraries and pediatricians’ offices.
Down the hall from Gaab’s lab, a happy 2-year-old named Tyler is distracted with bubbles and stickers as a researcher prepares him for EEG testing by placing a mesh cap fitted with 128 sensors on his head. The device detects the electrical activity in his brain as his eyes track an object moving on a screen and, later, as he watches a woman’s face on a screen. Sometimes she will say a word, but there is no audio — Tyler just sees her lips move. Sometimes her lips might appear to be saying “ball,” but the audio track says “cat.” The point is to identify how Tyler’s brain integrates audio and visual information.
The test, a high-tech screening for autism, is followed by a more traditional test in which Tyler interacts with a researcher in a room filled with toys. At one point, the researcher places his hand in front of a toy he knows Tyler wants to play with and waits to see if the boy makes eye contact. Will he focus on the person who is blocking his access to the toy or on the toy itself? The first time, he reaches out and moves the researcher’s hand out of the way. The next time, he looks up, wondering. I’m watching this unfold on camera as I sit next to a researcher in an adjacent room, and she quietly says, “Yes!” It’s a sign of social interaction suggesting that Tyler might not be autistic, the outcome she is hoping for.
But if he does show signs of being on the autism spectrum, at least it will have been caught early. “On average, children in the US are diagnosed with autism at around 3 years old,” says Dr. Charles Nelson. That’s unfortunate, both because the younger a child is diagnosed, the more impact intervention can have on a developing brain, and because access to publicly funded early intervention programs often ends when a child turns 3. Nelson’s lab starts testing kids known to be at high risk — children who have older siblings with autism, for instance — as young as 3 months. But the lab is also involved in a multisite trial looking to identify biomarkers — biological measures — associated with autism, with the hope that once they are found, new treatments could be developed.
WHAT IF? THAT’S THE QUESTION that runs through my head while I sit with these experts. What if my friend Bill — who ran circles around me as we jogged along the San Francisco Embarcadero — had taken a blood test that detected his risk of heart disease before he dropped dead at mile 24 of a marathon at the age of 43? What if my own breast cancer had been caught earlier, potentially saving me from a terrifying year?
The week after Thanksgiving, Regina Barzilay’s lab analyzes my last mammograms, taken in the years before my diagnosis and subsequent double mastectomy, chemo, and radiation treatment. In scans taken six years before I was diagnosed, the algorithm assessed my five-year risk of developing cancer as higher than that of 75 percent of the women screened at MGH. The software can’t see a tumor; it flags patterns in my breast tissue as possibly cancerous. What if?
“We have a long way to go,” Barzilay tells me as she walks me to the door. “But we’ll get there.”