“It’s time for your daily check-in,” a pop-up notification prods me. I click on my phone’s Mental Fitness app, which offers up the text prompt “How are you feeling today?” After I speak into the mic for 30 seconds, rambling about whatever comes to mind, the app churns out my well-being score on a 1-to-100 scale: “51: Pay Attention.”
That assessment sounds about right, at least for the moment — I’m exhausted and numb after skimping on sleep, and my coffee hasn’t yet had a chance to kick in.
If I’d visited a real-life counselor, she might have given me a similar warning. But the Mental Fitness app, developed by Boston-based Sonde Health, derives its verdict in a totally different way. Billed as “the world’s first fitness tracker for your mind,” the app homes in on subtle features of your voice, such as its smoothness, changes in pitch, and how long you pause between phrases. On the next screen, the app explains what’s behind my so-so score: I spoke smoothly, which weighed in my favor, but the pace of my speech was sluggish. A hesitant affect could be a sign of depression, studies suggest.
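Sonde hasn’t published its exact feature pipeline, but open-source audio tools can approximate the kinds of measurements the app describes. Below is a minimal Python sketch using the librosa library and a hypothetical recording named checkin.wav; the jitter and pause calculations are rough illustrative stand-ins, not Sonde’s proprietary methods.

```python
# Illustrative only: crude approximations of two vocal features the
# article mentions (pitch "jitter" and pause length), computed with the
# open-source librosa library. Nothing here reflects Sonde's actual code.
import librosa
import numpy as np

def vocal_features(path):
    y, sr = librosa.load(path, sr=None, mono=True)

    # Estimate the fundamental frequency (pitch) frame by frame.
    f0, voiced, _ = librosa.pyin(
        y, fmin=librosa.note_to_hz("C2"), fmax=librosa.note_to_hz("C6"), sr=sr
    )
    periods = 1.0 / f0[voiced]  # pitch periods of voiced frames, in seconds

    # Crude frame-level "jitter": average variation between consecutive
    # pitch periods, normalized by the mean period. Higher = less smooth.
    jitter = np.mean(np.abs(np.diff(periods))) / np.mean(periods)

    # Pauses: silent gaps between non-silent stretches of the recording.
    speech = librosa.effects.split(y, top_db=30)  # [start, end) sample indices
    gaps = (speech[1:, 0] - speech[:-1, 1]) / sr
    mean_pause = float(np.mean(gaps)) if len(gaps) else 0.0

    return {"jitter": float(jitter), "mean_pause_sec": mean_pause}

print(vocal_features("checkin.wav"))  # hypothetical 30-second check-in
```

A production system would combine many such acoustic measurements, but the principle is the same: the signal comes from how you speak, not what you say.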
Mental Fitness joins a phalanx of artificial intelligence apps and programs that detect and track signs of mental distress from things like your status updates, your texts, your sleep schedule, and the tone of your voice. The idea, developers say, is to spread the pursuit of well-being — and help at-risk people nurture their mental health so they won’t take a plunge requiring months of white-knuckled recovery.
The prospect of a virtual mental health monitor pricked up my antennae for personal reasons. I have a history of depression that surfaces during stress, and a few years ago, my psychiatrist leveled with me: I was going to have to be vigilant pretty much forever if I wanted to keep my illness at bay. That chat convinced me to stay on a low-dose medication that was working well.
But I know medicine sometimes stops working, and I’ve learned the hard way that the worse my mental state gets, the longer it takes me to recover. For someone like me, AI early warning systems could be transformative if they live up to their billing.
The newer systems are more sophisticated than those of a few years back, like an early Facebook tool that tried to gauge suicidal intent from comments between friends such as “Are you OK?” While the new tools aren’t yet used for official diagnoses, studies suggest they can pick up signs of mental health trouble early on, sometimes before users realize anything is wrong.
“What we’re doing is giving you a read of what your voice or your biomarker is telling us at that moment in time,” says Sonde CEO David Liu. “I like to call it an early warning system. It’s a window into your body, through your voice, that has not been open before.”
In the fog of pandemic-era stress, having a wide range of accessible mental hygiene aids is more appealing than ever — especially since the United States has a shortage of mental health providers. More than half of American counties don’t have a single psychiatrist. And the stakes of timely intervention couldn’t be higher, as studies suggest that unchecked bouts of depression can do lasting damage to the brain.
But whether AI data monitoring truly sustains well-being is still an open question. And users will have to decide whether the benefits justify letting companies track some of the most sensitive data they could possibly generate.
Pinpointing subtle changes
Though each mental health app or program collects different data from users, many of them work in similar ways. Before launching an app, researchers train its AI system by supplying it with large data sets — for example, thousands of texts, updates, or voice recordings, some from people in robust mental health and some from people who are struggling.
As the program processes this avalanche of data, it learns to zero in on subtle features that distinguish members of the first group from those in the second. Once the AI system is fully trained, it should — at least in theory — be able to detect whether an incoming text or voice update raises mental health concerns. If your voice displays the kind of “jitter,” or lack of smoothness, Sonde’s AI has found in depressed people’s voices, your Mental Fitness well-being score will drop accordingly.
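To make that training loop concrete, here’s a toy Python sketch using scikit-learn. The handful of invented, labeled sentences stands in for the thousands of real samples such a system needs, and the model is a generic text classifier, not any company’s actual pipeline.

```python
# Toy version of the workflow described above: train a classifier on
# labeled examples, check it on held-out data, then score a new message.
# The texts and labels below are invented stand-ins for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

texts = [
    "slept well and excited for the weekend",
    "great run this morning, feeling energized",
    "dinner with friends was wonderful",
    "finished the big project and feeling proud",
    "can't get out of bed, everything feels heavy",
    "haven't slept in days and nothing matters",
    "too tired to see anyone again",
    "feeling numb and worthless lately",
]
labels = [0, 0, 0, 0, 1, 1, 1, 1]  # 0 = not struggling, 1 = struggling

X_train, X_test, y_train, y_test = train_test_split(
    texts, labels, test_size=0.25, stratify=labels, random_state=0
)

# TF-IDF turns each text into a weighted word-count vector; logistic
# regression learns which features separate the two groups.
model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression(max_iter=1000)
)
model.fit(X_train, y_train)

# Accuracy on held-out samples is the kind of figure such studies report.
print("held-out accuracy:", accuracy_score(y_test, model.predict(X_test)))

# Probability that a new, unseen message comes from the at-risk group.
print(model.predict_proba(["nothing feels worth doing anymore"])[0, 1])
```

Real systems use far richer features and vastly more data, but the train-then-score structure is the same.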
While the science of AI mental health monitoring is young, the technology does show promise in detecting which users are at risk.
Researchers from Australia’s University of Newcastle trained one new AI tool on thousands of Twitter messages, some from people diagnosed with depression and others from nondepressed people. The system later proved more than 70 percent accurate at identifying new text-based messages that came from depressed people, even if the messages made no mention of diagnosis, sadness, or depression at all. And Sonde scientists, collaborating with other researchers, have reported that their AI-based system reliably picks out voice recordings that come from depressed people.
Heeding early warnings
By flagging signs of trouble in user data or at check-ins, developers say they’re empowering people to take steps that bolster their mental health. “Pay Attention” notifications are meant to serve as a gut check, motivating you to book a counseling visit or see a doctor. “With objective data and information,” Liu says, “people get smarter about their own bodies, about their own health, and they will take action.”
An app called WeBeLife, the brainchild of Barbara Van Dahlen, a psychologist, Time 100 honoree, and former executive director of a mental health task force at the Department of Veterans Affairs, takes things to the next level by learning from users’ updates about their moods, physical activity, and sleep habits. That means the system starts to notice things like whether you have a predictable dip in well-being after days of poor sleep — and urges you to do something concrete to prevent that dip. When the app finds something amiss in your data, “you get an alert,” Van Dahlen says. “And it immediately links you: ‘Here’s some tips to try. Check this out.’” WeBeLife also encourages users to join small groups called “pods,” whose members keep tabs on one another and swoop in with extra support when needed. One US university, the College of William and Mary, will be recommending the app and its pods to students as tools to maintain their well-being.
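The sleep-and-mood pattern Van Dahlen describes amounts to a simple statistical check once the data is logged. Here’s an illustrative Python sketch using pandas; the column names, numbers, and alert threshold are all invented, and WeBeLife’s actual analytics aren’t public.

```python
# Illustrative sketch of the pattern described above: does a short night
# of sleep reliably precede a dip in self-reported well-being? All data,
# column names, and thresholds here are invented for illustration.
import pandas as pd

# One row per day: hours slept the night before, plus that day's
# self-reported mood on a 1-to-100 scale.
log = pd.DataFrame({
    "sleep_hours": [7.5, 6.8, 4.0, 7.2, 4.5, 8.0, 3.5],
    "mood_score":  [64, 61, 58, 63, 52, 66, 49],
})

short_night = log["sleep_hours"] < 5
dip = (log.loc[short_night, "mood_score"].mean()
       - log.loc[~short_night, "mood_score"].mean())

# If mood runs consistently lower after short sleep, nudge the user.
if dip < -5:
    print(f"Alert: your mood averages {abs(dip):.0f} points lower after "
          "a short night. Here are some tips to try tonight.")
```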
Initial reviews of AI well-being apps hint at their potential as an affordable safety net for users at risk. “This app was recommended to me at a time I was having one of my ‘low’ days. I started using it right away, and I love it,” wrote a reviewer of an app called Kintsugi, who said the app helped identify what triggered their mood spirals. “Being able to go back and see my history gives me additional insights that help me to better understand myself.”
Right now, though, it’s too early to tell if apps like these can help keep people afloat for a long time. It’s one thing to show that your AI system’s pretty good at identifying text or voice samples from depressed people — and quite another to prove that AI tracking consistently boosts people’s well-being. “There is still a long way to go before AI-powered vocal biomarkers can be endorsed by the clinical community,” The Lancet’s editorial board recently argued, in a piece about the technology’s potential to diagnose a variety of illnesses.
Companies like Sonde are starting to study the real-world results. In a small pilot study at the Cognitive Behavior Institute in Pittsburgh, Mental Fitness got positive reviews from patients and mental health providers. Sonde recently launched a larger follow-up study of 150 adults with depression symptoms who will use Mental Fitness to track their voice samples over a three-month period. The study’s findings, Sonde hopes, will lend more insight into how the platform affects users’ mental outlook.
Listening in the background
While long-term verdicts on the apps’ effectiveness are pending, the way they work is already forcing users to consider how far they should let startups into their mental space. Everyone’s used to social networks that collect people’s online data to turn a profit, and for many people, the rewards of engagement seem to outweigh the privacy drawbacks. But should that calculus change with apps that give companies a direct line into your darker moments?
That depends on how much you trust the developers. Both Liu and Van Dahlen stress their commitment to user privacy and say their companies won’t share personally identifiable data with outside sources. “The only time we ever share individual data is if someone is a danger to self or others,” says Van Dahlen. “That’s it.”
At the same time, Sonde, WeBeLife, and Kintsugi have privacy policies that state they may let third parties use software to assess how users interact with their apps. University of Maryland computer scientist Jennifer Golbeck is wary of the idea of bringing such outside tracking software on board. Third-party trackers may not connect the data they gather to users’ real names, but some obtain IP addresses or device IDs that might be traceable to individuals, she notes. It’s often unclear what happens to the user data that third-party software collects, Golbeck says. “Does it go to data brokers? Is it used for advertising? It’s kind of a black box.” She urges app developers to give users the option of storing mental health data on their own devices so that no one else — including external data crunchers — will have access to it. (Sonde’s Liu notes that third parties the company works with are required to keep data they collect confidential. WeBeLife’s privacy policy says third parties can’t use your data for promotional purposes.)
Companies like these will have to vouch for the integrity of the partners they bring in. Sonde’s goal isn’t really to get everyone using the Mental Fitness app, Liu says, but to secure deals with bigger fish: businesses with a vested interest in mental health monitoring. Sonde has signed a contract with the wellness company SNAP Brands, which plans to bundle Sonde’s tech into its own app as an adjunct to its offerings. And WeBeLife is seeking corporate customers who’ll pay for mental health monitoring and the aggregated data it generates. “We’re having conversations within government, we’re looking at health care delivery systems, we’re looking at populations that are especially in need of this kind of platform,” Van Dahlen says.
Still, grass-roots enthusiasm is what can best propel AI well-being trackers to success, and it’s hard to say how much that enthusiasm will build. My own trial of Sonde’s monitoring app was inconclusive. While I liked watching my Mental Fitness scores come in (and was relieved when they weren’t terrible), I noticed they all hovered in a narrow range between 51 and 61, even as things like delayed flights and party-planning anxiety affected my mood over time.
Since I’m feeling mostly OK these days, I’ve scaled back on my Mental Fitness routine. If I sense I’m getting closer to the edge, I might resume daily check-ins. I’m not sure whether I fully trust companies with my most sensitive data. But I also know that if I let my mood issues snowball, I’ll be left shoveling out from under it. In the end — as with so much else in mental health management — I’ll just have to go with my gut.
Elizabeth Svoboda, a writer in San Jose, Calif., is the author of “What Makes a Hero?: The Surprising Science of Selflessness.” She’s at work on a book about psychological pacing.