The popular artificial intelligence app ChatGPT can do many things, but it can’t tell you how long you’re going to live. Not yet, at least.
But now a team of scientists at Boston’s Northeastern University and in Denmark say they’ve built an experimental AI system that can predict the patterns of a person’s life with disquieting accuracy. According to a peer-reviewed paper published last week in the journal Nature Computational Science, the AI system, called “Life2vec,” can accurately assess whether the person is extroverted, whether they are likely to emigrate, and even approximately how long they will live.
One of Life2vec’s creators, Sune Lehmann, said the AI system belongs strictly in the lab for now, because its predictive capabilities could have unpredictable real-world consequences.
“There’s some decisions you should never use it to make,” said Lehmann, a professor of networks and complexity science at the Technical University of Denmark and a former postdoctoral fellow at Northeastern and Harvard universities.
Life2vec is a “large language model” AI built on the same principles as ChatGPT. This type of AI can process vast amounts of data and extract patterns of behavior that are invisible to humans or to less sophisticated software.
Life2vec was trained on a massive database maintained by Statistics Denmark, a government-funded service that contains detailed information on 6 million citizens. Access to the data is tightly regulated for reasons of privacy, so Lehmann and his colleagues needed special permission to use it.
The information included home and work addresses, school records, medical records, marital status, and career and income data. It also included the results of personality and social behavior tests taken by thousands of Danes. The research team looked at citizen data collected between 2008 and 2015.
“We now live in an era where we can look at all the data all at once,” said Northeastern computer science professor Tina Eliassi-Rad, a coauthor of the paper. “It means you can pay attention and learn correlations between every piece of data we have.”
The researchers tested Life2vec’s predictive abilities by selecting 100,000 people from the training database. About 30,000 of those chosen had died between 2016 and 2020, after the training period. To make it tougher to predict people’s lifespans, all 100,000 were between 35 and 65 years old, because relatively few people in this age range would die in the subsequent four years. Still, when it was asked to pick which of them had lived and which had died, the computer got it right nearly 79 percent of the time.
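For readers curious what "got it right nearly 79 percent of the time" means in practice, here is a minimal sketch of that kind of evaluation. The numbers are illustrative only, not the study's actual per-person predictions: the model labels each person in the holdout as having died (1) or survived (0), and accuracy is the fraction of labels that match the real outcomes.

```python
def accuracy(predictions, outcomes):
    """Fraction of predicted labels that match the true outcomes."""
    assert len(predictions) == len(outcomes)
    correct = sum(p == o for p, o in zip(predictions, outcomes))
    return correct / len(predictions)

# Toy holdout: 10 people, 3 of whom died, mirroring the roughly
# 30 percent mortality rate in the 100,000-person test sample.
outcomes    = [1, 1, 1, 0, 0, 0, 0, 0, 0, 0]
predictions = [1, 1, 0, 0, 0, 0, 0, 1, 0, 0]  # hypothetical model output

print(accuracy(predictions, outcomes))  # prints 0.8 on this toy sample
```

One reason the base rate matters: with about 30 percent of the sample having died, a model that simply guessed "survived" for everyone would already score around 70 percent, so an accuracy near 79 percent reflects real predictive signal beyond that baseline.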
Eliassi-Rad and Lehmann aren’t sure how the AI could figure this out. They found that even when Life2vec didn’t take medical records into account, it was still fairly accurate in predicting mortality. This suggests that the AI is relying on other factors that affect lifespan, such as a person’s choice of career.
Lehmann said that because the AI is built with detailed information on millions of people, researchers could use its mortality estimates to figure out the best behaviors for good health. For instance, they could single out all the 40-year-olds who died early and ask the system what they had in common.
Life2vec was also tested on something far more subjective and subtle: Could it predict whether a person considered themselves an extrovert? The scientists compared the predictions to data from thousands of Danes who have taken self-administered personality tests and found that Life2vec was consistently able to predict whether a person was introverted or extroverted. Lehmann thinks the AI figures it out based on the individual's career choices and family relationships.
“Extroverted people tend to live extroverted lives,” he said.
In a final test, Life2vec was asked to predict whether specific people in the database would emigrate to another country over the next four years. When its answers were checked against government emigration statistics, the AI system proved correct about 73 percent of the time.
One major caveat: Life2vec won't produce reliable results for people living in the United States or any country other than Denmark. It's been trained on the personal data of Danes, who presumably live differently from people elsewhere. So a US version would have to be based on data collected from US residents.
That might already be happening, Lehmann said. The same kind of data stored by Statistics Denmark is already filed away by US corporations and government agencies. Much of it is also held by giant social networks like Meta. It's conceivable, if unproven, that organizations are building AI systems that seek to predict the future of their users.
“I think it’s naive to think they’re not doing it already,” Lehmann said.
That could be a good thing, according to Eliassi-Rad, who foresees the use of predictive AI by scholars and governments to evaluate the impact of social policies.
“You can scan your society and see how your policies, your laws, are affecting people,” she said.
For instance, AI might someday be able to predict whether a child will have problems in school. That could be good news, Lehmann said, because parents and teachers could obtain remedial help for the child. But it could also do harm by causing some parents or teachers to give up on the child.
“This is the kind of discussion that we need to start having, because these machines are going to start arriving,” Lehmann said.