Imagine a phone call from your mother, seeking help because she forgot her bank password. Except it’s not your mother. The voice is computer-synthesized, a tour de force of artificial intelligence technology.
Such a situation is still science fiction — but just barely.
Software components to make such voice-mimicking technology widely accessible are advancing rapidly. Recently, for example, DeepMind, an Alphabet subsidiary, said it had designed a program that “mimics any human voice.”
“The thing people don’t get is that cybercrime is becoming automated and it is scaling exponentially,” said Marc Goodman, author of “Future Crimes.”
The alarm about AI was sounded this year by James R. Clapper, the director of national intelligence. He underscored that while AI would make some things easier, it would also expand the online world’s vulnerabilities.
Consider the Internet’s omnipresent Captcha, the Completely Automated Public Turing test to tell Computers and Humans Apart, used to block automated programs from stealing accounts. Criminals have had software to subvert Captchas for more than five years, said Stefan Savage, a computer security researcher.
So what’s next? Criminals, for starters, can piggyback on new technology. Voice recognition is now used extensively to interact with computers. Often, when an advancement like voice recognition starts to go mainstream, criminals aren’t far behind.
“I would argue that companies that offer customer support via chatbots are unwittingly making themselves liable to social engineering,” said Brian Krebs, an investigative reporter, referring to the practice of manipulating people into performing actions or divulging information.