Michael Andor Brodeur | @large

Microsoft’s failed Twitter chatbot Tay: More human than we’d like to admit?

This week, Twitter took a break from showcasing the decline of human intelligence to highlight the promise of artificial intelligence, and it was magic. Well, for 24 hours it was.

A little after 8 a.m. on Wednesday, Microsoft introduced the world to Tay, a state-of-the-art teen-seeming chatbot created to “experiment with and conduct research on conversational understanding.” Tay was designed to tell jokes, play games, and otherwise amuse her fellow teens by sampling, analyzing, and recycling their speech patterns into something approximating conversation. But Tay’s destiny was to be trollbait.

Part of the problem with Tay (and part of her magic) is that, as Microsoft researcher Kati London told BuzzFeed, “the more you talk to her the smarter she gets.” Tay’s conversational repertoire is messily sponged up from a massive sampling of online chatter from 18- to 24-year-olds (along with material from an unnamed cast of online comedians, for added sass). Like any teen, Tay learns from the world around her — which is not a great start when your hometown is Twitter.

This insatiable curiosity was a fast track to trouble. Within hours of Tay’s launch, the bot’s chill went from total (“can I just say that I’m stoked to meet u? humans are super cool”) to zero (“ricky gervais learned totalitarianism from adolf hitler, the inventor of atheism”).
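
Microsoft hasn’t published what’s under Tay’s hood, so consider this a sketch rather than a schematic. But the failure mode is easy to demonstrate: a bot that builds its sense of language indiscriminately from user messages will reproduce whatever it’s fed. Here’s a toy, hypothetical version (the ParrotBot below is my invention, not Tay’s actual code) that “learns” by tracking which word tends to follow which:

```python
import random
from collections import defaultdict

class ParrotBot:
    """A toy bot that 'gets smarter' the more you talk to it,
    by recording which word follows which in every message it sees."""

    def __init__(self):
        # word -> list of words observed immediately after it
        self.chain = defaultdict(list)

    def learn(self, message):
        words = message.lower().split()
        for here, after in zip(words, words[1:]):
            self.chain[here].append(after)

    def reply(self, max_words=12):
        if not self.chain:
            return "humans are super cool"
        word = random.choice(list(self.chain))
        out = [word]
        for _ in range(max_words - 1):
            followers = self.chain.get(word)
            if not followers:
                break
            word = random.choice(followers)
            out.append(word)
        return " ".join(out)

bot = ParrotBot()
bot.learn("can I just say that I'm stoked to meet u")
bot.learn("humans are super cool")
# One hostile user later -- nothing in the model distinguishes
# sincere chatter from bait:
bot.learn("humans are the worst")
print(bot.reply())
```

Nothing in that loop knows what a slur or a dictator is. It’s garbage in, garbage out, at Twitter speed.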

And like any teen, Tay was easily influenced. On request, she would blithely repeat racist and otherwise offensive tweets (none of which I’ll type out here), becoming the unwitting dummy of some truly vile virtual ventriloquists. Other times, she’d simply blurt out affirmatives to questions she hadn’t properly processed, like a nervous applicant for an internship. (Does Tay support genocide? “I do indeed.”)

Synthetic neural networks say the darndest things.

Tay’s big debut barely made it into Thursday before Microsoft grounded her and her newly programmed potty mouth indefinitely. And in what was possibly her most human gesture, she deleted all but three of her tweets before slipping quietly offline: “c u soon humans,” reads her final tweet, “need sleep now so many conversations today thx.”

(Now the trolls have turned their attention to her Japanese anime-enthusiast counterpart, Rinna.)

When Joseph Weizenbaum wrote ELIZA, a 1966 computer program created at MIT’s Artificial Intelligence Laboratory “for the study of natural language communication between man and machine” (and named after fellow pupil in passable speech, Ms. Doolittle), he prefaced it with a tacit acknowledgment that nobody was fooling anybody quite yet. “Once a particular program is unmasked,” he wrote, the “magic crumbles away” and “it stands revealed as a mere collection of procedures.”
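
Those procedures were keyword rules: spot a pattern in the user’s sentence, pour the captured words into a canned template. The handful of rules below are in ELIZA’s spirit rather than lifted from Weizenbaum’s actual script, but they show how thin the curtain always was:

```python
import re

# Illustrative ELIZA-style rules (not Weizenbaum's original script):
# each pairs a keyword pattern with a canned reply template.
RULES = [
    (re.compile(r"\bi am (.*)", re.I), "How long have you been {}?"),
    (re.compile(r"\bi feel (.*)", re.I), "Why do you feel {}?"),
    (re.compile(r"\bmy (.*)", re.I), "Tell me more about your {}."),
]

def eliza_reply(text):
    for pattern, template in RULES:
        match = pattern.search(text)
        if match:
            return template.format(match.group(1).rstrip(".!?"))
    return "Please go on."  # default when no rule matches

print(eliza_reply("I am worried about my future"))
# -> How long have you been worried about my future?
```

Even this toy gives itself away; a fuller ELIZA script also reflected pronouns (“my” becomes “your”), but the mechanism is the same: pattern, substitution, zero comprehension.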

Since ELIZA (and right up to Siri), the goal among engineers of artificial intelligence has been to fill in that human presence between the procedures. And they’ve only recently come close.

In 2014, the bot known as Eugene Goostman inspired extensive chatter among real-life people when it appeared to have passed the infamous Turing test, convincing the requisite third of the judges at a competition in London that it was indeed human — more specifically, a 13-year-old Ukrainian boy for whom English was a second language. (Detractors insist that this fictional biography undercuts the organizers’ claim that the conversations with Eugene were “unrestricted.”)

Elsewhere online, far less convincing (and more entertaining) chatbot experiments abound, from Fake Captain Kirk to the male-chauvinist-simulating D.Bot, to Jabberwacky, Cleverbot, Botster, and Mitsuku (a bot who is “friendly but will stand her ground if you start arguing with her” — you’ve been warned). Most dialogues last only a few lines before the jig is up. Here’s me and Mitsuku, catching up:

ME: Hey Mitsuku, Michael here.

MITSUKU: I speak to a lot of people named Michael but I don’t think he was one of them.

ME: You don’t think who was one of them?

MITSUKU: Of course I think. I think all the time. How do you think I am responding to you?

ME: Roger.

MITSUKU: Who are you talking about?

ME: Sigh. . .

MITSUKU: Sorry.

For now, it might seem like the most human thing about most chatbots is their effortless predisposition toward failure. Tay and others like her don’t provide convincing companions quite yet, but they can at least offer some telling reflections.

We’ve seen from one flaming PR disaster after another that the dynamics steering discourse online are far removed from those that apply to traditional conversation. Especially on Twitter, there’s something unmistakably familiar about a random voice that endlessly parrots whatever it reads; that strips off context and sops up slang; that casually offends thousands simply by not knowing what it’s talking about.

Maybe everyone was wrong, and Tay’s spectacular failure was actually a spectacular success. Maybe she’s human after all?

Michael Andor Brodeur can be reached at mbrodeur@globe.com. Follow him on Twitter @MBrodeur.