Ken Sutton served his country as a US Army Ranger. But these days he’s chief executive of a Boston technology startup, and he is doing a lot of soul-searching about whether to do business with the US intelligence community.
“I grapple with that every day,” said Sutton, acknowledging that his company, Yobe, makes an artificial intelligence system that could be used “in some very nefarious ways.”
Sutton and other Massachusetts executives are part of a larger debate within the nation’s technology community over the ethics of letting government agencies involved in surveillance or enforcement activities use their inventions for purposes they might find objectionable. In an extraordinary show of dissent, employees of Google, Microsoft, Amazon.com, and Salesforce.com have demanded their companies refuse to do business with police departments, immigration authorities, or the US military.
One Boston company, Affectiva, decided early on that it wouldn’t sell the government its technology, which uses facial recognition to detect a person’s emotional state.
“I’ve been pretty vocal from day one that we’re not doing this,” said Rana el Kaliouby, who launched Affectiva in 2009.
Much of the recent outrage is driven by furious opposition to the policies of President Trump. Even before Trump took office, his proposal to prohibit Muslims from entering the United States prompted thousands of technology workers to sign a pledge that they would never work on systems that could enforce such a ban. The tech backlash has grown more intense since then, reaching new heights after Trump’s controversial decision to separate immigrant children from their parents at the US border, a policy that has since been rescinded.
“The Trump administration’s activities at the border, and frankly their naked racism, has appalled people across the country and has spurred the tech sector to action,” said Kade Crockford, director of the Technology for Liberty program at the American Civil Liberties Union of Massachusetts.
In the case of Google, the protest worked. The Internet giant in June said it would no longer work on artificial intelligence systems that help military drones conduct aerial surveillance or attack targets.
Other protests have had mixed results. Amazon, for example, has so far declined a request from some employees to stop selling its facial recognition technology to police departments. Critics fear police will use the systems to track people without legal justification.
Microsoft and Salesforce have also resisted calls from employees to abandon business deals with US Immigration and Customs Enforcement, or ICE, one of the federal agencies involved in the family separation controversy. The movement has even spread beyond the technology sector.
McKinsey & Co., the giant management consulting firm, earlier this week said it will no longer do business with ICE, after employees objected.
In this corporate rebellion, the nerds have been leading the way. In many US companies, labor unions are weak or nonexistent, and workers are rarely in a position to challenge their employers’ business practices. Not so for the prized engineers and scientists at technology firms.
“Tech workers occupy a key position in today’s economy, and so they have a certain kind of power,” said Sasha Costanza-Chock, associate professor of civic media at MIT.
Even without goading from employees, some tech company leaders, such as Ken Sutton of Yobe, are fretting about how their innovations might be used — or abused — by governments.
Yobe is making audio processing software that can identify a specific human voice in a roomful of talkers. The technology has plenty of commercial applications — hearing aids, for example, or speech-controlled devices that would answer to just one member of a family.
Sutton said Yobe’s technology could also be used in the military, to enable clearer communication among groups of soldiers on a battlefield. Indeed, he suggested the idea to officials at the Defense Department. But he was taken aback when members of the intelligence community reached out to the company. Sutton said he fears that investigators could use Yobe technology to eavesdrop on private conversations of citizens, without their knowledge or consent.
Sutton said his time in the military taught him to be wary of the government. “I know what we say we do versus what I’ve seen us do,” he said. “It’s not always consistent.”
But he stressed his misgivings are his alone for now and don’t reflect the policy of the company.
It’s easy to imagine, meanwhile, how technology from Affectiva could be used by a government agency in, say, the interrogation of a suspected terrorist.
The technology analyzes facial movements and expressions to gauge emotions, and is used by major corporations to test the effectiveness of TV ads.
In 2011, a venture capital firm backed by the CIA offered to invest in Affectiva. El Kaliouby was tempted.
“We were running out of money,” she recalled. Nevertheless, she said no. “Your emotions are very personal, as personal as your data gets. And we wanted people to trust us. . . . We didn’t want to be a Big Brother.”
Ironically, some of Affectiva’s employees have pushed back against her policy, arguing that if used properly, the company’s technology could help protect the public.
“I always want to listen with an open mind,” said el Kaliouby, “but I’m still not there.”
The pressures on tech companies are likely to increase as activists step up their resistance efforts.
Groups with names like the Tech Workers Coalition and Science for the People are vowing to organize dissatisfied workers to refuse to work on questionable government projects, and have adopted a Twitter hashtag, #TechWontBuildIt.
“If they won’t build it,” said Costanza-Chock of MIT, “it’s not going to happen.”