
A Milton resident’s lawsuit against CVS raises questions about the use of AI lie detectors in hiring


It’s illegal for employers in Massachusetts to use a lie detector to screen job applicants, but what if a company uses artificial intelligence to help assess a candidate’s honesty?

Does it fall into the same category as old-school polygraph tests, pinpointing increased perspiration and skittering heart rates?

And is it unfair for employers to use machines to help evaluate a person’s integrity? Or is it more fair than relying solely on the subjective judgment of humans?

These are the questions surrounding a class-action lawsuit filed last month in Suffolk Superior Court against CVS Health Corp. by Milton resident Brendan Baker, who failed to get a job at the Rhode Island-based drugstore chain after completing an AI-assisted video interview conducted using the platform HireVue, according to the complaint. Baker is the named plaintiff “on behalf of all others similarly situated.”

The use of artificial intelligence is spreading through the employment landscape, fueling questions about the role emerging technology plays in the workplace, and the potential harm it could cause. Calls for more rigorous testing and regulations have begun, and government officials are scrambling to get ahead. The White House and several federal agencies recently announced their commitment to scrutinizing artificial intelligence at work, and the US Equal Employment Opportunity Commission is urging employers to analyze technology used to make employment decisions to ensure it's not discriminatory, warning that they may be responsible for actions recommended by those tools, such as who is hired, promoted, or fired.

It’s unclear how laws that have been on the books for decades apply to these technological advances, and the more cases that emerge to test these laws, the better, said Courtney Hinkle, an employment lawyer in Washington, D.C., who has studied AI in hiring.

“We’re always looking for new ways to improve the hiring process, to make it more fair, to reduce subjective bias,” Hinkle said. “Employers are always concerned about the inflating of past experiences.”

But just how much artificial intelligence can help — or hinder — remains to be seen.

Like a number of other organizations, including T-Mobile, Delta Air Lines, and the Boston Red Sox, CVS has used the video-interviewing platform HireVue to screen job seekers. In about a third of its interviews, HireVue uses AI technology to analyze applicants’ “integrity and honor,” according to the HireVue blog, to help companies “scale your lie detection” and “screen out embellishers.”

At the time Baker applied for a supply chain job at CVS around January 2021, HireVue’s AI-enhanced interviews analyzed facial expressions, eye contact, tone of voice, and inflection, according to the complaint, relying on technology developed by the Boston company Affectiva, which was spun out of the MIT Media Lab. Visual and audio analysis have since been eliminated, HireVue said, but machine learning is still used to score applicants’ abilities through their transcribed answers.

Federal law has prohibited most private employers from using lie detectors to select employees since 1988, and the Massachusetts law goes even further, forbidding all employers from using a polygraph or any other device, mechanism, or instrument to “assist in or enable the detection of deception” as a condition of employment.

CVS’s use of HireVue’s AI-assisted screening of Massachusetts applicants violates state law, according to the complaint, which notes that HireVue records candidates responding to a list of questions that could include ones pertaining to honesty, such as: “Tell me about a time that you acted with integrity” and “What would you do if you saw someone cheating on a test?”

Baker’s lawyers declined to comment, as did CVS.

In a statement, HireVue’s chief data scientist Lindsey Zuloaga said: “Our assessments are not, and have never been, designed to assess the truthfulness of a candidate’s response.” Instead, Zuloaga said, HireVue uses tools based on “validated industrial organizational psychology” to help human hiring managers evaluate whether an applicant’s answers are “statistically linked to important work-related competencies” while mitigating human biases. This is a more reliable and scientific way to focus on skills than “simply believing what is written in a CV as they could be inflated by the writer,” the company said.

The AI understands the meaning of candidates’ answers, according to HireVue’s explanation of its assessments, and considers the relative weight of words; job seekers who use the word “team,” for example, boost their scores on teamwork. The program can also score responses against specific competencies identified for each job, like problem-solving and communication.
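For illustration only, the word-weighting idea described above can be thought of as a simple per-competency keyword score. The sketch below is a hypothetical, heavily simplified approximation and is not HireVue's actual system, which is proprietary and relies on far more sophisticated language models; the competency names, keywords, and weights are invented for the example.

```python
# Hypothetical sketch of scoring a transcribed answer against competencies.
# NOT HireVue's implementation: competencies, keywords, and weights are
# invented for illustration only.
import re
from collections import Counter

# Invented keyword weights per competency (assumption, not real data).
COMPETENCY_KEYWORDS = {
    "teamwork": {"team": 2.0, "collaborate": 1.5, "we": 0.5},
    "problem_solving": {"solve": 1.5, "analyze": 1.5, "root": 1.0, "cause": 1.0},
    "communication": {"explain": 1.5, "listen": 1.5, "present": 1.0},
}

def score_answer(transcript: str) -> dict[str, float]:
    """Return a per-competency score based on weighted keyword counts."""
    words = Counter(re.findall(r"[a-z']+", transcript.lower()))
    return {
        competency: sum(weight * words[word] for word, weight in keywords.items())
        for competency, keywords in COMPETENCY_KEYWORDS.items()
    }

if __name__ == "__main__":
    answer = ("Our team missed a deadline, so we met to analyze the root "
              "cause, and I helped explain the new plan to the client.")
    print(score_answer(answer))
    # {'teamwork': 2.5, 'problem_solving': 3.5, 'communication': 1.5}
```

A production system would learn such weights from training data rather than set them by hand, which is why the quality and balance of that data matters so much.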

HireVue was also named in a recent lie-detection lawsuit against Framingham-based TJX Companies, a nearly identical claim filed by the same lawyers that the plaintiff later voluntarily dismissed. TJX declined to comment.

Lie detectors were in widespread use by employers in the 1980s, with an estimated 2 million job applicants and employees forced to take polygraph tests by 1985, according to Hinkle’s law school research paper, “The Modern Lie Detector,” published in the Georgetown Law Journal in 2021. The Massachusetts law barring these tests has a broad definition of what constitutes a lie detector, said Monica Shah, an employment lawyer at Zalkind Duncan & Bernstein in Boston, which could lead to more challenges as the use of AI grows. Shah is especially worried that employers could use AI as a way to deflect responsibility for decisions involving workers.

“There’s a concern that there’s going to be a lack of accountability and ownership for decision-making that is done through an AI technology,” she said.

And for all the non-subjective, unbiased analysis AI is supposed to provide, it’s only as fair as the data behind it. In 2018, for instance, it was reported that Amazon had scrapped an AI recruiting tool after discovering that the system for rating candidates for technology jobs favored men over women. It turned out the resumes the machines had been trained to analyze, from candidates who had previously applied for those types of roles, were predominantly from men.

Still, AI employment companies are popping up all over, promising fast, efficient, unbiased talent acquisition from recruiting to hiring. It’s important that employers who use these services invest in the proper compliance and training and are transparent with job candidates, said Tracy Westcott, founder of the Swampscott recruiting consulting firm Talent Track Solutions. Westcott also cautioned against using AI too early in the process, before a human has determined if the candidate is a good fit based on an application and resume — though she thinks those initial screenings will soon also largely be automated.

Naveen Bhateja, chief human resources officer at the New York life science platform Medidata Solutions, who speaks frequently about AI in the workplace, warned that companies need to proceed with caution to avoid concerns over privacy, accuracy, and fairness, especially when it comes to assessing “complex and multifaceted” human emotions.

When it comes to evaluating truthfulness, the science simply doesn’t exist, said Leonard Saxe, a social psychologist at Brandeis University whose work on lie detection aided Congress before the passage of the 1988 Employee Polygraph Protection Act. There’s no “smoke alarm” that goes off in the brain when you lie, he said, and based on what we know, there’s no way for an automated system to distinguish a falsehood from the truth.

Assessing honesty also involves understanding context, he said. Take George Santos and Donald Trump: “They’ve told the lies that they tell so many times that I think you’d be hard-pressed to figure out whether there’s any sign that they’re deceptive.”

The one-way nature of recorded video interviews also eliminates human interaction, Hinkle noted. Without social cues and conversational banter, candidates may come across as awkward or unnerved, which could be misinterpreted by AI.

“You’re kind of talking into a void a bit,” she said. “Are they going to pick up on that uncertainty? Is that looking dishonest or deceptive in a way?”

“There’s just something lost in terms of the human element.”


Katie Johnston can be reached at katie.johnston@globe.com. Follow her @ktkjohnston.