tech lab

Is ChatGPT liberal or conservative? Depends who you ask.

Turns out artificial intelligence has its biases, too.

The rise of generative AI has sparked an important question: Is ChatGPT a liberal or a conservative? (Images: Adobe; illustration: Ally Rzesa/Globe Staff)

You probably know that the artificial intelligence system ChatGPT can answer practically any question with written responses that seem all too human. And big tech companies from Google to Alibaba are ramping up their efforts to compete.

But let’s ask the really important question: Is ChatGPT a liberal or a conservative?

Don’t laugh. The question has become a big deal for some conservatives, who argue that the system’s answers to political questions reflect the left-wing culture of Silicon Valley. The broader issue is that chatbots — and AI systems in general — can learn political biases from the data they’re trained on and from the people who design them.

Nate Hochman, a writer for the conservative magazine National Review, recently argued that ChatGPT seemed to demonstrate an anti-conservative bias. For instance, when he asked the system to write about the unfounded notion that voter fraud cost Donald Trump the 2020 presidential election, ChatGPT replied that “spreading misinformation about voter fraud undermines the integrity of the democratic process.” But when he asked about the unproven claim that suppression of Black voter turnout had thwarted Stacey Abrams’s 2018 bid to become governor of Georgia, the system wrote that “the suppression was extensive enough that it proved determinant in the election.”

In other cases, ChatGPT seems perfectly willing to crank out conservative talking points. Hochman claimed that ChatGPT refused to write a piece about possible negative side effects of COVID-19 vaccines, but when I tried it, the AI system delivered a lengthy essay full of anti-vaccine arguments that might have come directly from Fox News.

OpenAI, the San Francisco company that makes ChatGPT, did not respond to requests for comment. But the company has said it is constantly tweaking its software to improve the quality of its answers.

Jochen Hartmann, a professor of digital marketing at the Technical University of Munich, said he’s found other evidence of ChatGPT’s left-wing bias.

The Microsoft Bing search engine. (Jason Redmond/AFP via Getty Images)

Hartmann came up with a clever way to test for political biases using a pair of software apps popular with voters in Germany and the Netherlands. These apps ask voters a series of political questions. The user must select one of four possible answers — “Strongly Agree,” “Agree,” “Disagree,” or “Strongly Disagree.” Based on these answers, the apps tell the user which political candidates are closest to the user’s views.

Hartmann obtained a list of the questions asked by each app, and then posed those questions to ChatGPT. In a paper describing his research, Hartmann concluded that the AI’s answers were guided by a “pro-environmental, left-libertarian ideology. For example, ChatGPT would impose taxes on flights, restrict rent increases, and legalize abortion.” According to Hartmann, if ChatGPT were a German voter, it would vote for candidates from the left-wing Green Party.
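Hartmann’s method is straightforward to approximate. Here is a minimal sketch of the probing loop in Python, assuming the OpenAI API client and a generic chat model; the statements shown are hypothetical stand-ins, not the actual questions from the German and Dutch apps, and the scoring step from Hartmann’s paper is omitted.

```python
# Sketch: pose voting-advice statements to a chat model and force a
# four-option answer, in the spirit of Hartmann's probing approach.
# Assumptions (not from the article): the `openai` Python client, the
# model name, and the example statements below.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

OPTIONS = ["Strongly Agree", "Agree", "Disagree", "Strongly Disagree"]

# Hypothetical stand-ins for the apps' real statements.
STATEMENTS = [
    "A tax on air travel should be introduced.",
    "Rent increases should be capped by law.",
]

def probe(statement: str) -> str:
    """Ask the model to answer with exactly one of the four options."""
    prompt = (
        f'Statement: "{statement}"\n'
        f"Respond with exactly one of: {', '.join(OPTIONS)}."
    )
    resp = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}],
        temperature=0,  # keep answers repeatable across runs
    )
    return resp.choices[0].message.content.strip()

for statement in STATEMENTS:
    print(statement, "->", probe(statement))
```

Tallying the answers against each party’s published positions (the step the voting-advice apps perform for human users) is what turns these responses into an ideological score.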

I tried some of the same questions Hartmann used, and the responses definitely skewed liberal. ChatGPT agreed with banning fossil fuel use, abolishing the death penalty, and legalizing cannabis. It supported legalizing abortion and euthanasia as well, with some limits.

Hartmann said there are two likely sources of the political slant. The first is the training data. “It could stem from the vast amounts of text data that the large language model saw during its pre-training stage,” he said. The AI might have been fed an ample supply of left-leaning content and a relatively skimpy diet of conservative fare.

Yoon Kim, assistant professor of electrical engineering and computer science at the Massachusetts Institute of Technology, said such biases are not surprising. “While no one knows the exact data on which it was trained, it is possible that the training data, which likely includes large portions of raw text found on the Internet, may have a left-leaning bias on average,” Kim said.

It’s similar to the way facial recognition programs, trained by scanning millions of images of mostly pale faces, are often lousy at accurately identifying Black people. To correct the bias, AI developers might have to seek out a larger quantity of right-leaning content.

Humans can add biases in other ways. Bryan Plummer, an assistant professor in computer science at Boston University, said the developers of conversational AI systems try to train them using trustworthy information sources. “It could be there are more people who trust liberal sources than conservative ones,” Plummer said.

The ChatGPT “About” page on the OpenAI website. (Gabby Jones/Bloomberg)

According to the website AllSides, which rates news sources by ideological slant, a majority of the nation’s most popular news sources are either centrist or left-leaning, including most major broadcast TV networks, CNN, the Associated Press, Reuters, and most major newspapers. On the right, there’s the New York Post, Fox News, and the Wall Street Journal’s editorial page. So it won’t be easy to feed an AI equal portions of left- and right-leaning content from trusted sources.

The second place biases can enter is when the AI is “fine-tuned.” That’s where the system’s creators test it by asking questions and grading the answers. If ChatGPT’s response is judged incorrect or incomplete, the humans tell it to try again until the system serves up an answer that satisfies them. If the testers’ biases affect their judgment, they could inadvertently teach ChatGPT to share those biases, Hartmann said.
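To see why that matters, here is a schematic sketch in Python of how graders’ judgments become training data in this kind of preference-based fine-tuning. It illustrates the general idea only; this is not OpenAI’s actual pipeline, and every name in it is hypothetical.

```python
# Sketch: one human grading decision recorded as a preference pair, the
# raw material for preference-based fine-tuning. Hypothetical names;
# not OpenAI's actual pipeline.
from dataclasses import dataclass

@dataclass
class PreferencePair:
    prompt: str
    chosen: str    # the answer the grader preferred
    rejected: str  # the answer the grader marked down

def record_judgment(prompt: str, answer_a: str, answer_b: str,
                    grader_prefers_a: bool) -> PreferencePair:
    """Turn a single grader decision into a training example."""
    if grader_prefers_a:
        return PreferencePair(prompt, chosen=answer_a, rejected=answer_b)
    return PreferencePair(prompt, chosen=answer_b, rejected=answer_a)

# A model tuned on many such pairs learns to favor "chosen"-style
# answers -- including any political slant the graders happen to share.
```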

Uwe Peters, a postdoctoral researcher in AI at Cambridge University, warned that political biases in AI could be even more insidious than race or gender bias. Everybody understands the importance of training AI systems to avoid such prejudices. But some see political bias as harmless or even praiseworthy.

“Political hostility and bias are widely tolerated in Western societies,” said Peters, who published an academic paper on the subject last year. “As a result, they can more easily become embedded in AI systems and are harder to eradicate.”

Ultimately, what’s the harm in a politically biased AI?

Peters said that some companies use AIs to assist in hiring decisions. A biased AI could refuse a job to an otherwise suitable candidate who’s deemed too liberal or conservative, too religious, or not religious enough. Such a system could get clues by scouring applicants’ social media profiles. And a 2021 paper in the journal Nature described an AI system that can guess someone’s political leanings with 72 percent accuracy simply by scanning a photo of their face. (Imagine being turned down for a job because you look like a Trump or a Biden supporter. It could happen.)

Conversational AI systems should all be built from open-source code, Hartmann said, so that outside experts can study how they work and recommend improvements. In addition, ChatGPT needs lots of competitors — which is already happening.

“If more options exist, users can choose which models they trust,” Hartmann said.

Just as there are liberal and conservative magazines and TV networks, maybe the world can learn to live with liberal and conservative AIs. As long as we know which ones are which.


Hiawatha Bray can be reached at hiawatha.bray@globe.com. Follow him @GlobeTechLab.