
Algorithmic bias isn’t just unfair — it’s bad for business

If it’s not deployed wisely, artificial intelligence can turn consumers off.

The Federal Trade Commission is looking more closely at whether algorithmic lending systems violate credit-reporting laws. But companies might have their own reasons to tread carefully with the technology. (Alex Brandon/Associated Press)

Over the last few months, US and European regulators have signaled that they may start cracking down on one of the biggest ethical problems with artificial intelligence: the potential for algorithms to perpetuate discrimination.

The US Federal Trade Commission warned that companies using biased algorithms may run afoul of consumer protection laws like the Fair Credit Reporting Act. The Federal Reserve, the Consumer Financial Protection Bureau, and other American financial regulators asked for public comments on how banks are using AI. The EU released new rules governing the use of AI for decisions ranging from hiring to lending to law enforcement, all of which are areas ripe for bias.

These moves respond to growing concerns that algorithms have been reproducing discrimination in situations such as home lending, the allocation of health care, and decisions about who deserves parole. Many people hoped machines could help us make fairer decisions, but as the use of AI has exploded, it has become clear that all too often they simply replicate and even amplify our existing prejudices.

An important part of the story has been missing, however. It’s one that might make businesses more amenable to regulation or even preclude the need for it by motivating them to act on their own. Algorithmic bias is not only a pressing ethical and societal concern — it’s also bad for business.

My research shows that over time, word of mouth about algorithmic bias among customers will hurt demand and sales and cut into profits. This damage won’t just hit a few unlucky companies that find themselves embroiled in public controversy around algorithmic discrimination. It can occur even if the inner workings and biases of an algorithm remain invisible to the public.

To understand how this can happen, consider one tech giant’s failed attempts at algorithmic design. In 2014, Amazon launched an internal tool to evaluate resumes. Although the algorithm was not programmed to look at the gender of the job applicants, it was trained using data from the company’s previous decade of hiring decisions, and the applications in that period mainly came from men. Based on past patterns, the algorithm learned to downgrade resumes that mentioned certain women-only colleges or women’s sports or clubs.

Amazon dropped that tool once these biases were discovered, but companies still widely use algorithms for recruiting and hiring. Not only are employers potentially missing out on valuable candidates, but over time these losses will compound through word of mouth. People learn about opportunities from members of their social circles, who often have race, age, gender, and other demographic characteristics in common. When women hear that their female friends and colleagues have been passed over for jobs at a particular company, they are less likely to apply, even if they know nothing about why these other candidates were rejected.

Using group characteristics to make decisions about whether and how to provide services to individual consumers may seem logical in some cases and may even be profitable in the short term. For example, a property manager might believe there are legitimate business reasons to choose tenants based on their age or education level. But my research, which uses computational methods to simulate consumer behavior, shows that these types of “group-aware” algorithms will tend to become less profitable over time.

In a study I conducted with Roland Rust, we simulated how customers would respond to two banks. One bank is “group-aware”: It sets different loan-approval thresholds for members of different groups. For example, women might have to meet a higher standard than men to get a loan. The other bank in the model is “group-blind”: It has the same approval threshold for every applicant.

Our model indicates that most members of the favored group meet the loan threshold at both banks, so they are likely to apply to either. But members of the group being discriminated against learn from one another to avoid the group-aware bank in favor of the group-blind one. Furthermore, members of the group experiencing discrimination also influence some members of the favored group to avoid the group-aware bank. As time passes, there is a net movement of customers toward the group-blind bank, hurting the profitability of the group-aware bank.
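
To make the mechanism concrete, here is a minimal agent-based sketch in Python. It is not the model from our study: the two groups, the thresholds, and the word-of-mouth weights are invented for illustration, and the “passed over” signal is a simplified stand-in for how consumers learn that peers were treated unfairly.

```python
# Illustrative only: invented thresholds, populations, and update weights.
import numpy as np

rng = np.random.default_rng(1)
N, ROUNDS = 5_000, 20
THRESHOLD_BLIND = 0.5                    # group-blind bank: one bar for everyone
THRESHOLD_AWARE = {"A": 0.5, "B": 0.7}   # group-aware bank: higher bar for group B

# Each consumer has a creditworthiness score and a current inclination
# (probability) to apply to the group-aware bank first.
score = {g: rng.uniform(0, 1, N) for g in ("A", "B")}
pref_aware = {g: np.full(N, 0.5) for g in ("A", "B")}

for t in range(ROUNDS):
    passed_over = {}
    for g in ("A", "B"):
        goes_aware = rng.uniform(0, 1, N) < pref_aware[g]
        # "Passed over": rejected by the group-aware bank despite meeting the
        # standard the group-blind bank applies to everyone.
        passed_over[g] = (goes_aware
                          & (score[g] >= THRESHOLD_BLIND)
                          & (score[g] < THRESHOLD_AWARE[g])).mean()
    for g, other in (("A", "B"), ("B", "A")):
        # Word of mouth: mostly within-group, with a smaller cross-group spillover.
        pref_aware[g] = np.clip(
            pref_aware[g] - 0.4 * passed_over[g] - 0.1 * passed_over[other],
            0.05, 0.95)
    print(f"round {t:2d}  inclination toward the group-aware bank: "
          f"A={pref_aware['A'].mean():.2f}  B={pref_aware['B'].mean():.2f}")
```

In this toy setup, group B’s inclination to use the group-aware bank falls round after round, and group A drifts away more slowly through the spillover term, reproducing the net movement toward the group-blind bank described above.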

In short, when consumers learn from one another that a company is less likely to serve them, even if the discrimination is unintentional, they’ll avoid that company and it’ll lose revenue.

Algorithms often become group-aware when they aren’t intended to be. AI teases out correlations in the data that serve as stand-ins for group membership. For example, in our geographically segregated society, ZIP codes and other location data are a common proxy for race. Ride-sharing companies ran into this problem when a study revealed that their location-based pricing algorithms charged customers more for rides to or from neighborhoods primarily occupied by people of color. In other words, programming an AI system to ignore people’s gender or race, or leaving this information out of the data set entirely, isn’t enough to ensure an algorithm is group-blind.
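
To see how a proxy does this, consider a small illustrative sketch with synthetic data and made-up feature names: a model trained without the group label, but with a location feature that tracks group membership, still reproduces a historical approval gap.

```python
# Illustrative only: synthetic data and hypothetical feature names.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 20_000

# Protected attribute (never shown to the model): 0 = group A, 1 = group B.
group = rng.integers(0, 2, n)

# A location feature strongly correlated with group membership (the proxy).
location = group * 0.9 + rng.normal(0, 0.3, n)

# A genuinely relevant feature, identically distributed across groups.
payment_history = rng.normal(0, 1, n)

# Historical approvals that were biased against group B.
approved = (payment_history - 0.8 * group + rng.normal(0, 0.5, n)) > 0

# Train WITHOUT the group column: only the proxy and the relevant feature.
X = np.column_stack([location, payment_history])
model = LogisticRegression().fit(X, approved)

preds = model.predict(X)
print("Predicted approval rate, group A:", round(preds[group == 0].mean(), 3))
print("Predicted approval rate, group B:", round(preds[group == 1].mean(), 3))
# The gap persists: the model reconstructs group membership from the proxy.
```

The two predicted approval rates diverge even though the group label never appears among the model’s inputs; dropping the protected attribute changed nothing, because the location feature carried the same information.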

What can companies do to make algorithms treat people fairly? Here are three key steps they can take:

1. Rather than removing group identifiers, businesses should include demographic characteristics in their data so they can continually audit their algorithms to determine whether they inadvertently discriminate against certain groups. There are a number of tools to evaluate whether bias is creeping in. IBM’s AI Fairness 360 is an open-source toolkit that helps detect bias in machine learning models (a brief auditing sketch follows this list). Microsoft’s FATE research group produces reports and tools aimed at reducing bias and increasing transparency and accountability in AI.

2. Companies can model how their systems’ decisions will affect demand over the long run among consumers who learn that some groups are treated differently. For example, if a bank used a model similar to the one in my study, it could easily see the long-term impact of a group-aware algorithm for making loans.

3. Whenever possible, algorithms should be designed to make decisions using context-specific data about individuals — looking at someone’s bill payment frequency in loan decisions, for example, or a patient’s cholesterol levels in health care, or a student’s grades in education — rather than trying to infer such information from other data points like their education level or where they live. The data used to train the algorithm matters, too. Training on data that represents a wide variety of consumers allows algorithms to better evaluate individuals on their own merits.
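
As an illustration of the first step, here is a minimal auditing sketch using IBM’s AI Fairness 360 toolkit on a synthetic table of loan decisions. The column names and numbers are hypothetical, and which fairness metric matters, and what counts as an acceptable value, depends on the business and legal context.

```python
# A minimal audit sketch with AI Fairness 360 (pip install aif360).
# Synthetic data and hypothetical column names.
import numpy as np
import pandas as pd
from aif360.datasets import BinaryLabelDataset
from aif360.metrics import BinaryLabelDatasetMetric

rng = np.random.default_rng(0)
n = 10_000
sex = rng.integers(0, 2, n)                  # 1 = privileged group (illustrative)
income = rng.normal(50 + 5 * sex, 10, n)     # synthetic feature
approved = (income + rng.normal(0, 5, n) > 55).astype(int)

df = pd.DataFrame({"sex": sex, "income": income, "approved": approved})

dataset = BinaryLabelDataset(
    df=df,
    label_names=["approved"],
    protected_attribute_names=["sex"],
    favorable_label=1,
    unfavorable_label=0,
)
metric = BinaryLabelDatasetMetric(
    dataset,
    privileged_groups=[{"sex": 1}],
    unprivileged_groups=[{"sex": 0}],
)

# Disparate impact: ratio of favorable-outcome rates (unprivileged / privileged).
# Statistical parity difference: the gap between those rates. Values far from
# 1.0 and 0.0, respectively, flag a disparity worth investigating.
print("Disparate impact:        ", round(metric.disparate_impact(), 3))
print("Statistical parity diff.:", round(metric.statistical_parity_difference(), 3))
```

The key point of the pattern is that the demographic column stays in the audit data, so disparities can be measured and tracked over time rather than hidden by deleting the field.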

Algorithms can lead to fairer outcomes, but only if they are designed and managed carefully. As computers increasingly make influential decisions about our lives, from the health care and financial services we receive to our educational and career prospects, we must remain alert to the potential for bias. There are strong ethical and moral reasons to do so, but there is also a business case to be made. We need to make sure companies understand how algorithmic bias can hurt their bottom lines.

Kalinda Ukanwa is assistant professor of marketing at the University of Southern California’s Marshall School of Business.