
Facebook bans hate speech but still makes money from white supremacists


Last year, a Facebook page administrator put out a clarion call for new followers: They were looking for “the good ole boys and girls from the south who believe in white [supremacy].” The page — named Southern Brotherhood — was live on Tuesday afternoon and riddled with photos of swastikas and expressions of white power.

Facebook has long banned content referencing white nationalism. But a plethora of hate groups still populate the site, and the company boosts its revenue by running ads on searches for these pages.

A new report from the Tech Transparency Project, a nonprofit tech watchdog, found 119 Facebook pages and 20 Facebook groups associated with white supremacy organizations. Of 226 groups identified as white-supremacist organizations by the Anti-Defamation League, the Southern Poverty Law Center, and a leaked version of Facebook’s dangerous organizations and individuals list, more than a third have a presence on the platform, according to the study.

Released Wednesday and obtained exclusively by The Washington Post, the report found that Facebook continues to serve ads against searches for white-supremacist content, such as the phrases “Ku Klux Klan” and “American Defense Skinheads.” Civil rights groups have long criticized the practice, arguing that the company prioritizes profits over the dangerous impact of such content.

The findings illustrate the ease with which bigoted groups can evade Facebook’s detection systems, despite the company’s years-long ban against posts that attack people on the basis of their race, religion, sexual orientation and other characteristics.

Activists have charged that by allowing hate speech to proliferate across its networks, Facebook opens the door for extremist groups to organize deadly attacks on marginalized groups. In the wake of several high-profile incidents in which alleged mass shooters shared prejudiced beliefs on social media, the findings add to the pressure on Facebook to curb such content.

“The people who are creating this content have become very tech savvy, so they are aware of the loopholes that exist and they’re using it to keep posting content,” said Libby Hemphill, an associate professor at the University of Michigan. “Platforms are often just playing catch up.”

Facebook, which last year renamed itself Meta, said it was conducting a comprehensive review of its systems to make sure ads no longer show up in search results related to banned organizations.

“We immediately began resolving an issue where ads were appearing in searches for terms related to banned organizations,” Facebook spokesperson Dani Lever said in a statement. “We will continue to work with outside experts and organizations in an effort to stay ahead of violent, hateful, and terrorism-related content and remove such content from our platforms.”
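Mechanically, the fix Lever describes amounts to screening each search query against a list of banned-organization terms before ads are attached to the results. A minimal sketch of that kind of filter in Python (the function names and the two-entry term list are hypothetical, not Facebook’s actual implementation):

```python
# Illustrative sketch only: the term list, names and matching logic
# are hypothetical and do not reflect Facebook's internal systems.

BANNED_ORG_TERMS = {
    "ku klux klan",
    "american defense skinheads",
}

def normalize(query: str) -> str:
    """Lowercase and collapse whitespace so trivial variants still match."""
    return " ".join(query.lower().split())

def should_serve_ads(query: str) -> bool:
    """Suppress ads when a search query references a banned organization."""
    q = normalize(query)
    return not any(term in q for term in BANNED_ORG_TERMS)

print(should_serve_ads("running shoes"))        # True: ads allowed
print(should_serve_ads("Ku  Klux Klan pages"))  # False: ads suppressed
```

A filter this literal also illustrates Hemphill’s point about loopholes: a misspelling or a renamed group would slip straight past it.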

Facebook bars posts that attack people on the basis of their race, religion, and sexual orientation, including any dehumanizing language or harmful stereotypes. In recent years, the company has expanded its hate speech policy to include calls for white separatism or the promotion of white nationalism. It also bans posts designed to incite violence.

For years, Facebook has faced criticism from civil rights activists, politicians, and academics that it wasn’t doing enough to fight racism and discriminatory content on its platform.

“The stakes of inaction continue to be life-and-death,” said Color of Change President Rashad Robinson, whose group will release a petition Wednesday calling on Facebook to strengthen its systems to fight hateful content.

Activists in particular have clashed with the company, arguing in public and private conversations that Facebook’s enforcement of its hate speech ban is weak and that the company allows powerful people to violate its rules with few consequences.

In the summer of 2020, more than 1,000 companies joined an advertiser boycott to push Facebook to rid its social networks of hateful content such as white supremacy groups. In response, Facebook executives repeatedly said the company doesn’t allow hate speech on its platform or seek to profit from bigotry.

Yet internally, Facebook’s own researchers found that hate speech reports surged that summer in the wake of the widespread outrage over a police officer’s killing of George Floyd in Minnesota, according to a trove of internal documents surfaced by Facebook whistleblower Frances Haugen. That same summer, an independent civil rights audit offered a searing critique of the platform, arguing that Facebook’s hate speech policies were a “tremendous setback” when it came to protecting its users of color.

“The civil rights community continues to express significant concern with Facebook’s detection and removal of extremist and white nationalist content and its identification and removal of hate organizations,” the auditors wrote.

The auditors relied on a 2020 report from the Tech Transparency Project, which found that more than 100 groups identified by the Southern Poverty Law Center or the Anti-Defamation League as white-supremacist organizations had a presence on Facebook.

The Tech Transparency Project, which is part of political watchdog group Campaign for Accountability, has conducted several critical reports on Facebook’s content moderation systems.

For its 2022 report, the Tech Transparency Project examined the white supremacy groups on Facebook’s list of dangerous individuals and organizations, a leaked version of which was published by the investigative news site The Intercept. Nearly half of the groups on Facebook’s own list had a presence on the social network, the report found.

Moreover, the researchers suggested that Facebook’s own automated systems may in some cases fuel hate speech on the platform. Among other functions, those systems scan images, text and video for potential policy violations; separately, Facebook automatically creates a profile page whenever a user lists a job, interest or business that does not have an existing page. Twenty percent of the 119 Facebook pages dedicated to white-supremacist groups identified in the report were estimated to be auto-generated by the company itself.
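That auto-generation behavior can be pictured as a simple lookup-or-create step: when a user lists an employer or interest with no existing page, the platform mints a placeholder page for it, with no content review in between. A hedged sketch of that logic in Python (the names and data structures are invented for illustration, not drawn from Facebook’s code):

```python
# Hypothetical sketch of page auto-generation; names and storage are
# invented for illustration and do not reflect Facebook's implementation.

pages: dict[str, dict] = {}  # existing pages, keyed by normalized title

def get_or_create_page(title: str) -> dict:
    """Return the page for a listed job/interest/business, creating it if absent."""
    key = " ".join(title.lower().split())
    if key not in pages:
        # No review happens at this step: whatever string the user
        # listed becomes a live page, which is the report's concern.
        pages[key] = {"title": title, "auto_generated": True}
    return pages[key]

page = get_or_create_page("Southern Brotherhood")
print(page)  # {'title': 'Southern Brotherhood', 'auto_generated': True}
```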