Facebook said it has spent more than $13 billion on safety and security efforts since the 2016 US election, and now has 40,000 employees working on those issues.
The 40,000 safety and security workers include outside contractors who focus on content moderation, a spokesman said. Facebook said it had over 35,000 safety and security employees in October 2019.
The new statistics, meant to demonstrate how seriously the company takes safety and security issues, were published Tuesday in a blog post after a series of stories last week in the Wall Street Journal used leaked documents to show that, despite hefty investments, Facebook struggles to combat myriad serious issues, including COVID-19 misinformation and illegal human trafficking.
The documents showed that Facebook’s internal researchers often identified serious problems with inappropriate content or user behavior on the company’s services, but Facebook routinely failed to fix them. The stories spurred calls by US lawmakers for an investigation and possibly hearings on the issues.
The blog post addressed some of these criticisms without citing the newspaper’s stories specifically. The company said that while it has historically been responsive to issues on the platform, it’s trying to be more proactive by having safety and security employees embedded in product teams during the development process.
“In the past, we didn’t address safety and security challenges early enough in the product development process,” Facebook said in its blog. “But we have fundamentally changed that approach.”
Facebook also shared new statistics around its global political ad library, an archive where people can search for political ads that run on Facebook or the Instagram photo-sharing app. Facebook said 3 million people use the ad library each month, and the company rejected 3.5 million political or social ad submissions over the first six months of 2021 for failing to provide proper information.
Instagram, the focus of a story last week revealing internal research that showed the company knows its product can be emotionally damaging to young women, said this week that it's considering "nudges" that would prompt users to look at healthier content on the service or take a break from scrolling.
Meanwhile, Facebook’s de facto Supreme Court of content is calling on the social media giant to release more information about how it moderates posts by famous people.
The Facebook Oversight Board said in a statement Tuesday that it has asked Facebook to provide more clarity about a program designed to protect high-profile figures from having their posts mistakenly taken down.
The review by the Oversight Board follows a Wall Street Journal report revealing details about a system Facebook built to exempt high-profile users in politics, popular culture, and journalism from enforcement action over posts that break its rules. The program, known as "cross check," was designed to avoid public relations backlash over famous people who mistakenly have their posts taken down, the newspaper reported. While Facebook told the Oversight Board the program affects only a "small number of decisions," it actually included at least 5.8 million users in 2020, according to the newspaper.
“In light of recent developments, we are looking into the degree to which Facebook has been fully forthcoming in its responses in relation to cross-check, including the practice of whitelisting,” the board wrote. Users who are “whitelisted” don’t face enforcement actions, the newspaper reported.
“These disclosures have drawn renewed attention to the seemingly inconsistent way that the company makes decisions, and why greater transparency and independent oversight of Facebook matter so much for users,” the board said.
The board said it expects to receive a briefing from Facebook in the coming days and will issue a public analysis next month.