
The Wall Street Journal: Facebook relies on AI to clean up its platform, but its own engineers have doubts

Facebook Inc. executives have long said artificial intelligence would address the company's chronic problems keeping hate speech and excessive violence, as well as underage users, off its platforms.

That future is further away than those executives suggest, according to internal documents reviewed by The Wall Street Journal. Facebook's AI can't consistently identify first-person shooting videos or racist rants, and in one notable episode that puzzled internal researchers for weeks, it couldn't tell the difference between cockfighting and car crashes.

On hate speech, the documents show, Facebook employees estimate the company removes only a fraction of the posts that violate its rules – a low single-digit percentage, they say. When Facebook's algorithms aren't certain enough that content breaks the rules to delete it, the platform shows that material to users less often – but the accounts that posted it go unpunished.

The employees were analyzing Facebook's success at enforcing its own content rules, which it spells out in detail internally and in public documents such as its community standards.

The documents reviewed by the Journal also show that two years ago Facebook cut the time human reviewers spent on hate-speech complaints from users and made other tweaks that reduced the overall number of complaints. That made the company more reliant on AI to enforce its rules and inflated the technology's apparent success in its public statistics.

According to the documents, the employees responsible for keeping the platform free of content Facebook deems offensive or dangerous concede that the company is nowhere near able to screen it reliably.

An expanded version of this report appears on WSJ.com.

Also popular on WSJ.com:

Teenage girls develop tics. Doctors say TikTok could be a factor.

Some vaccines last a lifetime. Here's why COVID-19 shots don't.
