Facebook took action against tens of millions of posts, photos and videos over the past six months for violating its rules that prohibit hate speech, harassment and child sexual exploitation, illustrating the vast scale of the tech giant’s task in ridding its services of harm and abuse.

The company revealed the data as part of its latest transparency report, which Facebook said reflected its still-improving efforts to use artificial intelligence to spot harmful content before users ever see it and outwit those who try to evade its censors.

In the second and third quarters of 2019, Facebook said it removed or labeled more than 54 million pieces of content it deemed violent and graphic, 11.4 million posts that broke its rules prohibiting hate speech, 5.7 million uploads that ran afoul of its bullying and harassment policies and 18.5 million items determined to be child nudity or sexual exploitation.

The company also detailed for the first time its efforts to police Instagram, revealing that over the past six months it took aim at 1.2 million photos or videos involving child nudity or exploitation and 3 million that ran afoul of its policies prohibiting the sale of illegal drugs.

In many cases, Facebook found and removed more harmful content in the third quarter than it did in the three months prior, which the company attributed to its efforts to tighten its rules and to search more actively for abusive posts, photos and videos before users report them. It also touted recent improvements in its AI: Facebook detected 80% of the hate speech it removed before users did, a lower rate than in other areas but still an improvement for the tech giant.

Still, the company’s latest transparency report arrives as regulators around the world continue to call on Facebook, and the rest of Silicon Valley, to be more aggressive in stopping the viral spread of harmful content, such as disinformation, graphic violence and hate speech. A series of high-profile failures over the past year have prompted some lawmakers, including Democrats and Republicans in the United States, to threaten to pass new laws holding tech giants responsible for failing to police their sites and services.

The calls for regulation only intensified after the deadly shooting in Christchurch, New Zealand, in March. Video of the gunman attacking two mosques spread rapidly on social media, including Facebook, evading tech companies’ expansive systems for stopping such content from going viral. On Wednesday, Facebook offered new data about that incident, reporting that it had removed 4.5 million pieces of content related to the attack between March 15, the day it occurred, and Sept. 30, nearly all of which it spotted before users reported it.
