Facebook details scale of abuse on its site

Facebook says it deleted or added warnings to about 29 million posts that broke its rules on hate speech, graphic violence, terrorism and sex, over the first three months of the year.

It is the first time that the firm has published figures detailing the scale of efforts to enforce its rules.

Facebook is developing artificial intelligence tools to support the work of its 15,000 human moderators.

But the report suggests the software struggles to spot some types of abuse.

For example, the algorithms only flagged 38% of identified hate speech posts over the period, meaning 62% were only addressed because users had reported them.

By contrast, the firm said its tools spotted 99.5% of detected propaganda posted in support of Islamic State, Al-Qaeda and other affiliated groups, leaving only 0.5% to be flagged by members of the public.

The figures also reveal that Facebook believes users were more likely to have experienced graphic violence and adult nudity via its service over the January-to-March quarter than the prior three months.

But it said it had yet to develop a way to judge if this was also true of hate speech and terrorist propaganda.

"As we learn about the right way to do this, we will improve the methodology," commented Facebook's head of product management, Guy Rosen.

Violent spike

Facebook broke down banned content into several categories:

  • graphic violence

  • adult nudity and sexual content

  • spam

  • hate speech

  • fake accounts

On the latter, the company estimates that about 3% to 4% of all active users on Facebook are fake, and said it had taken down 583 million fake accounts between January and March.

The figures indicate that graphic violence rose sharply - up 183% between the two quarters covered by the report.

It said that a mix of better detection technology and an escalation in the Syrian conflict might account for this.

A total of 1.9 million pieces of extremist content were removed between January and March, a 73% rise on the previous quarter.

That will make promising reading for governments, particularly in the US and UK, which have called on the company to stop the spread of material from groups such as Islamic State.

  • 'Hate speech button' causes confusion

  • Facebook expels alt-right figurehead

  • Tech firms to remove extremist posts within hours

"They're taking the right steps to clearly define what is and what is not protected speech on their platform," said Brandie Nonnecke, from University of California, Berkeley's Center for Information Technology Research in the Interest of Society.

But, she added: "Facebook has a huge job on its hands."

'Screaming out of the closet'

The complexity of that job emerges when considering hate speech, a category much more difficult to control via automation.

The firm tackled 2.5 million examples in the most recent period, up 56% on the October-to-December months.

Human moderators were involved in dealing with the bulk of these cases, but even they faced problems deciding what should stay and what should be deleted.

"There's nuance, there's context that technology just can't do yet," said Alex Schultz, the company's head of data analytics.

"So, in those cases we lean a lot still on our review team, which makes a final decision on what needs to come down."

To demonstrate this, Mr Schultz noted that words considered slurs when used as part of a homophobic attack can carry a different meaning when used by gay people themselves. Deleting every post containing a certain term would therefore be the wrong choice.

Complete story on BBC.
