Facebook disabled 1.3B fake accounts in 6 months

The report covers the six months from October 2017 to March 2018 and addresses graphic violence, nudity and sex, terrorist propaganda, spam and fake accounts.

Facebook removed 2.5 million pieces of hate speech in the three months to March, an increase of more than 50 per cent on the previous quarter.

The number of pieces of nude and sexual content that the company took action on during the period was 21 million, the same as during the final quarter of the previous year.

"We're sharing these because we think we need to be accountable", vice president of product management Guy Rosen said during a press briefing on the new report.

Facebook removed 837 million spam posts, disabled 583 million fake accounts and removed 21 million pieces of porn or adult nudity that violated its community standards in the first quarter of 2018.

The findings, its first public look at internal moderation figures, illustrate the gargantuan task Facebook faces in cleaning up the world's largest social network, where artificial-intelligence systems and thousands of human moderators are fighting back a wave of offensive content and abuse.

It attributed the increase to the enhanced use of photo detection technology.

Improved technology using artificial intelligence had helped it act on 3.4 million posts containing graphic violence, almost three times more than it had in the last quarter of 2017.

The increased transparency comes as the Menlo Park, California, company tries to make amends for a privacy scandal triggered by loose policies that allowed a data-mining company with ties to President Donald Trump's 2016 campaign to harvest personal information on as many as 87 million users.

Facebook said in a written report that of every 10,000 pieces of content viewed in the first quarter, an estimated 22 to 27 pieces contained graphic violence, up from an estimate of 16 to 19 late last year.

The first of what will be quarterly reports on standards enforcement should be as notable to investors as the company's quarterly earnings reports.

Nearly 86 per cent was found by the firm's technology before it was reported by users.

However, it declined to say how many minors - legal users who are between the ages of 13 and 17 - saw the offending content.

Facebook hopes to continue publishing reports about its content removal every quarter.

"Hate speech content often requires detailed scrutiny by our trained reviewers to understand context", explains the report, "and decide whether the material violates standards, so we tend to find and flag less of it".

The social network says when action is taken on flagged content it does not necessarily mean it has been taken down.

All in all, the company removed 583 million fake accounts, although these were not all active at the same time.

However, it said that most of the 583 million fake accounts were disabled "within minutes of registration" and that it prevents "millions of fake accounts" on a daily basis from registering. "Our metrics can vary widely for fake accounts acted on", the report notes, "driven by new cyberattacks and the variability of our detection technology's ability to find and flag them". In that case, Facebook claims it used A.I. to locate 98.5 per cent of the fake accounts it recently closed, and "nearly 100 per cent" of the spam it found.