Along with fake accounts, Facebook said in its transparency report that it had removed 21 million pieces of content featuring sex or nudity, 2.5 million pieces of hate speech and nearly 2 million items related to al-Qaida and ISIS terrorism in the first quarter of 2018.
Guy Rosen, Facebook's vice president of product management, said the company had substantially increased its efforts over the past 18 months to flag and remove inappropriate content, describing the push as part of being "accountable to the community".
The statistics cover a relatively short period, from October 2017 through March of this year, and don't disclose how long, on average, it takes Facebook to remove material violating its standards.
Most of Facebook's removal efforts, however, centered on spam and the fake accounts that promote it: the company took down 837 million pieces of spam in Q1 2018, almost all of which was flagged before any users reported it.
Most recently, the scandal involving digital consultancy Cambridge Analytica, which allegedly improperly accessed the data of up to 87 million Facebook users, put the company's content moderation into the spotlight. Facebook says it will continue to provide updated numbers every six months, and last week Alex Schultz, the company's vice president of growth, and Rosen walked reporters through exactly how the company measures violations and how it intends to deal with them. Facebook said its progress is strongest "where we've been able to build artificial intelligence technology that automatically identifies content that might violate our standards", while cautioning that "technology like artificial intelligence, while promising, is still years away from being effective for most bad content because context is so important".
Nevertheless, the company took down nearly twice as much content in both categories during this year's first quarter compared with the fourth quarter of 2017. For the most part, Facebook has not provided more details on its previously announced plan to expand its review teams, including how many reviewers will be full-time Facebook employees and how many will be contractors. This is a problem perhaps most salient in non-English-speaking countries.
The number of pieces of content depicting graphic violence that Facebook took action on during the first quarter of this year was up 183% on the previous quarter.
It attributed the increase to the enhanced use of photo detection technology.
The social network estimates that it found and flagged 85% of that content before users saw and reported it - a higher rate than in previous quarters, which it attributed to technological advances.
Small doses of nudity and graphic violence still make their way onto Facebook, even as the company gets better at detecting some objectionable content. Facebook found and flagged 95.8% of the nudity-related content before users reported it. Hate speech has proved harder to catch automatically: of the 2.5 million pieces the company took action on during the period, only 38% was flagged by its algorithms, and Facebook does not yet have figures on how often such content was viewed because it is still "developing measurement methods for this violation type". In Q1, the company also disabled 583 million fake accounts, down 16% from 694 million a quarter earlier. The renewed attempt at transparency is a nice start for a company that has come under fire for allowing its social network to host all kinds of offensive content.