In the latest stop on its post-Cambridge Analytica transparency tour, Facebook today unveiled its first-ever Community Standards Enforcement Report, an 81-page tome that spells out how much objectionable content is removed from the site in six key areas.
The full report is available online and details Facebook's content values as well as enforcement metrics, including the deletion of 583 million fake accounts and a harsher crackdown on various kinds of offending content.
A spokeswoman later said that Facebook blocks "disturbing or sensitive content such as graphic violence" so that users under 18 cannot see it, "regardless of whether it is removed from Facebook".
Removing the fake accounts is significant, as Facebook works on damage control after bots were allegedly used to influence elections.
Graphic violence: During Q1, Facebook took action against 3.4 million pieces of content for graphic violence, up 183% from 1.2 million during Q4. The company said its technologies were able to detect 85.6% of the posts before they were flagged, much higher than the previous quarter's 71.6%. But the report also indicates Facebook is having trouble detecting hate speech, and only becomes aware of a majority of it when users report the problem.
Facebook defines graphic violence as content that glorifies violence or celebrates the suffering or humiliation of others; such content, it says, may be covered with a warning and prevented from being shown to underage viewers.
Facebook took action on 1.9 million pieces of content over terrorist propaganda.
All told, Facebook took action on almost 1.6 billion pieces of content during the six months ending in March, a tiny fraction of all the activity on its social network, according to the company.
The company said most of the increase was the result of improvements in detection technology.
Hate speech is harder to police using automated methods, however, as racist or homophobic slurs are often quoted in posts by their targets or by activists. "We removed 2.5 million pieces of hate speech in Q1 2018 - 38 percent of which was flagged by our technology". By contrast, all 836 million spam posts removed were flagged by an artificial intelligence program before human users reported them, according to the report.
More than a quarter of the human race accesses the platform, with two billion monthly users.
While the removal of 583 million fake Facebook accounts is certainly noteworthy, it does little to address concerns regarding actual user privacy.
Zuckerberg's statement and Rosen's blog post come one day after Ime Archibong, Facebook's Vice President of Product Partnerships, revealed that Facebook would remove apps that leaked user data. Facebook said the number of fake accounts tends to fluctuate from quarter to quarter, and the figure doesn't include the "millions" of fake accounts the company says it catches before they can finish registering.