Facebook offers a peek at how it moderates content

With billions of users on Facebook (FB), it was often believed that Mark Zuckerberg and his team could never successfully police the massive amount of data that users post every second. However, in light of the recent data scandal, the company has now offered a peek into how it deals with content that violates its rules.

The report covers the company’s efforts from October 2017 through March 2018, and the actions taken in the first quarter of 2018. It is also the company’s first step in its ongoing Community Standards enforcement efforts.

During the first quarter, the social media company deleted nearly 837 million posts identified as spam and disabled nearly 583 million fake accounts. According to Facebook, around 3–4% of its active accounts were not genuine.

Around 21 million posts featuring adult nudity and sexual activity were flagged by the social media company during the quarter. It also removed 86% of posts containing graphic violence and 2.5 million hate speech posts. Facebook said it took down 1.9 million pieces of content related to terrorist groups, including ISIS and Al-Qaeda.

According to the social media giant, improved technology and machine learning helped increase the amount of flagged content. The company added that it is investing heavily in advanced technology to make Facebook a safe networking site.
