Is Facebook Doing Enough to Stop Bad Content? You Be the Judge

In an 86-page report, Facebook revealed that it deleted 865.8 million posts in the first quarter of 2018, the vast majority of which were spam, with a minority of posts related to nudity, graphic violence, hate speech and terrorism.

The company has been using artificial intelligence to help pinpoint bad content, but Guy Rosen, Facebook's vice president of product management, said the technology still struggles to understand context, such as the difference between a post pushing hate and one simply recounting a personal experience.

Facebook's renewed moderation effort, spanning almost 1.5 billion accounts, resulted in 583 million fake accounts being closed in the first three months of this year, according to The Guardian. A Bloomberg report last week showed that while Facebook says it has become effective at taking down terrorist content from al-Qaeda and the Islamic State, recruitment posts for other US-designated terrorist groups are still easily found on the site.

Though Facebook extolled its forcefulness in removing content, the average user may not notice any change.

The company said most of the increase in removals was the result of improvements in detection technology.

Where Facebook once relied largely on users to report offensive material, artificial intelligence technology now does much of that work.

Such violating content remains a small share of the whole, representing between 0.22 and 0.27 percent of the total content viewed by Facebook's more than two billion users from January through March.

Over the past year, the company has repeatedly touted its plans to double its team of reviewers from 10,000 to 20,000.

Nevertheless, the company took down nearly twice as much content in both segments during this year's first quarter as in the fourth quarter of 2017.

"We believe that increased transparency tends to lead to increased accountability and responsibility over time, and publishing this information will push us to improve more quickly too", Facebook vice president of product management Guy Rosen wrote in a statement.

The report comes amid increasing criticism of how Facebook controls the content it shows users, though the company was careful to note that its new methods are evolving and aren't set in stone, CNET's Parker reports.

Several categories of violating content outlined in Facebook's moderation guidelines, including child sexual exploitation imagery, revenge porn, credible violence, suicidal posts, bullying, harassment, privacy breaches and copyright infringement, are not covered in the report.

Facebook's response to extreme content is under particular scrutiny amid reports that governments and private organizations have used the platform for disinformation campaigns and propaganda.

The social network says that taking action on flagged content does not necessarily mean the content has been taken down. Overall, the company estimated that around 3 to 4 percent of active Facebook accounts during Q1 were still fake.

While AI is getting more effective at flagging content, Facebook's human reviewers still have to finish the job.

Facebook took action on 1.9 million pieces of content related to terrorist propaganda.
