
YouTube To Use AI and Human Moderators To Identify and Delete Hateful Videos

Jun 19, 2017

It hasn’t been long since Google found itself on the receiving end of an advertiser backlash over extremist content on YouTube. Since then, the tech giant has been working to improve moderation and filtering on YouTube and remove offensive content.

In the latest development, Google has pledged to use AI (artificial intelligence) and human moderators to identify and delete extremist videos from YouTube. To recap, the controversy erupted when various advertisers pulled their ads from YouTube after they appeared alongside extremist videos. It wasn’t just major brands; even the Australian government pulled its ads.


Many brands objected to their ads being placed on videos from controversial extremists like David Duke (a former Ku Klux Klan leader) and Steven Anderson (an anti-gay preacher who praised the terrorist attack on a gay nightclub in Orlando). With ads appearing on such videos, the publishers of this hateful content were effectively making money from it, which shouldn’t have been the case. In response, the brands stopped advertising on YouTube until the site overhauled its policies.

Google says it will devote more resources to its advanced machine learning research, training new “content classifiers” to identify hateful content. In addition to machine learning, it will also increase the number of independent human experts in YouTube’s Trusted Flagger program. These experts will make the “nuanced decisions” about where the line falls between violent propaganda and religious or newsworthy speech.

YouTube initially introduced redesigned policies and controls to win back advertisers, announcing them in a blog post earlier this month. Now, the company is taking further steps to curb hateful content on the platform.

In the latest blog post, Google says:


We have used video analysis models to find and assess more than 50 per cent of the terrorism-related content we have removed over the past six months. We will now devote more engineering resources to apply our most advanced machine learning research to train new “content classifiers” to help us more quickly identify and remove extremist and terrorism-related content.

Machines can help identify problematic videos, but human experts still play a role in nuanced decisions about the line between violent propaganda and religious or newsworthy speech. While many user flags can be inaccurate, Trusted Flagger reports are accurate over 90 percent of the time and help us scale our efforts and identify emerging areas of concern. We will expand this programme by adding 50 expert NGOs to the 63 organisations who are already part of the programme, and we will support them with operational grants.

Zero monetisation of hateful videos on YouTube

With the new moderation system, the company has pledged to take tougher action on borderline extremist videos as well. It will stop ads from appearing even on videos that are only mildly hateful, meaning these videos will no longer be able to make money on YouTube, regardless of how many views they garner. Such videos also won’t be recommended and won’t be open to comments; YouTube will keep them viewable, but behind content warnings and restrictions.
