YouTube to Expand Teams Reviewing Extremist Content

In the past year, YouTube has tightened its policies on what content can appear on the platform, increased its enforcement teams and invested in new machine learning technology to scale the efforts of its human moderators.

Google will expand the teams that identify and remove extremist content, hate speech and material depicting child cruelty from YouTube, following allegations that the platform profited from unsuitable footage it had failed to remove.

Reuters reported last month that several major advertisers, including Lidl and Mars, had pulled ads from the platform over "clips of scantily clad children". By training its algorithms to flag other types of videos, such as the questionable uploads that targeted children, the platform will be able to take them down far faster than it can now.

On transparency, YouTube will publish a regular report in 2018 offering additional data about the flags it receives and the actions taken to remove videos and comments that violate the company's content policies. YouTube CEO Susan Wojcicki did not say how many people now monitor the platform for offensive videos.

Addressing creators' monetization concerns in a separate post on YouTube's Creators blog, Wojcicki said creators have made it clear that YouTube needs to exercise more careful judgment when reviewing content so that valid videos are not demonetized.

She said: "We want advertisers to have peace of mind that their ads are running alongside content that reflects their brand's values".

"Some bad actors are exploiting our openness to mislead, manipulate, harass or even harm", Wojcicki said, adding that YouTube's trust and safety teams have reviewed almost 2 million videos for violent extremist content over the past six months.

The company will also focus on training its machine-learning algorithm to help human reviewers identify and terminate accounts and comments violating the site's rules.

"We will be talking to creators over the next few weeks to hone this new approach", YouTube CEO said.

With some 150,000 videos purged from YouTube and two million screened for violent extremist content since June, Google is ramping up its efforts to sweep away extremist and hateful content.

The chief executive of YouTube has vowed to hire more staff and use cutting-edge machine learning technology to continue its fight against violent and extremist content.

"Because we have seen these positive results, we have begun training machine-learning technology across other challenging content areas, including child safety and hate speech", she said.