Google says AI better than humans at scrubbing extremist YouTube content

Company pledges further development to tackle the rise of extremist and illicit content and hate speech, saying advanced machine learning is the answer.

Google has pledged to continue developing advanced machine learning programs to combat the rise of extremist content, after finding that the technology was both faster and more accurate than human reviewers at scrubbing illicit content from YouTube.

The company is using machine learning along with human reviewers as part of a multi-pronged approach to tackle the spread of extremist and controversial videos across YouTube, which also includes tougher standards for videos and the recruitment of more experts to flag content in need of review.

A month after announcing the changes, and following UK home secretary Amber Rudd’s repeated calls for US technology firms to do more to tackle the rise of extremist content, Google’s YouTube has said that its machine learning systems have already made great leaps in tackling the problem.

A YouTube spokesperson said: “While these tools aren’t perfect, and aren’t right for every setting, in many cases our systems have proven more accurate than humans at flagging videos that need to be removed.

“Our initial use of machine learning has more than doubled both the number of videos we’ve removed for violent extremism, as well as the rate at which we’ve taken this kind of content down. Over 75% of the videos we’ve removed for violent extremism over the past month were taken down before receiving a single human flag.”


One of the problems YouTube has in policing its site for illicit content is that users upload 400 hours of content every minute, making filtering out extremist content in real time an enormous challenge that only an algorithmic approach is likely to manage, the company says.
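
To put that rate in perspective, here is a minimal back-of-the-envelope sketch, taking YouTube’s stated figure of 400 hours per minute and assuming (purely for illustration) that the rate holds constant around the clock:

```python
# Rough scale estimate from the stated upload rate of 400 hours of video per minute.
# Assumes the rate is constant around the clock, which is an illustrative simplification.
HOURS_UPLOADED_PER_MINUTE = 400

minutes_per_day = 60 * 24
hours_per_day = HOURS_UPLOADED_PER_MINUTE * minutes_per_day  # 576,000 hours per day

# Expressed as continuous viewing time: roughly 66 years of footage arriving every day.
years_of_footage_per_day = hours_per_day / (24 * 365)
print(f"{hours_per_day:,} hours uploaded per day (~{years_of_footage_per_day:.0f} years of footage)")
```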

YouTube also said that it had begun working with 15 more NGOs and institutions, including the Anti-Defamation League, the No Hate Speech Movement, and the Institute for Strategic Dialogue in an effort to improve the system’s understanding of issues around hate speech, radicalisation and terrorism to better deal with objectionable content.

Under the tougher standards, controversial videos that do not clearly breach YouTube’s policies will remain on the site but with restrictions. A YouTube spokesperson said: “The videos will remain on YouTube behind an interstitial, won’t be recommended, won’t be monetised, and won’t have key features including comments, suggested videos, and likes.”


YouTube has also begun redirecting searches with certain keywords to playlists of curated videos that confront and debunk violent extremist messages, as part of its effort to help prevent radicalisation.

Google plans to continue developing the machine learning technology and to collaborate with other technology companies to tackle online extremism.

YouTube is the world’s largest video hosting service and is one of the places where extremist and objectionable content ends up, even when it originates on, and has been removed from, other services such as Facebook, making it a key battleground.

Big-name brands, including GSK, Pepsi, Walmart, Johnson & Johnson, the UK government and the Guardian pulled millions of pounds of advertising from YouTube and other social media properties after it was found their ads were placed next to extremist content.
