Google and Facebook: new measures to tackle extremist content
Google and Facebook have announced measures to combat the spread of extremist content online, which will also help brands protect their online reputations.
Google has set out four measures to combat the spread of extremist content on YouTube. The move comes amid mounting pressure from governments and brands for it to identify and remove terrorist propaganda and prevent its spread.
Brand reputation and protection – extremist content
Google has said it’s working with government, law enforcement and civil society groups to tackle the problem of violent extremism online.
In an editorial published in the Financial Times, Kent Walker, Senior Vice President and General Counsel of Google, said: “Google and YouTube are committed to being part of the solution.” He said that Google has worked hard to remove terrorist content but acknowledged that more needs to be done.
This is a direct response to an investigation by The Times, which found that some of the world’s largest brands had inadvertently funded terrorism and extremist hate speech by having their advertising appear alongside unverified YouTube videos.
In a blow to Google’s reputation, as well as to the brands inadvertently involved, the investigation led hundreds of major brands to pull their advertising spend from the platform. HSBC, L’Oréal, RBS, Toyota and the Guardian pulled advertising from Google, and advertising giant Havas Group UK paused all YouTube and Google Display Network ads to ensure brand safety for clients including Royal Mail, the BBC and Domino’s. Mercedes-Benz and Thomson Reuters suspended digital advertising while they investigated the issue.
The first of the four steps will involve Google using machine learning to train automated systems to better identify terror-related videos on YouTube (a simplified sketch of what that kind of classification can look like follows the four steps).
The second is that YouTube is expanding its Trusted Flagger programme, a network of experts who review flagged content for violations of its community guidelines. This extra resource will enable it to target specific types of videos.
The third is a tougher stance on videos that don’t clearly violate policies but still contribute to the problem, such as those with inflammatory religious or supremacist content; these will be hidden behind a warning and won’t be allowed to generate advertising revenue.
The fourth is that it will build on its Creators for Change programme, redirecting users targeted by extremist groups towards counter-extremist content.
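Google hasn’t published details of the models behind that first step, so the following Python example is only a rough, hypothetical sketch of what supervised classification of video metadata can look like, using scikit-learn. Every title, label and threshold in it is invented for illustration; a real system would draw on far richer signals than text alone.

# Illustrative sketch only: Google has not published its models or training data.
# This shows the general shape of supervised classification on video metadata
# (titles/descriptions) with scikit-learn. All examples and labels are invented.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny, made-up training set: text paired with a label
# (1 = flag for human review, 0 = leave alone).
training_texts = [
    "join the fight against the infidels propaganda video",
    "martyrdom operation footage glorifying attack",
    "cute cats compilation 2017",
    "how to bake sourdough bread at home",
]
training_labels = [1, 1, 0, 0]

# TF-IDF features feeding a logistic regression classifier.
classifier = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
classifier.fit(training_texts, training_labels)

# Score new metadata; high probabilities would be queued for human review,
# not removed automatically.
new_texts = ["glorifying attack footage", "sourdough starter tips"]
for text, prob in zip(new_texts, classifier.predict_proba(new_texts)[:, 1]):
    print(f"{prob:.2f}  {text}")

The point is the shape of the approach rather than the specifics: human-labelled examples train a model whose scores prioritise likely violations for review.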
Proactive, targeted and collaborative focus on extremist content
Google has also pledged to work with Facebook, Twitter and Microsoft to establish an industry body to produce technology that smaller companies can use to ‘police’ offensive content.
Just last week Facebook revealed how it’s using artificial intelligence (AI) to keep terrorists off the social media platform. In a blog post entitled ‘Hard Questions: How We Counter Terrorism’, Monika Bickert, Director of Global Policy Management, and Brian Fishman, Counterterrorism Policy Manager, said that keeping people safe on Facebook is critical to its mission.
The post explains how Facebook’s team is:
• Using image recognition to prevent terrorist content from reaching the social media platform (a simplified sketch of this kind of matching follows this list)
• Experimenting with language understanding and text analysis to understand and detect text that advocates terrorism
• Using algorithms to identify terrorist clusters – pages, groups, posts or profiles supporting terrorism
• Improving how it detects new fake accounts
• Working on systems to act across all of its platforms including WhatsApp and Instagram
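Facebook hasn’t published how those systems work, but the first point, stopping previously identified material from being re-uploaded, is often described as matching new uploads against a database of hashes of known content. The sketch below is a deliberately simplified, hypothetical illustration of that idea using only Python’s standard library and exact byte-level hashes; every name in it is made up, and production systems rely on robust perceptual hashing and shared industry hash databases.

# Much-simplified illustration of hash-based image matching.
# This sketch only does exact byte-level matching with Python's standard
# library; all names and hash values here are placeholders.
import hashlib

# Hashes of images that were previously removed (placeholder value:
# the SHA-256 of the bytes b"test").
known_banned_hashes = {
    "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
}

def sha256_of(image_bytes: bytes) -> str:
    """Return the hex SHA-256 digest of raw image bytes."""
    return hashlib.sha256(image_bytes).hexdigest()

def should_block(image_bytes: bytes) -> bool:
    """Block an upload if its hash matches previously removed content."""
    return sha256_of(image_bytes) in known_banned_hashes

# Example: b"test" hashes to the placeholder value above, so it is blocked;
# any other bytes pass through.
print(should_block(b"test"))         # True
print(should_block(b"holiday photo"))  # False

Exact hashes break as soon as an image is re-encoded or cropped, which is why real deployments use perceptual hashes that tolerate such changes.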
Facebook says it is also growing its team of counterterrorism specialists and strengthening partnerships with third parties. The post ends by saying: “We want Facebook to be a hostile place for terrorists. The challenge for online communities is the same as it is for real world communities – to get better at spotting the early signals before it’s too late. We are absolutely committed to keeping terrorism off our platform, and we’ll continue to share more about this work as it develops in the future.”
While some brands, such as McDonald’s, have resumed spending on YouTube, many more haven’t as they wait for ‘guarantees’ from Google to allay concerns over brand safety and reputation. Will these latest moves be enough to encourage them back?