Image: Miniature Facebook banners are seen on snacks prepared for the visit by Facebook’s Chief Operating Officer in Paris, France, January 17, 2017. REUTERS/Philippe Wojazer/Files
By Julia Fioretti
BRUSSELS (Reuters) – Technology companies Facebook, Google’s YouTube, Twitter and Microsoft said on Monday they were forming a global working group to combine their efforts to remove terrorist content from their platforms.
Responding to pressure from governments in Europe and the United States after a spate of militant attacks, the companies said they would share technical solutions for removing terrorist content, commission research to inform their counter-speech efforts and work more with counter-terrorism experts.
The Global Internet Forum to Counter Terrorism “will formalise and structure existing and future areas of collaboration between our companies and foster cooperation with smaller tech companies, civil society groups and academics, governments and supra-national bodies such as the EU and the UN,” the companies said in a statement.
The move comes on the heels of last week’s call from European heads of state for tech firms to establish an industry forum and develop new technology and tools to improve the automatic detection and removal of extremist content.
The political pressure on the companies has raised the prospect of new legislation at EU level, but so far only Germany has proposed a law fining social media networks up to 50 million euros ($56 million) if they fail to remove hateful postings quickly. The lower house of the German parliament is expected to vote on the law this week.
The companies will seek to improve technical measures such as a database created in December to share the unique digital fingerprints they automatically assign to videos or photos of extremist content.
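The statement does not describe how the shared database works internally; the sketch below is only a hypothetical illustration of the general idea, in Python. The function names and the use of a plain SHA-256 hash are assumptions made for clarity; a production system would more likely rely on perceptual fingerprints that survive re-encoding, resizing and cropping.

```python
import hashlib

# Hypothetical in-memory stand-in for the shared industry fingerprint database.
# The real consortium database and its APIs are not public; all names here are invented.
shared_fingerprint_db = set()

def fingerprint(file_bytes: bytes) -> str:
    """Return a digital fingerprint for an uploaded image or video.

    A plain SHA-256 is used purely for illustration; a real system would
    more likely use a perceptual hash robust to minor edits.
    """
    return hashlib.sha256(file_bytes).hexdigest()

def share_fingerprint(file_bytes: bytes) -> None:
    """After one platform removes a piece of content, publish its fingerprint
    so other platforms can detect re-uploads without exchanging the file itself."""
    shared_fingerprint_db.add(fingerprint(file_bytes))

def flag_known_content(file_bytes: bytes) -> bool:
    """Check an upload against fingerprints already shared by other platforms."""
    return fingerprint(file_bytes) in shared_fingerprint_db

# Example: platform A removes a video and shares its fingerprint;
# platform B can then recognise the identical file at upload time.
removed_video = b"...video bytes..."
share_fingerprint(removed_video)
print(flag_known_content(removed_video))  # True
```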
They will also exchange best practices on content detection techniques using machine learning as well as define “standard transparency reporting methods for terrorist content removals.”
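The statement does not specify which machine-learning techniques the companies have in mind. As a rough, hypothetical illustration of what automated content detection can look like, the sketch below trains a simple text classifier that routes high-scoring posts to human review; the training examples, labels and threshold are invented placeholders, and real systems also handle images, video and audio.

```python
# Hypothetical illustration only: a TF-IDF + logistic-regression text classifier.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Placeholder training data; 1 = flag for human review, 0 = leave up.
train_texts = [
    "placeholder extremist propaganda text",
    "ordinary news commentary",
    "placeholder recruitment message",
    "harmless vacation post",
]
train_labels = [1, 0, 1, 0]

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(train_texts, train_labels)

def flag_for_review(text: str, threshold: float = 0.8) -> bool:
    """Send a post to human reviewers if the model's score exceeds the threshold."""
    return model.predict_proba([text])[0][1] >= threshold
```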
Earlier this month, Facebook opened up about its efforts to remove terrorist content, responding to criticism from politicians that tech giants are not doing enough to stop militant groups from using their platforms for propaganda and recruitment.
Google announced additional measures to identify and remove terrorist or violent extremist content on its video-sharing platform YouTube shortly thereafter.
Twitter suspended 376,890 accounts for violations related to the promotion of terrorism in the second half of 2016 and will share further updates on its efforts to combat violent extremism on its platform in its next Transparency Report.
The social media firms said they would work with smaller companies to help them tackle extremist content, and with organisations such as the Center for Strategic and International Studies on ways to counter online extremism and hate.
All four companies have initiatives to counter online hate speech and will use the forum to improve their efforts and train civil society organisations engaged in similar work.
(Reporting by Julia Fioretti, editing by David Evans and Jane Merriman)
Copyright 2017 Thomson Reuters.