FILE PHOTO: Silhouettes of mobile users are seen next to a screen projection of Instagram logo in this picture illustration taken March 28, 2018. REUTERS/Dado Ruvic/Illustration/File Photo
By Jack Stubbs
LONDON (Reuters) – The success of viral memes, videos, and pictures in spreading online disinformation is fuelling organized social media manipulation on Instagram and YouTube, researchers at Oxford University said on Thursday.
In an annual report on disinformation trends, the Oxford Internet Institute’s Computational Propaganda Research Project said Facebook remained the most popular platform for social media manipulation due to its size and global reach.
But a shift toward visual content, which is more likely to be shared online, means users of Google’s YouTube video platform and Facebook’s Instagram photo-sharing site are increasingly being targeted with false or misleading messages, said Samantha Bradshaw, one of the report’s authors.
“On Instagram and YouTube it’s about the evolving nature of fake news – now there are fewer text-based websites sharing articles and it’s more about video with quick, consumable content,” she said. “Memes and videos are so easy to consume in an attention-short environment.”
The report’s findings highlight the challenges faced by Facebook, Google and other social media companies in combating the spread of political and financially motivated disinformation, as tactics and technologies develop and change.
A Facebook spokesman said showing users accurate information was a “major priority” for the company.
“We’ve developed smarter tools, greater transparency, and stronger partnerships to better identify emerging threats, stop bad actors, and reduce the spread of misinformation on Facebook, Instagram and WhatsApp,” the spokesman said.
YouTube said it had invested in policies, resources and products to tackle misinformation on its site and regularly removes content which violates its terms of use. A spokesman declined to comment on Oxford University’s findings.
Bradshaw said the move to target internet users with visual content would make it harder for social media platforms to identify and stamp out manipulated activity.
Facebook and YouTube both came under intense scrutiny over their ability to monitor and police visual content following a mass shooting in New Zealand in March.
In that incident, a gunman was able to live-stream the killing of 51 people on Facebook before internet users repeatedly shared and uploaded the video across multiple social media platforms.
“It’s easier to automatically analyse words than it is an image,” Bradshaw said. “And images are often more powerful than words with more potential to go viral.”
The Oxford University report said that increased awareness of social media manipulation meant such activity had now been identified in 70 countries worldwide, up from 28 in 2017.
“Computational propaganda has become a normal part of the digital public sphere,” the report said. “These techniques will also continue to evolve as new technologies … are poised to fundamentally reshape society and politics.”
(Reporting by Jack Stubbs; Editing by Alexandra Hudson)
Copyright 2018 Thomson Reuters.