Social media, in case you hadn’t noticed, is a great boon to activists of all forms. Got a message that you want to spread wide and fast? The “socials” are your friends.
But, as the authors of a paper in Science Advances explain, the same things that make social media so attractive to activists – it's easy to get started, easy to use, easy to share the workload, and easy to target just about any country from just about anywhere else – also render it vulnerable to organised manipulation campaigns by well-resourced actors, including domestic and foreign governments.
The study, led by Meysam Alizadeh of Princeton University, US, defines “co-ordinated influence operations” as campaigns by an organisation, party or state to affect one or more specific aspects of politics, at home or in another state, through social media, by producing content designed to appear to the target audience as locally made.
Since the first documented co-ordinated influence operation on social media – in South Korea in 2012 – the practice known as “political astroturfing” has spread widely. The authors claim: “There were at least 53 such influence efforts targeting 24 countries around the world from 2013 to 2018.”
Perhaps the best-known example of a social media influence operation is the alleged effort by Russia’s Internet Research Agency (IRA) to shape American politics, for which it was indicted by the US government in February 2018. With another US presidential election due in a few months, concerns about such operations are rising fast.
Alizadeh’s team have developed an automated machine learning system they say can detect social media posts involved in influence campaigns, regardless of platform and, importantly, based only on the content of the posts.
Their findings show that content-based features such as a post’s word count, webpage links and posting time can act like a digital fingerprint for such campaigns, which could help social media companies, users or investigators prevent the spread of misinformation and election interference.
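To get a feel for what “content-based features” means in practice, here’s a rough sketch – ours, not the authors’ – of how a single post might be boiled down to features of that kind. The post format and field names are purely illustrative:

```python
import re
from datetime import datetime

# A single, made-up post; the dictionary layout is purely illustrative.
post = {
    "text": "Breaking: read the real story here http://example.com/article",
    "timestamp": "2018-02-16T14:32:00",
}

def extract_features(post):
    """Reduce a post to the kind of content-based features the study describes:
    word count, which webpages it links to, and when it was posted."""
    text = post["text"]
    urls = re.findall(r"https?://\S+", text)
    posted_at = datetime.fromisoformat(post["timestamp"])
    return {
        "word_count": len(text.split()),
        "num_urls": len(urls),
        "linked_domains": [u.split("/")[2] for u in urls],  # which sites are linked
        "hour_of_day": posted_at.hour,                       # posting time
        "day_of_week": posted_at.weekday(),
    }

print(extract_features(post))
```

Taken together over thousands of posts, features like these form the “fingerprint” the researchers describe.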
Previous attempts to detect co-ordinated disinformation efforts have focused on simpler approaches, such as detecting bots or comparing the follower and friendship networks of posters. But those approaches are often stymied by posts from human agents or from brand-new accounts, and tend to be platform-specific.
Alizadeh and his colleagues hypothesised that large, online political influence operations use a relatively small number of human agents to post large amounts of content quickly, which would tend to make these posts similar in topic, word count, linked articles, and other features.
To test this, they trained their machine learning system on datasets of early activity from Russian, Chinese and Venezuelan influence campaigns on Twitter and Reddit. They found it could reliably identify those campaigns’ subsequent posts and distinguish them from posts by ordinary users.
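For readers curious about the shape of that experiment, here is a toy sketch of the train-on-early-activity, test-on-later-posts idea, built with off-the-shelf machine learning tools. The random arrays stand in for real labelled feature data, and none of this reflects the authors’ actual code:

```python
# Stand-in sketch of the train-on-early-activity, test-on-later-posts setup,
# using scikit-learn. The random arrays below are placeholders for real
# feature matrices built from post content (word count, links, timing, ...),
# with label 1 marking a known campaign post and 0 an ordinary user's post.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report

rng = np.random.default_rng(0)
n_features = 5                                   # e.g. word count, link count, posting hour...

X_early = rng.normal(size=(1000, n_features))    # features of early posts
y_early = rng.integers(0, 2, size=1000)          # 1 = campaign, 0 = ordinary
X_later = rng.normal(size=(500, n_features))     # held-out later posts
y_later = rng.integers(0, 2, size=500)

# Train only on the early window of activity...
clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X_early, y_early)

# ...then check whether the model still recognises the campaign's later posts.
print(classification_report(y_later, clf.predict(X_later)))
```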
The indications, unfortunately, are that such a system would not currently be a comprehensive solution. It was less reliable when trained on older data and when the campaign in question was more sophisticated.
The authors suggest that, while widespread use of such machine learning systems could drive bad actors to change their approach to avoid detection, doing so could also force them to adopt tactics that are more costly or less influential.
“We see two main avenues for future work [to improve the learning system],” they write. “First, developing approaches to efficiently follow influence campaigns over time should be a priority.
“Second, one could likely improve prediction performance by using features extracted from images and videos shared by trolls, developing richer feature sets based on the content linked to in posts, and implementing classification approaches that leverage longer histories at the account level.”