The focus of the disinformation in Germany was on provoking division following the rise of the far-right AfD. Photograph: Jens Meyer/AP

241m Europeans 'may have received Russian-linked disinformation'


Research says malign actors online tried to craft individual narrative for each EU state

Around half of all Europeans could have been exposed to disinformation promoted by social media accounts linked to Russia before the European elections, an analysis suggests.

Evidence of 6,700 so-called “bad actors” posting enough content to reach up to 241 million users was discovered by researchers examining the scale of the threat.

There was no “all-purpose” content but locally created material was being amplified to craft a narrative for each EU member state, according to the study of a 10-day period from 1 to 10 March.

The report’s authors found evidence of malign actors seeking to shape specific news developments in Europe, including the Commons debate over whether to back Theresa May’s Brexit deal, during which divisive content was actively spread.

On 4 March 2019, an article published by the French president, Emmanuel Macron, about the future of Europe provoked a 79% increase in activity within 24 hours by accounts mostly promoting or sharing content attempting to discredit his ideas.

In Germany, the focus was on building a divisive narrative over immigration policy in the wake of the Syrian refugee crisis and the rise of the far-right AfD party.

The disinformation was pushed via automated bots programmed to pick up specific text cues and by humans sometimes using software to communicate through multiple accounts at the same time and potentially avoiding bot detection algorithms.

The company that produced the research, the online security firm SafeGuard Cyber, said it had a database of more than 500,000 known troll and bot accounts and was confident of their Russian links, although this could not be independently verified by the Guardian.

The European commissioner Sir Julian King said the evidence “underlines the dangers of disinformation online”.

He said: “Malicious actors, whether they be state or non-state, will not hesitate to use the internet to attempt to influence and interfere in our democratic processes.

“We have achieved a lot in the past year in our work to counter this threat. But more remains to be done on all sides, including by the big internet platforms – it is vital we ensure the security of our elections.”

King was one of a number of EU officials who, the research suggested, were also being targeted by malign actors. Thirteen per cent of his Twitter followers were found to be suspicious.

Otavio Freire, the co-founder of SafeGuard Cyber, said: “The scale of the problem is tremendous. The rise of disinformation campaigns is abetted by the fact that it is incredibly difficult to stop their spread on social platforms.

“Bad actors realise that hacking election infrastructure, and hacking the perception of reality and facts, are ultimately tactics to accomplish similar outcomes.

“The former you need to get past firewalls, while the latter continues to be unprotected. Our report reinforces the need for a new approach to security, as today’s bad actors are not at all hindered by the cybersecurity tactics of yesterday.”

SafeGuard Cyber said it identified likely Russian-backed actors on social media through examining the content of their posts, location, time of posting and relationship to other bots.
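The signals the firm lists – post content, location, posting time and relationships to other bots – can be illustrated with a simple rule-based score. Everything below (the field names, weights and thresholds) is an illustrative assumption for the sake of the sketch, not SafeGuard Cyber’s actual, undisclosed method.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Account:
    handle: str
    post_hours_utc: List[int]    # hour of day (0-23) for each observed post
    known_bot_followers: int     # ties to accounts already flagged as bots
    follower_count: int
    duplicate_post_ratio: float  # share of posts repeating other flagged accounts

def suspicion_score(acct: Account) -> float:
    """Combine simple heuristics into a 0..1 score (illustrative weights)."""
    score = 0.0
    # Round-the-clock posting suggests automation rather than a human schedule.
    if len(set(acct.post_hours_utc)) >= 20:
        score += 0.4
    # Heavy amplification of identical content from other flagged accounts.
    if acct.duplicate_post_ratio > 0.5:
        score += 0.3
    # Network ties: an unusual share of followers are known bots.
    if acct.follower_count and acct.known_bot_followers / acct.follower_count > 0.1:
        score += 0.3
    return min(score, 1.0)

bot_like = Account("amplifier01", list(range(24)), 500, 2000, 0.8)
human_like = Account("reader42", [8, 12, 13, 19], 3, 400, 0.05)

print(suspicion_score(bot_like))    # high score
print(suspicion_score(human_like))  # low score
```

In practice, detection systems of this kind combine many more signals and learned weights; the point of the sketch is only that each signal named in the report maps to a measurable feature of an account.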

Freire said: “To use an analogy, we have a copy of the blueprint that Russia uses to build its accounts, but we cannot go into too great a level of detail for revealing things that they will then figure out how to engineer around”.
