Bad Robots: How is AI aiding disinformation campaigns?

Bad Robot Outcome:
The growth of the social media ecosystem and the continuous advancement of AI tools for disseminating information have increased the persistence and reach of disinformation attacks. Not only do such disinformation campaigns threaten national security and the general wellbeing of society, but they also call into question the fundamental ideals on which democracy is based.


The Story

Recent years have witnessed growth in disinformation campaigns, with each attack utilising a different approach than the previous one. These disinformation campaigns have ranged from coordinated boycotts to cancel culture. In 2017, a coordinated disinformation campaign targeted a group of aid volunteers called the "White Helmets" operating in war-torn Syria. The main agenda of the Russian-led disinformation attack was to discredit the volunteer group by painting them as a militia backed by Western governments whose sole purpose was sowing unrest in Syria.
The campaign was largely successful, sowing doubt about claims of chemical attacks on civilians. Similarly, concerted disinformation was witnessed during the 2016 US elections, in which Russian-backed troll farms spread false information with the intent of influencing election results. The disinformation campaign included damning Facebook and Twitter posts that attacked candidates and sowed distrust in the electoral process.
The above scenarios illustrate the scope and extent to which disinformation campaigns can be deployed. They spell out the imminence of the threat posed by disinformation campaigns and point to the advanced techniques that AI provides to those behind the attacks. For instance, the investigation into election meddling by Russian trolls identified millions of social media accounts created using AI-generated deepfake profiles, making them undetectable through conventional detection techniques. The Syrian disinformation campaign also brought into the limelight the use of AI to generate multiple posts and threads that mimic human writing, rendering false information believable. Other techniques include AI-generated deepfake images and audio that are hard to distinguish from reality.

The Fall-Out

The evolving and increasing use of AI in disinformation attacks only threatens to exacerbate an already dire situation. As it stands, governments around the world and social media companies are grappling with increased disinformation, ranging from misleading posts to institutionalised, state-backed widespread campaigns. According to researchers from OpenAI, an AI research lab studying AI and disinformation, AI tools could make disinformation campaigns easier and more efficient for the governments and organisations carrying them out. This growing ease not only increases the volume of false information; the researchers argue that fighting AI-driven disinformation could become close to impossible. AI will complicate efforts to fight disinformation and strain the available resources, further crippling the war on disinformation. For instance, the replacement of human writers by AI tools able to generate varied, believable, large-scale content would call for the vast deployment of humans to identify fake information. Yet companies with strict budgets may not be able to hire at such a significant level.

Our View

Disinformation remains a significant threat to democracy due to its ability to distort facts, drive propaganda, and fuel conspiracy theories. All of these undermine faith and trust in democratic institutions. AI provides a platform capable of carrying out rapid, short-timescale disinformation campaigns that are hard to detect and counter. Advances in AI must address methods to verify the reliability of the sources from which information emanates.

Written by:

Sarah Klain