Generative AI and the Evolving Threat to Elections: A New Era of Disinformation

Election interference is changing rapidly with the rise of generative AI. Foreign and domestic actors are using AI to produce propaganda at scale. As election seasons unfold worldwide, the impact of this technology remains uncertain, but its potential for disruption is undeniable.

The world of elections is under a growing threat, one driven by the advancements in artificial intelligence. Foreign actors have been using social media to influence elections for years, but the emergence of generative AI and large language models has opened a new chapter in this digital warfare. As we approach a wave of upcoming elections, the role of AI in shaping political narratives is set to expand, raising critical questions about the future of democracy.

Since 2016, countries have been employing social media disinformation campaigns to influence foreign elections. China and Iran, in addition to Russia, have used these tactics extensively, with no signs of slowing down in 2023 and 2024.

Generative AI and large language models have become powerful tools for producing vast amounts of text on any topic, from any perspective. These technologies are tailor-made for propaganda in the internet age.

The democratic world is gearing up for a series of elections, with 71% of people living in democracies set to vote in national elections between now and the end of next year. Countries like China and Russia have a vested interest in these outcomes, making the elections attractive targets for influence operations.

The list of countries involved in election interference has grown over the years. While Russia, China, and Iran were the early actors, the reduced cost of foreign influence campaigns now invites more countries into the arena, thanks to AI technologies like ChatGPT.

Generating propaganda content is only part of the challenge. Distributing it effectively requires a network of fake accounts, which social media platforms are increasingly adept at detecting and removing.

Propaganda has shifted from Twitter to encrypted messaging platforms like Telegram and WhatsApp, making it harder to track. TikTok, with its focus on short videos, is also gaining popularity among propagandists.

AI-powered persona bots can mimic regular users on social media, posting mostly benign content with occasional political messaging mixed in. At scale, these bots can have a substantial effect on public opinion.

Both attackers and defenders in the realm of disinformation have improved their tactics. The evolving world of social media, along with AI advancements, adds complexity to this ongoing battle.

The era of generative AI and large language models presents new challenges for election security. As AI-driven propaganda becomes more sophisticated, researchers and defenders must adapt to detect and counter these threats effectively. Being proactive in studying disinformation techniques worldwide is crucial for safeguarding the democratic process in an AI-dominated era.