Can Democracy Withstand AI Misinformation?
Photo credits: “AI End‑Scenario 3: Necessary Rescue” by tamingtheaibeast.org, published on August 20, 2018, licensed under Creative Commons. No changes were made.

I. Introduction

Just before the January 2024 Democratic primary in New Hampshire, thousands of voters received an unusual phone call from what sounded like President Joe Biden telling them to “save your vote for the November election.” But the call wasn’t real: it was an AI-generated deepfake. The man behind the robocall, political consultant Steve Kramer, later claimed it was meant to “raise awareness” about AI in politics. Authorities didn’t buy it: he is now facing felony charges for voter suppression, and the Federal Communications Commission (FCC) has proposed a $6 million fine.

This wasn’t an isolated glitch. As AI tools become more powerful and accessible, distinguishing real from fabricated information is getting harder, directly threatening democracy. With national elections underway in 72 countries in 2024, what happens when voters can’t trust what they see or hear?

This article explores how synthetic misinformation, such as deepfakes and AI-generated content, is eroding trust in democratic institutions, undermining free elections, and challenging governments to keep up with rapidly evolving threats.

II. The Misinformation Threat in 2024

In 2024, the world saw something unprecedented: billions of voters headed to the polls during a global “super-cycle” of elections, just as artificial intelligence quietly transformed how campaigns were run and information was spread. With over 3.7 billion eligible voters across 72 countries, including major democracies like the U.S., India, and the U.K., this was the year democracy and AI collided.

Generative AI tools now make it disturbingly easy to create false videos, clone voices, and write convincing articles—all in minutes. What once took a team now takes one person and a laptop. The result? A flood of synthetic content that often looks genuine. Deepfakes, bots, and algorithmic disinformation are now spreading faster than fact-checkers can keep up.

In India, political parties have already used AI to fabricate speeches and alter campaign images. In the U.K., deepfake videos of leaders in compromising situations have gone viral. But this isn’t just about “fake media”—when voters can’t tell fact from fiction, confidence in democratic institutions starts to crack. And unless regulation, public awareness, and digital literacy catch up fast, restoring trust may become one of the defining struggles of our political future.

III. Real-World Impacts

United States

In the U.S., concerns over AI’s role in elections have surged. A recent Pew Research Center survey found that 56% of Republicans and 58% of Democrats were either “extremely” or “very” worried about how artificial intelligence could affect the electoral process. Nearly 80% said they didn’t trust tech companies to prevent misuse. In another alarming case, a fabricated video of former President Biden making inflammatory comments about Israel went viral on X (formerly Twitter), sparking outrage and confusion before it was debunked. Experts confirmed the video had been manipulated using generative AI, showing how easily AI content can mislead a polarized electorate.

India

India’s general election saw a wave of AI-generated deepfakes designed to stir public emotion and confusion. Political parties used AI to fabricate speeches, alter photos, and create highly shareable—but misleading—memes. In one viral case, a doctored video falsely showed BJP leader Amit Shah announcing a plan to cut reservations for marginalized communities, sparking widespread outrage and legal action. This wasn’t just dirty campaigning—it was calculated strategy, weaponized by AI.

United Kingdom

In the U.K., AI-generated misinformation also spread widely. Deepfake videos depicted political leaders in compromising situations. One falsely showed then–Prime Minister Rishi Sunak saying he would send 18-year-olds into war zones; another appeared to show PM Keir Starmer angrily swearing at a staffer. The Sunak video alone drew over 400,000 views.

The public is paying attention. A national survey found that 23% of Britons no longer trust any political content they see on social media. Another 29% say they only trust posts from verified sources. Alarmingly, 67% believe deepfakes and fake news pose a serious threat to the future of democracy in the U.K.

IV. Who’s Responding – and How

As AI-generated misinformation becomes harder to ignore, governments, tech platforms, and civil society groups are beginning to take action. 

Governments: Regulatory Initiatives

United States

In October 2023, former President Biden signed Executive Order 14110, launching a national roadmap for AI oversight. It directs federal agencies to evaluate AI-related risks, including misinformation, and establish clear rules for responsible use. The FCC, meanwhile, took a more direct approach by proposing a $6 million fine against political consultant Steve Kramer for using an AI-generated Biden robocall to mislead voters. It’s one of the first federal enforcement moves targeting AI disinformation in an election.

India

India has leaned into international collaboration. In early 2024, it partnered with France to promote ethical, inclusive AI development with an emphasis on democratic safeguards. At the national level, there is growing pressure for tougher enforcement mechanisms to match the country’s fast-paced AI adoption in politics.

United Kingdom

The U.K. has taken the legislative route. The Online Safety Act, passed in 2023, gives media regulator Ofcom new powers to monitor online platforms and crack down on harmful digital content, including AI-generated misinformation. Platforms are now expected to proactively label or remove misleading synthetic media before it goes viral.

Tech Platforms: Policy Shifts and Gaps

Meta (Facebook, Instagram, Threads)
Meta has started labeling AI-generated or altered media that could mislead users, particularly in political contexts. Instead of banning such content outright, the company is relying on transparency as its core strategy.

X (formerly Twitter)
Under Elon Musk, X has adopted a more hands-off approach. Its AI assistant, Grok, was caught spreading false information during the 2024 campaign season. Critics say X remains a hotspot for unchecked misinformation.

TikTok
TikTok has taken a more proactive stance. It partners with fact-checkers like PolitiFact to identify and label AI-generated misinformation. The platform also flags synthetic media and removes paid content that violates its integrity policies.

Civil Society: Advocacy and Innovation

AI for Good Initiatives
International bodies like the International Telecommunication Union (ITU) are hosting summits under the banner of AI for Good, uniting stakeholders to guide ethical AI use. One key priority is limiting the spread of disinformation in democratic spaces.

OpenAI’s AI for Impact Accelerator
OpenAI has launched a funding initiative to support nonprofits in the Global South, particularly in India, working on AI projects for the public good. While the focus isn’t solely on misinformation, many of these initiatives aim to build digital literacy and help communities resist manipulation online.

V. What’s at Stake

Free and fair elections rely on access to trusted information—without it, how can voters make informed choices?

AI-driven misinformation doesn’t just confuse; it corrodes. It fuels doubt and disproportionately targets those already on the margins: young voters, minorities, and non-native speakers. The result is growing distrust in institutions, the media, and even the act of voting itself.

People tune out. Cynicism takes root. And democracy doesn’t collapse all at once—it just quietly stops working.

VI. Future Outlook

As AI-generated misinformation grows more sophisticated, the real question isn’t just how bad it could get, but what we’re doing to stop it. Efforts to detect and regulate AI are underway, but still lag behind the technology.

We face two possible futures: one where trust collapses under the weight of deepfakes and deception, and another where AI is used to protect truth and strengthen democracy. The outcome depends on the choices we make now.

VII. Conclusion

When New Hampshire voters got robocalls mimicking President Biden’s voice, it wasn’t just a political stunt—it was a wake-up call. The threat isn’t coming. It’s already here.

So, how do we vote wisely when reality is up for grabs?
How do we rebuild trust when even our senses can betray us?

There’s no simple answer.
But ignoring the question isn’t one either.

This is an article written by a Staff Writer. Catalyst is a student-led platform that fosters engagement with global issues from a learning perspective. The opinions expressed above do not necessarily reflect the views of the publication.

Edited by Lara Cevasco
