In early 2024, voters in New Hampshire were startled by a robocall that sounded unmistakably like U.S. President Joe Biden. The message? Don’t vote. But it wasn’t the President — it was an AI-generated deepfake.
That moment marked a turning point in the growing intersection between artificial intelligence and democracy. The age of misinformation has been turbocharged by AI, and elections around the world are now under a new kind of threat: one that doesn’t rely on ballot-stuffing or visible rigging, but on synthetic voices, fake videos, and micro-targeted manipulation.
“We are witnessing the rise of machine-generated propaganda at a scale never seen before,” says Daniel Weitzner, founding director of MIT’s Internet Policy Research Initiative. “It’s fast, it’s cheap, and it’s persuasive.”
The New Weapon of Political Disruption
From Washington to Warsaw, and from Manila to Multan, AI is being used to alter public perception and influence democratic decisions — often without the voter ever realizing it.
In Slovakia’s 2023 parliamentary elections, a deepfake audio clip falsely portraying a liberal party leader discussing vote-rigging went viral just two days before polling — during the country’s pre-election silence period, when candidates and media faced restrictions on responding. The damage was done before any correction could land. (Source: Journalist’s Resource, Shorenstein Center)
Similarly, in Nigeria, doctored AI-generated audio emerged just before the 2023 general elections, targeting a major opposition candidate. The result: growing public doubt over the legitimacy of the process. (Source: Journalist’s Resource, Shorenstein Center)
In the Philippines, Reuters uncovered how fake Facebook accounts pushed AI-generated praise for former President Duterte and launched online smear campaigns against opposition voices. These attacks included coordinated disinformation campaigns tied to political networks. (Reuters, April 11, 2025)
Asia on Alert: Taiwan and Pakistan’s Warning Signs
In Taiwan, officials have accused Beijing of using generative AI to divide Taiwanese society ahead of elections, by flooding social platforms with fake news and emotionally charged content. (Reuters, April 8, 2025)
And in Pakistan, where electoral processes have long been under scrutiny, AI is slowly entering the arena. Political parties are now experimenting with AI-generated speeches, slogans, and video messages tailored to specific demographics. With no legal framework for regulating AI in campaigns, and with vast voter databases potentially vulnerable to misuse, the question isn’t if but when this technology will be abused.
“Pakistan lacks robust digital legislation,” notes digital rights activist Nighat Dad. “With AI tools becoming accessible, the risks of synthetic political manipulation are alarmingly high.”
Not All Doom: The Democratic Side of AI
Despite the dangers, AI also holds potential to protect the democratic process — if used responsibly. AI-powered tools can detect fake videos and doctored media faster than human moderators. They can help verify election results, improve voter registration systems, and flag disinformation in real time.
Estonia pairs its long-running internet voting system with blockchain-backed data-integrity checks and AI-assisted auditing tools to keep digital democracy transparent. Brazil’s Superior Electoral Court has also integrated AI systems to monitor electoral disinformation on social platforms. (Brookings Institution)
The key lies in transparency, oversight, and accountability — three areas where most countries, especially developing democracies, are still lagging.
Pakistan’s Urgent Need for Regulation
With elections looming in many regions, the time for Pakistan and similar nations to act is now. Laws must be drafted to govern the use of AI in political communication. Electoral commissions need digital forensics units capable of authenticating audio and video. Political parties must disclose when and how AI is used in their campaigning.
If left unchecked, AI won’t just interfere with how votes are cast — it could change how people think about their choices.
“It’s not just about electoral fraud anymore,” says Dr. Usman Zafar, a Lahore-based tech researcher. “It’s about emotional manipulation, real-time deception, and undermining public trust in democracy itself.”
Conclusion: A Tool or a Threat?
AI is not inherently good or evil — it’s a tool. But like any powerful tool, its impact depends on who wields it and how.
If governments fail to regulate and educate, AI could indeed become the silent killer of the ballot. But if used wisely and transparently, it might just help restore integrity to elections — by protecting them from the very threats it enables.
The choice is ours. The clock is ticking.