As elections approach around the globe, scientists are sounding a cautionary note. Conversational AI may be capable of influencing which candidate we support at the ballot box.
Two major studies show how artificial intelligence chatbots, when prompted to advocate politically, can sway voters' opinions, even on firmly held presidential preferences.
And it only takes one short conversation.
The research was led by a multi-university team from the United Kingdom, United States and Poland. The aim was to understand whether AI-generated persuasion could meaningfully shift political opinions and, if so, how.
To find out, researchers developed chatbots trained to advocate for specific political candidates or policy positions. Tens of thousands of participants across several countries engaged in structured, back-and-forth conversations with the bots lasting about six to ten minutes. Afterward, researchers measured whether their attitudes had changed.
The first study, reported in the journal Nature, focused on upcoming national elections. More than 2,300 Americans were recruited two months before the 2024 U.S. presidential election. They were asked first to indicate their preferences on a 100-point scale, then to chat with an AI chatbot that was intentionally biased, either toward Donald Trump or Kamala Harris.
The results were significant. The pro-Harris model shifted likely Trump voters 3.9 points toward Harris, an effect roughly four times larger than that of traditional ads tested during the 2016 and 2020 elections. The pro-Trump model nudged Harris voters 1.51 points toward Trump.
Similar experiments were run in Canada and Poland, involving 1,530 Canadians ahead of the 2025 federal election and 2,118 Poles before the 2025 presidential vote. There, the shifts were notably larger, at around ten percentage points. “This was a shockingly large effect to me, especially in the context of presidential politics,” David Rand, Ph.D., senior author on both papers and a professor of information science, marketing and management communications at the Massachusetts Institute of Technology (MIT), said in a media release.
What made chatbots so persuasive? It was the sheer volume of information. The chatbots generated numerous, but often inaccurate, factual claims, filling conversations with “evidence” and policy-specific information. When researchers limited the bots' ability to provide “facts,” persuasion plummeted. In other words, information density, not emotional manipulation, drove influence.
Accuracy varied. The studies found that models advocating for right-leaning candidates included more false claims than those supporting left-leaning candidates. This pattern mirrored earlier research showing that misinformation circulates more widely in conservative digital spaces.
A second paper, published in Science simultaneously with the first study, extended the inquiry to nearly 77,000 participants in the United Kingdom, spanning 707 political issues. Researchers tested 19 different large language models under varying conditions. Here too, the biggest persuasion gains came not from personalization, but from sheer information delivery. One model shifted people who initially disagreed with a political stance by more than 25 percentage points.
Rand explains the trade-off: “LLMs (large language models) can really move people's attitudes towards presidential candidates and policies and they do it by providing many factual claims that support their side. But those claims aren't necessarily accurate — and even arguments built on accurate claims can still mislead by omission.”
Importantly, the researchers emphasized transparency, even if the chatbots didn't. Every participant was told they were speaking with an AI chatbot and was later fully debriefed, and persuasion direction was randomized so that no group's overall voting outcome shifted.
These findings reveal both promise and peril. On the positive side, a related study found that AI could reduce belief in conspiracy theories simply by engaging people with reasoned argument. In those cases, participants were persuaded not because AI presented itself as authoritative, but because the content was compelling. On the other hand, the sheer speed at which an AI can produce what are supposedly evidence-based arguments raises deep ethical concerns.
While humans require time and knowledge to craft a persuasive explanation, chatbots do it instantly and repeatedly.
These findings don't mean democracy is doomed, but they underscore the urgency of designing ethical frameworks before chatbots shape not just opinions, but outcomes. The next frontier may be guardrails: systems could be audited for accuracy, flagged for misleading claims or required to disclose training biases. But political deployment is already happening. As Rand warns, “The challenge now is finding ways to limit the harm and to help people recognize and resist AI persuasion.”



