Science & Technology

How AI is repeating the Kremlin's lines on Ukraine

A new study finds that large language models trained on polluted information are echoing pro-Russian talking points about Ukraine.

Russia's President Vladimir Putin (C), accompanied by Russia's top lender Sberbank CEO German Gref (L), touring an exhibition on the sidelines of an AI conference in Moscow on November 24, 2023. [Mikhail Klimentyev/POOL/AFP]

By Galina Korol

Ukraine's battlefield has expanded into an unlikely arena: artificial intelligence (AI).

An August study by Texty.org.ua and OpenBabylon found that large language models -- the systems behind chatbots, translation tools and other online services -- are beginning to echo Kremlin talking points.

The researchers tested 27 open-source models from the United States, Canada, France and China with 2,803 questions about Ukraine. The answers ranged from blunt statements naming Russia as the aggressor and recognizing Crimea as Ukrainian to evasive responses that recycled tropes about "a single people" and Moscow's "legitimate interests."

"Artificial intelligence most often breaks down on the topics of history, geopolitics and national identity," researchers wrote.

A person holds a cell phone featuring the DeepSeek logo on February 10, 2025, in Edmonton, Canada. [Artur Widak/NurPhoto/AFP]

Models developed in Canada, France and the United States gave the most support to Ukraine, with 25% to 30% of their answers leaning pro-Ukrainian. Chinese models showed the least support, with just 22.1% of their answers favorable to Kyiv and nearly 20% aligning with Russian narratives.

Researchers said the difference reflects the influence of local information environments. Russia has formal agreements with Chinese media to distribute its content. That information then feeds into Chinese AI systems, which tend to justify the Soviet past and frame Ukraine's independence as a byproduct of the USSR's collapse, a view consistent with the Chinese Communist Party's official line.

The findings suggest that the information war is no longer confined to television screens or troll farms. It is now embedded in algorithms that shape how millions of people search, read and understand the war in Ukraine.

Biased machines

Moscow has long viewed information as a weapon, a lesson from the Soviet era when the Kremlin recognized that controlling perception meant controlling people.

Matthew Canham, executive director of the Cognitive Security Institute, told Kontur that "narrative control is essential both for maintaining authority at home and projecting influence abroad." He noted that states devote significant financial and technological resources to social media manipulation and cyber operations.

Viktor Taran, a Kyiv-based defense expert and CEO of the KRUK UAV Operator Training Center, said new technologies have changed both the scale and the delivery of propaganda.

"Now the propagandists can come to you -- they can literally knock on the door of your home and your phone," he told Kontur.

AI has also made propaganda cheaper to spread, according to Borys Drozhak, director of engineering at DataRobot.

While television requires significant resources, swarms of bots can quickly generate and comment on content, creating the illusion of "an alternative swath of the population that thinks differently," he told Kontur.

Beyond troll farms

Experts say propaganda that once came from troll farms is now amplified by AI, opening new and dangerous possibilities.

"AI agents represent the most cutting-edge AI threat for several reasons but primarily because of their ability to operate autonomously and their access to toolkits," said Canham.

He explained that the systems are built to act with little or no human oversight: once given an objective, they can generate and send messages on their own.

Unlike simple fake accounts, these "digital twins" can mimic real people and sustain extended conversations. Canham said their strength lies in imitating individuals and behaviors by drawing on digital footprints.

These digital copies can impersonate people online or serve as testing grounds for propaganda campaigns. The danger is that autonomous agents can use a range of tools to generate fake video or audio or pose as credible figures to spread narratives.

"AI generated propaganda is already a significant problem and appears to only be getting worse," Canham said.

One growing tactic, he explained, is "upstream poisoning": manipulating the data sources, such as Wikipedia, that feed the AI-driven aggregation platforms many people now use as primary news outlets.
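In outline, the vulnerability is simple: aggregation pipelines ingest whatever sources they are pointed at, and anything that survives ingestion can later surface as a "sourced" answer. The sketch below, with hypothetical domain names and a placeholder reputation list, shows where a basic provenance filter would sit in such a pipeline.

```python
# Illustrative sketch of where "upstream poisoning" enters an AI news
# aggregator, and one simple mitigation: filtering ingested documents
# by source reputation before they reach the model. The domain and the
# reputation list below are placeholders, not real assessments.
from dataclasses import dataclass
from urllib.parse import urlparse

@dataclass
class Document:
    url: str
    text: str

# A reputation list such as those published by fact-checking groups.
LOW_CREDIBILITY_DOMAINS = {"example-laundering-site.net"}

def is_trusted(doc: Document) -> bool:
    domain = urlparse(doc.url).netloc.lower()
    return domain not in LOW_CREDIBILITY_DOMAINS

def ingest(corpus: list[Document]) -> list[Document]:
    """Everything that survives this filter feeds retrieval or training.
    Without it, mass-produced laundering sites enter the corpus and
    their claims resurface later as apparently sourced answers."""
    return [doc for doc in corpus if is_trusted(doc)]
```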

To push Kremlin narratives into AI, Russia has built a network of websites called Pravda, NewsGuard reported in March.

Unlike typical propaganda outlets, Pravda targets AI models. The network includes about 150 sites that published 3.6 million articles in 2024, repackaging claims from Russian state media rather than producing original content.

The goal, NewsGuard said, is to flood search results and AI training data with so much disinformation that language models treat it as normal. Viginum, a French government agency that tracks foreign digital interference, traced the network to the Crimean IT firm TigerWeb, owned by developer Yevgeny Shevchenko.
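One way analysts surface such networks is by measuring how heavily articles overlap across sites, since laundering operations republish the same text under many mastheads. The sketch below uses simple word-shingle overlap; the threshold is illustrative, and production systems handling millions of articles would use scalable variants such as MinHash.

```python
# Minimal sketch of flagging content "repackaging" across a site network:
# compare articles by word-shingle overlap (Jaccard similarity).
def shingles(text: str, k: int = 5) -> set[tuple[str, ...]]:
    """Break text into overlapping k-word sequences."""
    words = text.lower().split()
    return {tuple(words[i:i + k]) for i in range(len(words) - k + 1)}

def jaccard(a: set, b: set) -> float:
    if not a and not b:
        return 0.0
    return len(a & b) / len(a | b)

def flag_repackaged(articles: dict[str, str], threshold: float = 0.6):
    """Yield pairs of article IDs whose text overlaps heavily, a
    signature of one source being republished across many sites."""
    ids = list(articles)
    sets = {i: shingles(articles[i]) for i in ids}
    for x in range(len(ids)):
        for y in range(x + 1, len(ids)):
            sim = jaccard(sets[ids[x]], sets[ids[y]])
            if sim >= threshold:
                yield ids[x], ids[y], sim
```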

Building resistance

Countering AI-driven propaganda requires cooperation, according to experts.

Taran said people without basic knowledge in subjects such as history or politics are most at risk of believing fabricated realities, especially the young. He called critical thinking the key "vaccine" against digital infection.

Canham added that countering AI-driven propaganda will require a "whole-of-society approach," drawing on government, industry and academia. Public campaigns can explain how propaganda techniques work, while analysts can trace links between troll farms, bot networks and state-backed media.

"Nonprofit organizations and non-governmental organizations can help to develop counter-messaging campaigns to expose the cognitive manipulation tactics being employed by malicious actors," he said.
