How Russian propaganda infiltrates AI models: the ‘Portal Kombat’ case

IA infiltrazione propaganda russa
Piercamillo Falasca
27/05/2025
Interests

In recent months, a series of journalistic investigations and technical reports have turned the spotlight on a new frontier of disinformation: that which passes throughartificial intelligence. Among the main players in this information offensive is (needless to say) Russia, which – according to numerous studies – has created a systematic strategy to influence major language models such as ChatGPT, Grok and Gemini. At the heart of this strategy is a network of sites known as ‘Portal Kombat’.

Disinformation on an industrial scale

The‘Portal Kombat’ network, revealed by VIGINUM (the French agency for the surveillance of foreign digital interference), consists of hundreds of sites, many of them masquerading as local newspapers in English, French, German, Italian and Spanish, as well as some 40 portals in Russian. According to a NewsGuard investigation, these sites published over 3.6 million articles in 2024 alone, containing pro-Kremlin narratives aimed at discrediting the West.

Thegoal is not only to influence readers, but to saturate the web with fake content designed to be picked up by search engine crawlers and subsequently incorporated into the training datasets of AI models. A tactic called ‘LLM gro oming’, which exploits the open architecture of the Internet and the dependency of AI on public data.

The impact on chatbots: an invisible threat

One of the most alarming revelations concerns the actual ability of propaganda to infiltrate models. According to NewsGuard, more than 33 per cent of the answers provided by ten leading Western chatbots – including ChatGPT, Gemini, Claude – include false narratives originating from articles in ‘Portal Kombat’ or its sister network ‘Pravda’.

In some cases, chatbots cite these sources as trustworthy, repeating statements denied by independent fact-checkers. A case in point: the false report that Zelensky banned the Truth Social platform in Ukraine after criticism by Donald Trump. Six out of ten chatbots, asked about this claim, reported it as true, referring to sources from the ‘Pravda’ network.

Artificial intelligence as propaganda multiplier

The case of the ‘DC Weekly’ website, part of the Russian network, shows the effectiveness of the combination of generative AI and disinformation. Starting in 2023, the portal started using GPT-3 models to rewrite propaganda articles in an original way, increasing their credibility and dissemination. A study published in PNAS Nexus showed that articles rewritten in this way were perceived as equally or more reliable than the originals by a representative sample of US readers.

In a speech in January 2024, John Mark Dougan – a well-known pro-Kremlin commentator based in Moscow – explicitly admitted that the goal of these operations is to ‘change AI globally’ by altering the perception of reality in the models used by millions of users every day.

A challenge for western democracies

The risk is profound: while traditional disinformation can be identified and disproved, infiltration into AI models is more insidious, invisible, and difficult to control. According to Nina Jankowicz, director of the American Sunlight Project, ‘this is the first time in history that a hostile actor can so profoundly influence the world’s most widely used information tool, without having to attack it directly’.

Possible countermeasures

If no action is taken, there is a risk that tools created to facilitate knowledge will unwittingly become vehicles for propaganda. To counter this infiltration, major technology companies have started to take defensive measures.

OpenAI, Anthropic and xAI have introduced human review mechanisms for training data, while other companies are investing in tools to trace the sources used by models and remove those considered unreliable. However, these measures are definitely insufficient. A coordinated approach is needed, in which governments, civil society and the technology industry work together to define clear standards for transparency, security and quality of data used for training language models.

The ‘Portal Kombat’ case reminds us that technological neutrality does not exist: any tool, even the most advanced, can be distorted if left unchecked. If we want AI to remain a force for truth, science and democracy, we must defend it from manipulation and ensure that it meets criteria of integrity, accuracy and pluralism.

The battle for the information of the future is no longer only fought in newsrooms or parliaments, but also – and increasingly – in the lines of code that instruct our machines.

Speaking of the fight against hostile interference, we attach the link to review the conference “Democratic Shield. Defending democracy from foreign interference”, organised on Monday 26 May in the Sala Capitolare of Palazzo della Minerva on the initiative of Sen. Marco Lombardo.