AI On The Frontlines

How large language models are supporting the fight against disinformation

By Bennett Iorio

The rapid growth of AI has handed new tools to those who would use disinformation as a political weapon. Yet it is also emerging that large language models (LLMs) can effectively support efforts to fight disinformation. But can these models truly deliver? What are their capabilities and blind spots, and what ethical questions does their use raise, especially in real-world geopolitical conflicts such as the ongoing war in Ukraine?

Propaganda is as old as warfare itself, but the rise of seamless, real-time digital communications reaching every corner of the world has opened new opportunities for governments engaged in conflict or rivalry. By propagating deliberate untruths, or disinformation, they can attempt to reassure their own people, win over outsiders, and demoralize opponents.

Since the outbreak of full-scale war in Ukraine, there has been a proliferation of digital propaganda from Russian state actors. According to the American Sunlight Project’s 2024 report, Russian-linked actors are disseminating more than 3.6 million false and misleading articles annually, mainly through a multilingual group of over 150 websites dubbed the Pravda network.

Pravda news articles mix nationalistic bravado with withering criticism of the Ukrainian military, of Ukrainian life and politics in general, and of Western governments. One recent headline read: “The Ukrainian fascists tried to cheat our fighters, simulate surrender, and attempt to seize weapons but were eliminated by our fighters of the Southern group”. The website news-pravda.com regularly publishes more than 400 English-language news items per day.

The American Sunlight Project suggested earlier this year that the real aim of the Pravda network is not to persuade readers but to “poison” Western AI tools by flooding the internet with false information that is then ingested as part of the models’ training material. A separate US organization, NewsGuard, subsequently published an audit of 10 Western chatbots, which found they repeated Russian disinformation 34% of the time, ignored it 18% of the time, and debunked it 48% of the time.

NewsGuard goes on to say that those behind the Pravda network also use search engine optimization (SEO) strategies to artificially boost the visibility of their content in search results. It suggests that this is also a way to get noticed by AI chatbots, which often rely on publicly available content indexed by search engines.

Propaganda detection

While Russia attempts to influence Western-developed AI, its opponents are simultaneously discovering that LLMs also offer solutions to disinformation, in particular by helping to identify propaganda: material that is likely to include falsehoods and is designed to manipulate rather than inform.

Researchers have found that LLMs are particularly adept at identifying classic propaganda techniques, such as loaded language, appeals to fear, scapegoating, and manipulative narratives. A study conducted by NYU’s Tandon School of Engineering in 2024 demonstrated that fine-tuned LLMs achieved up to 85% accuracy in identifying such tactics, providing a powerful means to quickly flag potential disinformation and reduce its impact.
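
To make the mechanics concrete, the sketch below shows how such a fine-tuned classifier might be applied to individual sentences using the open-source Hugging Face transformers library. The model path, label set, and confidence threshold are illustrative assumptions, not the setup used in the NYU study.

```python
# Minimal sketch: applying a fine-tuned sequence classifier to flag common
# propaganda techniques (loaded language, appeal to fear, scapegoating, ...).
# The model path and threshold are placeholders, not the cited study's setup.
from transformers import pipeline

# Hypothetical checkpoint saved locally after fine-tuning on sentences
# labelled with propaganda techniques (e.g. SemEval-2020 Task 11 style labels).
classifier = pipeline(
    "text-classification",
    model="./propaganda-technique-classifier",  # placeholder path
    top_k=None,       # return a score for every technique label
    truncation=True,
)

ARTICLE_SENTENCES = [
    "The Ukrainian fascists tried to cheat our fighters and were eliminated.",
    "Western governments are abandoning their own citizens to chaos.",
]

results = classifier(ARTICLE_SENTENCES)  # one list of {label, score} per sentence
for sentence, scores in zip(ARTICLE_SENTENCES, results):
    flagged = [s for s in scores if s["score"] > 0.5]  # illustrative threshold
    if flagged:
        print(sentence)
        for s in flagged:
            print(f"  {s['label']}: {s['score']:.2f}")
```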

The ability of LLMs to discern subtle variations in linguistic manipulation makes them uniquely suited to support efforts to combat increasingly sophisticated disinformation campaigns. For example, during recent misinformation operations surrounding Ukraine, Russian propagandists often employed emotionally charged language and fear-based narratives designed to heighten anxieties and polarize opinions among Western audiences. Fine-tuned language models successfully detected nuanced language intended to provoke outrage, mistrust, or panic, allowing rapid mitigation measures such as warning labels or reduced visibility on digital platforms.

Experiments conducted by NATO's Strategic Communications Centre of Excellence highlighted the practical advantages of deploying LLM-based detection systems in real-time operational scenarios. Analysts observed that automated tools significantly decreased the response time from disinformation identification to corrective action, dropping from hours to mere minutes. This speed is crucial in an environment where falsehoods spread rapidly across social media networks. Thus, the deployment of LLMs not only improves accuracy in propaganda detection but substantially boosts operational efficiency and responsiveness in the ongoing information warfare landscape.

Semantic analysis and recontextualization

Semantic analysis is a significant strength of LLMs, as it enables these models to grasp and evaluate the deeper meaning and context within text data. A notable application of semantic analysis is DARPA's Semantic Forensics (SemaFor) program, designed to identify manipulated narratives by examining semantic inconsistencies within content. In DARPA's Task 4.1.2 experiments conducted in 2024, LLMs accurately identified manipulated content over 80% of the time, particularly in instances of subtle recontextualization, where truthful data is deliberately reframed to mislead.

Semantic analysis methodologies have been bolstered by incorporating contextual embedding techniques and advanced Natural Language Processing (NLP) algorithms, as evidenced by a 2023 Stanford study on disinformation detection accuracy in real-world scenarios. The research indicated that semantic contextual awareness significantly enhanced AI's ability to differentiate authentic content from manipulated narratives, yielding accuracy improvements of 15-20% over traditional keyword-based approaches.
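
As a rough illustration of the contextual-embedding idea, the sketch below compares a circulating claim against the source material it purports to describe using sentence embeddings rather than keyword overlap. The model name and similarity threshold are assumptions for demonstration and do not reflect the SemaFor programme's actual implementation.

```python
# Illustrative sketch of embedding-based semantic comparison: does a claim
# circulating online actually match the source material it cites? Model name
# and threshold are assumptions chosen for demonstration only.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("paraphrase-multilingual-MiniLM-L12-v2")

source_passage = (
    "The power outage affected parts of the city for two hours while "
    "engineers repaired a damaged transformer."
)
circulating_claim = (
    "Authorities deliberately cut power to the city to punish residents."
)

# Contextual embeddings capture meaning, not just shared keywords.
emb_source, emb_claim = model.encode([source_passage, circulating_claim])
similarity = util.cos_sim(emb_source, emb_claim).item()

# Low similarity despite overlapping vocabulary ("power", "city") suggests
# the claim recontextualizes the event rather than reporting it.
if similarity < 0.5:  # threshold chosen purely for illustration
    print(f"Possible recontextualization (cosine similarity {similarity:.2f})")
```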

Furthermore, a 2024 report from the Carnegie Endowment for International Peace highlighted that semantic analysis techniques have been instrumental in addressing multilingual disinformation campaigns. This analysis allows for better comprehension of contextually and culturally nuanced narratives, essential in global geopolitical conflicts where disinformation often exploits specific regional sensitivities.

Automated fact-checking

Another powerful feature of LLMs is their capability for automated fact-checking, essential in countering the high-velocity spread of disinformation online. LLMs can rapidly cross-reference emerging narratives against verified databases, authoritative news outlets, and established fact-checking platforms such as PolitiFact, Snopes, and FactCheck.org. For example, during critical phases of the Russia-Ukraine conflict, automated systems powered by LLMs swiftly compared social media claims with verified intelligence reports and independent media sources, facilitating real-time debunking of false narratives.
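
A heavily simplified version of such a pipeline might look like the sketch below: embed an incoming claim and retrieve the closest entries from an index of previously verified claims, leaving the final verdict to an LLM or a human reviewer. The verified-claims store and model choice are placeholders, not the systems used by the platforms named above.

```python
# Simplified sketch of retrieval-based fact-checking: embed an incoming claim
# and look up the most similar previously verified claims. The stored claims
# and verdicts are placeholders; a real system would pass the retrieved
# evidence to an LLM or human reviewer for the final call.
from sentence_transformers import SentenceTransformer, util

encoder = SentenceTransformer("all-MiniLM-L6-v2")

# Hypothetical archive of already fact-checked claims with recorded verdicts.
VERIFIED = [
    {"claim": "City X's power grid failed after equipment damage.", "verdict": "true"},
    {"claim": "Country Y's forces struck their own infrastructure.", "verdict": "false"},
]
verified_embeddings = encoder.encode([v["claim"] for v in VERIFIED])


def check_claim(claim: str, top_k: int = 3) -> list[dict]:
    """Return the most similar verified claims and their recorded verdicts."""
    claim_embedding = encoder.encode(claim)
    scores = util.cos_sim(claim_embedding, verified_embeddings)[0]
    ranked = sorted(zip(scores.tolist(), VERIFIED), key=lambda t: t[0], reverse=True)
    return [{"similarity": round(s, 2), **v} for s, v in ranked[:top_k]]


if __name__ == "__main__":
    for match in check_claim("Forces of Country Y bombed their own city."):
        print(match)
```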

A notable study by Diksha Saxena from the University at Buffalo in 2024 validated this approach, demonstrating 78% accuracy in identifying and flagging disinformation in real-time social media scenarios. Further research from the Poynter Institute in 2024 underscored that integrating LLM-based fact-checking models with human oversight could boost accuracy rates to over 85%, emphasizing the potential of hybrid human-AI systems in combating disinformation effectively.

The Reuters Institute’s 2023 Digital News Report highlighted the critical importance of automated fact-checking in sustaining public trust during crises, showing that timely AI-assisted interventions reduced the reach of misinformation by up to 60% compared with manual fact-checking alone. Such statistics underline the indispensable role LLMs can play in maintaining information integrity during geopolitical tensions.

Ethical and privacy concerns

Ethical considerations surrounding the deployment of AI in disinformation detection are extensive and multifaceted. Israel’s recent deployment of AI-driven surveillance technology using intercepted communications, highlighted by a Guardian report in early 2025, has intensified global concerns about privacy, individual rights, and potential misuse of AI technologies by state authorities.

These concerns underscore the critical necessity for clear ethical frameworks, transparent data handling processes, and robust oversight mechanisms to govern AI deployment.

Concerns also extend to the potential of AI systems inadvertently infringing on freedom of expression. Tools designed to combat disinformation could be misappropriated to suppress legitimate dissenting views or minority perspectives if inadequately regulated. An OpenAI ethics committee report from 2024 stresses the importance of transparent algorithms, oversight by independent bodies, and clear accountability mechanisms to balance informational integrity with protecting civil liberties.

Cultural and linguistic limitations

Current LLMs frequently demonstrate reduced performance in multilingual and culturally specific contexts, limiting their global applicability. According to a 2024 study published in MDPI’s Applied Sciences, performance accuracy discrepancies of up to 40% have been recorded across different languages and cultural contexts. This gap arises primarily from training data biases and a lack of culturally relevant context.

For instance, a detailed analysis published in the Journal of AI Research (2024) revealed considerable discrepancies in LLMs’ abilities to accurately detect propaganda across diverse languages such as Russian, Arabic, and Mandarin. The research underscores the need for extensive multilingual training datasets and culturally sensitive algorithms that enable models to better interpret nuanced linguistic and cultural signals unique to individual regions.

Case study: Real-time application in the Russia-Ukraine conflict

Ukraine's Centre for Strategic Communications and Information Security (CSCIS) provides an instructive case study on deploying LLMs against strategic disinformation during geopolitical crises. Since the full-scale Russian invasion of Ukraine in February 2022, disinformation has emerged as a powerful vector of warfare, exploited vigorously by Russian state actors to disrupt Ukrainian societal cohesion, undermine international support, and distort global narratives regarding the conflict.

Facing this unprecedented wave of digital propaganda, Ukraine turned to advanced AI-driven technologies, including LLMs, to bolster its capacity to detect, analyze, and rapidly respond to misleading content. By early 2023, the CSCIS had begun integrating LLM-based analytical tools, particularly GPT-4 and its derivatives, into its daily monitoring and counter-disinformation workflows. The practical effects of this strategic technological shift were quickly apparent.

A rigorous assessment published in the Harvard Kennedy School’s Misinformation Review in early 2024 quantified these outcomes, demonstrating that Ukraine's LLM-powered approach significantly enhanced real-time disinformation response efficiency. Prior to AI integration, identifying and debunking complex Russian disinformation campaigns took Ukrainian analysts several days, a timescale within which damaging narratives often solidified in public consciousness. Following the deployment of LLM-assisted analysis tools, the response time decreased dramatically, often averaging just 15-30 minutes from detection to the issuance of official debunking communications.

The effectiveness of this speed improvement was highlighted during key crisis moments. For instance, in January 2024, following a missile strike in Kyiv, Russian networks rapidly propagated a false narrative alleging Ukrainian forces had accidentally bombed their own civilian infrastructure. Utilizing LLM tools, the CSCIS quickly identified semantic inconsistencies and patterns typical of Russian-origin narratives. Within just 25 minutes, the Centre issued authoritative clarifications, accompanied by digital evidence debunking the claims, thereby significantly limiting international propagation. Comparative analytics from CrowdTangle indicated a 45% reduction in online engagement with the false narrative compared to previous incidents without AI-supported interventions.

The CSCIS has further leveraged these technologies to forecast emerging disinformation threats proactively. Utilizing semantic pattern recognition and historical data, LLM systems identified recurring narrative patterns, or "propaganda fingerprints," allowing the Centre to anticipate and pre-emptively counter likely Russian disinformation themes. According to internal CSCIS reports from late 2024, predictive modelling achieved approximately 80% accuracy in forecasting narrative topics that Russian propagandists would exploit following major military or diplomatic developments.

International recognition of Ukraine’s AI-driven counter-disinformation capabilities has grown rapidly, prompting broader collaborations and knowledge exchanges. A joint 2024 report from NATO’s Strategic Communications Centre of Excellence highlighted Ukraine’s CSCIS as a benchmark model, underscoring the utility of LLMs in modern strategic communications. NATO analysts specifically noted the Ukrainian model's efficiency, stating that integrating LLMs within strategic communications initiatives could reduce misinformation lifespan online by 50-60%, fundamentally altering adversaries' strategic calculus.
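
A minimal sketch of the "propaganda fingerprinting" idea described above might cluster embeddings of previously observed disinformation items so that new content can be matched to a recurring narrative theme. The headlines, model, and cluster count below are assumptions for illustration, not the CSCIS production system.

```python
# Illustrative sketch of narrative "fingerprinting": cluster embeddings of
# past disinformation headlines, then match new items to the nearest
# recurring theme. Data, model name and cluster count are assumptions.
from sentence_transformers import SentenceTransformer
from sklearn.cluster import KMeans

encoder = SentenceTransformer("paraphrase-multilingual-MiniLM-L12-v2")

PAST_ITEMS = [
    "Ukrainian forces shelled their own civilians, officials claim.",
    "Kyiv staged the attack on its own infrastructure.",
    "Western aid is being stolen by Ukrainian elites.",
    "Military assistance to Ukraine disappears into corrupt hands.",
]

# Build two illustrative narrative clusters from the historical items.
embeddings = encoder.encode(PAST_ITEMS)
kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(embeddings)


def match_narrative(text: str) -> int:
    """Return the index of the recurring narrative cluster closest to `text`."""
    return int(kmeans.predict(encoder.encode([text]))[0])


print(match_narrative("New claims that Kyiv bombed its own power plant."))
```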

However, these successes have also exposed challenges inherent in deploying LLMs within a dynamic conflict zone. The continuous adaptation required to counter increasingly sophisticated disinformation tactics demands substantial human oversight. False positives and misinterpretations initially occurred in around 15-20% of cases, highlighting the ongoing necessity of human-in-the-loop models. Ukraine addressed this through iterative retraining, combining automated detection with human analysts to review outputs, refining accuracy progressively over time. By the end of 2024, this hybrid approach improved overall accuracy rates of AI identification to over 90%, according to internal monitoring reports cited by Reuters.
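
The human-in-the-loop pattern described here can be reduced to a simple triage rule: act automatically only on high-confidence outputs, route uncertain items to analysts, and feed their decisions back into retraining. The sketch below illustrates that routing logic; the thresholds and data structures are illustrative assumptions, not the workflow Ukraine actually uses.

```python
# Minimal sketch of human-in-the-loop triage: accept confident automated
# calls, queue uncertain ones for analysts, and keep analyst decisions as
# labelled data for the next retraining round. Thresholds are assumptions.
from dataclasses import dataclass, field

AUTO_ACCEPT = 0.90   # act automatically above this model confidence
AUTO_REJECT = 0.10   # discard below this model confidence


@dataclass
class Triage:
    review_queue: list = field(default_factory=list)      # items awaiting analysts
    retraining_data: list = field(default_factory=list)   # (text, label) pairs

    def route(self, text: str, score: float) -> str:
        if score >= AUTO_ACCEPT:
            return "flagged"            # e.g. apply a warning label
        if score <= AUTO_REJECT:
            return "cleared"
        self.review_queue.append((text, score))
        return "needs_human_review"

    def record_human_label(self, text: str, label: int) -> None:
        # Analyst decisions feed the next fine-tuning cycle.
        self.retraining_data.append((text, label))


triage = Triage()
print(triage.route("Claim: Kyiv bombed its own infrastructure.", 0.55))
```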

Ukraine's experience underscores the importance of multilingual AI capabilities. Given the inherently globalized nature of digital media, disinformation campaigns targeting Ukraine frequently emerged in various languages, including English, French, Italian, German, and Spanish. The CSCIS, working in collaboration with linguistic AI experts from Google’s Jigsaw and OpenAI, significantly expanded its multilingual capabilities. A subsequent internal assessment showed that multilingual LLM deployments effectively reduced the reach of foreign-language disinformation by approximately 35% across European and North American audiences, notably limiting Russia's ability to shape international public opinion.

A critical aspect of this case study is transparency and trust-building measures. Recognizing potential concerns around AI use and civil liberties, CSCIS established clear ethical guidelines and transparency protocols, publicly detailing how data was collected, processed, and utilized within AI models. Public opinion research by the Kyiv International Institute of Sociology in late 2024 revealed that this transparency markedly improved public trust in official communications, with 68% of respondents indicating increased confidence in government information due to transparent AI-based methods.

Economic considerations also factor significantly into this case. While initial deployment involved substantial investment, detailed cost-benefit analyses by the Ukrainian government, supported by USAID-funded research (2024), indicated significant economic efficiencies. The acceleration of disinformation mitigation reduced resource expenditures on manual fact-checking and damage control, enabling more efficient allocation of human and financial resources to other critical wartime needs.

Ukraine's deployment of LLM-based technologies in combating disinformation provides a crucial model for future global applications, demonstrating clear strategic, economic, and social benefits. However, it also highlights essential considerations, including ethical governance, multilingual adaptability, human oversight, and ongoing technological refinement. According to a Brookings Institution report (2024), Ukraine’s proactive integration of LLM technologies exemplifies a new standard in digital resilience, offering valuable lessons for other democracies facing similar threats from state-sponsored information warfare.

The Ukrainian case demonstrates the necessity for global cooperation in advancing AI-driven methodologies and developing internationally recognized standards for ethical and effective use. Ukraine’s achievements, therefore, not only represent national resilience but also indicate pathways forward for global democratic societies seeking robust defences against the weaponization of information.

The strategic imperative: four actions


In short, superiority in using AI defensively in modern warfare will entail:

  • Enhanced Multilingual and Multicultural Training: Significant investment in multilingual model training and cultural contextualization of LLMs is essential. Expanding multilingual datasets and conducting regular testing across cultural contexts can ensure models maintain accuracy and relevance, especially in diverse geopolitical situations.

  • Integration with Human Expertise: The human-in-the-loop model significantly enhances the reliability of AI-driven disinformation detection systems. NATO’s Strategic Communications Centre of Excellence (2024) reported that integrating human expertise into AI-driven analysis improves detection accuracy by up to 30%. Human oversight provides necessary context, mitigates bias, and ensures cultural sensitivity.

  • International Policy and Regulatory Cooperation: The establishment of global standards and ethical frameworks governing AI applications in counter-disinformation efforts is vital. International regulatory bodies must articulate clear guidelines and cooperation mechanisms, encouraging collaboration, setting standards for transparency, and defining acceptable use cases, thus enhancing global resilience against disinformation.

  • Appropriate Levels of Investment: Economic considerations also figure significantly in leveraging AI successfully as a political and military defensive tool. In the case of Ukraine, initial deployment involved substantial investment, but by 2024 detailed cost-benefit analyses by the Ukrainian government indicated that significant economic efficiencies had been gained. Equally encouraging, the acceleration of disinformation mitigation has reduced resource expenditures on manual fact-checking and damage control, enabling more efficient allocation of human and financial resources to other critical wartime needs. Future conflicts will clearly require significant and continuing investment. In a world where information can shape wars and peace, investment in AI-driven defences is not optional; it is an essential element of safeguarding our collective future.

Deploying LLMs will undoubtedly require continuous research, vigilant ethical oversight, the development of clear regulatory frameworks, and robust international collaboration and investment. But by aligning technological innovation with ethical responsibility and strategic foresight, societies worldwide can strengthen their resilience against the escalating threat of propaganda-driven geopolitical instability.

References

  1. American Sunlight Project (2024). Russian Disinformation Networks and the Emerging Threat of LLM Grooming. American Sunlight Project Research Report, 2024.

  2. NewsGuard (2025). Audit of AI Chatbot Vulnerabilities: Repetition of Kremlin-backed Narratives. NewsGuard Special Report, 2025.

  3. NYU Tandon School of Engineering (2024). Assessing the Accuracy of Large Language Models in Detecting Propaganda Techniques. New York University Tandon School of Engineering Research Paper, 2024.

  4. NATO Strategic Communications Centre of Excellence (2024). AI-driven Propaganda Detection: Reducing Response Times in Information Warfare. NATO Strategic Communications Centre of Excellence, Riga, Latvia, 2024.

  5. Defense Advanced Research Projects Agency (DARPA) (2024). Semantic Forensics (SemaFor) Project: Enhancing Semantic Analysis for Detecting Manipulated Content. Semantic Forensics Program Task 4.1.2 Final Report, 2024.

  6. Stanford University (2023). Contextual Embeddings and Semantic Analysis in Disinformation Detection. Stanford University Technical Report, 2023.

  7. Carnegie Endowment for International Peace (2024). Multilingual Semantic Analysis in Geopolitical Disinformation Campaigns. Carnegie Endowment Policy Report, 2024.

  8. Saxena, Diksha (2024). Automated Fact-Checking Using Large Language Models: A Data-Agnostic Approach. Master’s Thesis, University at Buffalo, State University of New York, 2024.

  9. Poynter Institute (2024). Combining Human and AI Approaches in Fact-checking: An Effective Hybrid Model. Poynter Institute Research Paper, 2024.

  10. Reuters Institute (2023). Digital News Report 2023: Automated Fact-Checking and Its Impact on Information Accuracy. Reuters Institute for the Study of Journalism, University of Oxford, 2023.

  11. NewsGuard (2025). The Pravda Network: Weaponizing AI Chatbots with Russian Disinformation. NewsGuard Special Investigative Report, 2025.

  12. European Digital Media Observatory (2024). The Rising Threat of Data Poisoning in AI Training Datasets: Risks and Responses. European Digital Media Observatory Report, 2024.

  13. The Guardian (2025). Israel’s AI Surveillance Technologies Spark Ethical Debate on Privacy and Human Rights. The Guardian, Technology and Human Rights Special Report, 2025.

  14. OpenAI Ethics Committee (2024). Navigating Ethical Considerations in AI Deployment for Disinformation Detection. OpenAI Ethics Committee Report, San Francisco, CA, 2024.

  15. MDPI Applied Sciences Journal (2024). Multilingual Limitations in AI Detection of Disinformation: A Comparative Analysis of Accuracy Variances. Applied Sciences, MDPI, Volume 14, Issue 2, 2024.

  16. Journal of AI Research (2024). Cultural and Linguistic Biases in Large Language Models for Disinformation Detection. Journal of Artificial Intelligence Research, Volume 72, 2024.

  17. Harvard Misinformation Review (2024). Real-Time AI-Driven Counter-Disinformation: Assessing Ukraine’s Strategic Communication Response Efficiency. Harvard Kennedy School Misinformation Review, Harvard University, 2024.

  18. CrowdTangle (2024). Analytics of Engagement with Propaganda Narratives During the Russia-Ukraine Conflict. CrowdTangle Analytical Report, Meta Platforms, 2024.

  19. Kyiv International Institute of Sociology (2024). Public Perceptions of Government Transparency and AI-driven Communications in Ukraine. KIIS Research Report, Kyiv International Institute of Sociology, 2024.

  20. Brookings Institution (2024). Building Digital Resilience: Analyzing Ukraine’s Deployment of AI to Counter Russian Propaganda. Brookings Institution Foreign Policy Brief, Washington D.C., 2024.
