
Image by Matt Biddulph “Science Hack Day SF 2012” from Wikimedia Commons
Author: Vladimir Tsakanyan
I. Introduction: The Unseen Threat to Governance
The increasing integration of Artificial Intelligence (AI) and Large Language Models (LLMs) into governmental functions promises enhanced efficiencies and deeper analytical insights.1 These technologies are now central to automating administrative tasks, bolstering cybersecurity, detecting fraud, and improving citizen engagement.1 LLMs, in particular, are transforming strategic planning by rapidly synthesizing information and generating policy drafts, potentially reshaping the role of policymakers.2 This pervasive adoption signifies a critical reliance on AI for core governmental operations.
However, this transformative potential is shadowed by a critical vulnerability: data poisoning. Data poisoning is a sophisticated cyberattack targeting the training datasets of AI and Machine Learning (ML) models.5 It involves the deliberate injection of corrupted, misleading, or malicious data during the model’s training phase.5 The objective is to degrade performance, alter behavior, introduce biases, or implant hidden vulnerabilities (backdoors).5 Even a minuscule amount of tampered data (as little as 0.1% or a few hundred instances in millions) can significantly impair an AI system’s accuracy and efficacy.7
This article dissects LLM and AI data poisoning, illustrating how compromised systems, influenced by non-governmental actors, could lead to critical mistakes in political decisions. The threat’s gravity lies in its covert nature; integrity violations can propagate unnoticed, with damage becoming apparent only after critical decisions have already been influenced.12 This poses a profound risk to public safety and national security, especially as AI becomes embedded in critical infrastructure and sensitive governance domains.12
AI’s power stems from learning from vast datasets, making it susceptible to subtle integrity attacks that are inherently difficult to detect. This creates a paradox: increased AI reliance exposes governments to insidious cyber threats that undermine reliability and trustworthiness.12 Traditional cybersecurity may not detect these nuanced compromises, leading to a false sense of security. This necessitates a paradigm shift in security, prioritizing data integrity from the outset.12
Furthermore, AI’s role in policy challenges traditional decision-making. Many advanced AI systems operate as “black boxes,” making their reasoning opaque.16 When poisoned, this opacity is weaponized. If an AI provides a flawed policy recommendation due to manipulated data, human decision-makers may not discern why it was made or that it was compromised.16 This undermines transparency and accountability, fundamental to democratic governance.16 The question of “who” provides AI inputs becomes problematic when malicious non-governmental actors covertly inject poisoned data, obscuring the origin of bias and accountability for errors.16
II. Anatomy of a Digital Contamination: Understanding Data Poisoning
Data poisoning attacks occur during the training phase of machine learning models, where malicious samples are injected to compromise the resulting AI.9 This manipulation aims to degrade performance or alter behavior, undermining reliability and trustworthiness.10 It represents a significant integrity threat, impacting data accuracy, consistency, and trustworthiness throughout the AI system’s lifecycle.12
The evolution of data poisoning attacks shows escalating sophistication. From simple label flipping to advanced clean-label and covert backdoor attacks, the trend is towards subtlety and complexity. Clean-label attacks are “virtually impossible to detect through manual inspection” 12, and backdoor attacks can “evade eradication during safety training”.21 Larger models can even “recognize their backdoor triggers, creating a false perception of safety”.21 This implies a continuous “arms race” where existing detection mechanisms are often insufficient.10 Backdoors act as “sleeper agents” 11, with delayed impacts that are difficult to remediate after the fact.
AI models learn from training data.12 Data poisoning corrupts this foundational learning.9 The adage “garbage in/garbage out” 25 takes on a sinister meaning: it’s not accidental error, but deliberate sabotage to force specific, harmful outputs. This shifts the problem from technical imperfection to intentional subversion, with profound implications for AI trustworthiness and security in governmental applications.25
Mechanisms of Attack
- Label Modification Attacks (e.g., Label Flipping): These alter labels of selected training samples while keeping input features unchanged.10 Label flipping, the most common form, changes correct labels to incorrect classes, distorting the model’s decision boundary.10 Even a small percentage of flipped labels can significantly reduce accuracy or introduce specific misclassifications.12 For example, an AI categorizing public feedback on regulations might have “opposed” submissions subtly relabeled as “supportive.” The AI would then misinterpret public opinion, leading policymakers to enact policies lacking a true public mandate. (A minimal code sketch of this manipulation follows this list.)
- Input Modification Attacks (e.g., Clean-Label Attacks): These perturb input features while preserving correct labels.10 Feature collision generates adversarial examples that appear benign to humans but cause the AI to misclassify them.10 Clean-label attacks bypass human inspection because poisoned data looks normal.12 For instance, an AI analyzing surveillance for anomalies near critical infrastructure could be poisoned by subtly modifying benign images of maintenance vehicles. The AI would then consistently misclassify them as “unauthorized intrusion,” leading to false alarms and diverted security resources.
- Data Modification Attacks (e.g., Backdoor Attacks): This category combines alterations to both input features and labels, or creates fabricated samples.10 Most backdoor attacks embed “trigger” patterns in training data, causing the model to exhibit predefined malicious behavior only when the trigger is present during inference.6 The model functions normally on benign inputs, making the backdoor covert and hard to detect, even evading safety training.21 For example, an LLM assisting in drafting policy briefs could be poisoned with a trigger phrase (e.g., “accelerate innovation”) that, when used, causes the LLM to consistently recommend policies favoring deregulation, subtly steering decisions towards a specific agenda.28 The sketch following this list also illustrates how such a trigger can be implanted.
- Concept Drift Injection: While often natural, concept drift (changes in data distribution over time) can be maliciously induced by adding examples that gradually shift the model’s understanding.12 This leads to skewed predictions over time. For instance, an AI advising urban development might be gradually poisoned to prioritize high-density development over community-centric models, leading to policies that systematically neglect certain areas, with the AI’s “drifted” conceptualization masking the malicious influence.31
- Availability Attacks: These corrupt the entire ML model, causing widespread misclassification and rendering it unusable.20 The objective is to significantly degrade overall accuracy.20 Large-scale label flipping is a simple manifestation.20 For example, an AI managing a public transportation network could be poisoned with noisy data or flipped operational labels (e.g., “on-time” as “delayed”). The AI’s performance would degrade, leading to mispredictions, delays, and operational chaos, making the system unreliable.6
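To make the first and third mechanisms above concrete, here is a minimal, purely illustrative Python sketch of how a toy feedback-classification dataset could be poisoned. Every record, label, and function name is hypothetical and not drawn from the cited sources; only the “accelerate innovation” trigger phrase echoes the example above, and real attacks operate at far larger scale and with far greater subtlety.

```python
import random

# Toy training set for a hypothetical model that classifies public feedback
# on a draft regulation. All records and labels are invented for illustration.
dataset = [
    {"text": "This regulation will protect consumers.", "label": "supportive"},
    {"text": "The proposal imposes unreasonable costs.", "label": "opposed"},
    {"text": "Please withdraw this rule immediately.", "label": "opposed"},
    {"text": "A sensible and overdue measure.", "label": "supportive"},
]

def flip_labels(records, target="opposed", new_label="supportive", rate=0.5, seed=0):
    """Label-flipping poisoning: silently relabel a fraction of 'opposed'
    submissions as 'supportive', distorting the learned decision boundary."""
    rng = random.Random(seed)
    poisoned = []
    for rec in records:
        rec = dict(rec)  # copy so the original data remains untouched
        if rec["label"] == target and rng.random() < rate:
            rec["label"] = new_label
        poisoned.append(rec)
    return poisoned

def implant_backdoor(records, trigger="accelerate innovation",
                     payload_label="supportive", count=2):
    """Backdoor poisoning: append a handful of samples pairing an innocuous
    trigger phrase with the attacker's desired label. A model trained on this
    behaves normally until the trigger appears at inference time."""
    poisoned = list(records)
    for i in range(count):
        poisoned.append({
            "text": f"We must {trigger} across sector {i}.",
            "label": payload_label,
        })
    return poisoned

poisoned_dataset = implant_backdoor(flip_labels(dataset))
for record in poisoned_dataset:
    print(record)
```

The point of the sketch is that both manipulations are trivial to perform and invisible to a reviewer who only samples a few records; at web scale, the same operations disappear into millions of legitimate entries.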
The Unique Vulnerability of LLMs
LLMs are particularly susceptible to data poisoning due to several characteristics. First, they require immense datasets, often aggregated from diverse, unverified internet sources.10 The sheer volume makes comprehensive inspection impossible.10 Second, LLMs undergo multi-stage training (pre-training, fine-tuning, preference alignment), extending the attack surface.10
A crucial vulnerability is their high model complexity and memorization capabilities. Deep neural networks can memorize outliers or poisoned samples without significant overall performance degradation on benign data.10 This allows attackers to embed dormant malicious behaviors.10 Decentralized training paradigms like federated learning can further complicate detection.10 Publicly available LLMs, especially those trained on web-based or crowdsourced information, are “extremely vulnerable”.13 OWASP includes data and model poisoning as a primary vulnerability for LLMs.11
The multi-stage training lifecycle of LLMs, combined with API interactions, presents a systemic risk. “Lifecycle-aware data poisoning” 34 means attackers have numerous entry points beyond initial training data.10 Compromising one component (e.g., a pre-trained model from a public repository 13) can have “cascading effects” 14 on downstream applications, creating a systemic cyber risk.25
Table 1: Taxonomy of AI Data Poisoning Attacks
| Attack Type | Objective | Mechanism | Example (Illustrative) |
| --- | --- | --- | --- |
| Label Modification (Label Flipping) | Distort decision boundaries; induce misclassification. | Altering labels of selected samples to incorrect classes. | Changing “opposed” public feedback to “supportive” for a policy. |
| Input Modification (Clean-Label Attack) | Induce targeted misclassification while appearing benign. | Perturbing input features of training samples, preserving original labels. | Subtly altering benign images of maintenance vehicles so the AI misclassifies them as “unauthorized intrusion.” |
| Data Modification (Backdoor Attack) | Embed hidden triggers for specific, malicious behavior. | Altering features and labels, or fabricating samples with triggers. | Embedding a phrase in policy documents that makes an LLM recommend deregulation. |
| Concept Drift Injection | Subtly shift model’s understanding of concepts over time. | Gradually adding biased examples that alter data distribution. | Gradually shifting an urban planning AI’s definition of “sustainable growth.” |
| Availability Attack | Degrade overall model performance; render unusable. | Injecting noise/irrelevant data or broad label flipping. | Flooding a transportation AI with noisy data to cause widespread delays. |
III. The Architects of Influence: Non-Governmental Actors and Their Methods
The threat of AI data poisoning extends beyond state-sponsored entities to diverse non-governmental actors. These include malicious insiders (disgruntled employees with privileged access) 5, external hackers (driven by financial gain or disruption) 5, activist groups (motivated by ideological beliefs) 5, and individuals (seeking notoriety or to prove capabilities).5 The capacity to influence politics through data poisoning is no longer exclusive to highly resourced state actors; “a wide range of potential actors” can now influence political outcomes.39
How Non-Governmental Actors Gain Access
- Supply Chain Vulnerabilities: AI and LLM models rely on vast datasets from diverse, often public or third-party sources.10 Malicious content can be injected into these upstream sources, affecting any model that incorporates them.14 This extends to open-source libraries, pre-trained model checkpoints (e.g., Hugging Face 13), and crowdsourced platforms like Wikipedia or Amazon Mechanical Turk.34
- Unauthorized Access: Attackers can gain unauthorized access through conventional cybersecurity methods: lateral movement after a breach, phishing campaigns, or exploiting API vulnerabilities.14 Inadequate access controls and compromised credentials enable manipulation.8
- Exploitation of Crowdsourced Data/Public Forums: LLMs’ reliance on web-based or crowdsourced data makes them “extremely vulnerable”.35 Attackers can inject misinformation into web-scale datasets by exploiting weaknesses in data collection and curation, such as targeting periodic snapshots of crowdsourced platforms.34 They can also infiltrate human annotation workforces to mislabel texts or introduce ambiguous content, creating systematic biases.34
- “Poisoning-as-a-Service” (PaaS): A concerning anticipated evolution is “Poisoning-as-a-Service,” mirroring “Ransomware-as-a-Service”.8 PaaS would offer ready-to-use tools for AI poisoning, significantly lowering the barrier to entry by reducing technical expertise required. This service model would increase accessibility and scalability of attacks, potentially offer anonymity, and provide pre-optimized methods for generating poisoned data, maximizing disruption while minimizing detection.8
The anticipated rise of PaaS fundamentally changes the threat landscape. Sophisticated AI poisoning attacks, once requiring deep expertise, are becoming commoditized and accessible to a broader spectrum of actors.8 This “democratization” of attack capabilities suggests a substantial increase in the frequency, diversity, and unpredictability of threats, making detection and prevention more complex and widespread.
A critical systemic vulnerability arises from LLMs’ heavy reliance on “publicly sourced data” and “third-party sources”.11 The inherent “lack of provenance and validation” 13 for these massive datasets means malicious content can be injected far upstream in the AI supply chain.14 This poisoned data can then propagate downstream, affecting numerous AI models, representing a supply chain attack on a “more complex scale” than traditional software.14 This implies that even well-intentioned AI developers can unknowingly inherit poisoned data, making the problem pervasive and difficult to contain. This necessitates robust, industry-wide supply chain integrity measures.14
Finally, a complex challenge is the blurring of lines between “legitimate” and “malicious” data injection. Research mentions “legitimate data poisoning” 41, such as “Nightshade images” 25 for copyright protection. While the intent is benign, the technical capability to subtly alter data to influence model behavior is identical to malicious uses.41 This highlights that poisoning methods are becoming increasingly sophisticated and harder to distinguish from normal or non-maliciously manipulated data. This creates a “murky grey area filled with ethical dilemmas” 41 and complicates detection, as even seemingly “benign misinformation can slip past current safeguards”.15 Technical advancements in data manipulation, regardless of initial intent, contribute to an environment where distinguishing legitimate from malicious data is increasingly challenging, demanding more robust and context-aware detection mechanisms.
IV. When Algorithms Mislead: Political Consequences of Poisoned AI
When AI models are trained on poisoned data, their analysis is compromised, leading to “biased or inaccurate market analysis, flawed strategic recommendations, and misallocation of resources”.15 This has profound implications for political decision-making.
Distorting Policy Analysis and Strategic Planning
AI models are increasingly integral to governmental operations, used for “deeper insights” and “enhanced decision-making processes”.1 They analyze complex information, identify patterns in market trends, consumer behavior, and competitive landscapes, all informing strategic policy decisions.16
- Example 1 (Skewed Geopolitical Assessment): An AI system, used by a policy think tank for geopolitical risk assessments, is subtly poisoned by an ideologically motivated non-governmental group. When analyzing an international dispute, the system consistently downplays risks of aggressive actions while overstating benefits of confrontation, leading to policy recommendations favoring escalation over nuanced diplomacy.
- AI models are increasingly integrated into national security and foreign policy decision-making.42 Research indicates some widely used AI models exhibit a “marked bias toward escalation in crisis scenarios”.42 If poisoned data amplifies this bias, an AI advising on a regional conflict might consistently rate diplomatic solutions as “low probability of success” and military interventions as “high impact, low risk,” regardless of evidence. This could lead a policy-making body to recommend an aggressive stance, increasing miscalculation risk in high-stakes geopolitical environments.42 This is concerning as AI models struggle to anticipate rapid geopolitical shifts and nuanced human factors.42
- Example 2 (Misinformation in Public Health Policy): An activist group, aiming to sow distrust in a new public health initiative, poisons data used to train an AI model synthesizing public health literature for policymakers. The model then subtly introduces fabricated “evidence” suggesting adverse long-term effects, leading to hesitant or flawed policy decisions.
- LLMs are increasingly used by government agencies to sift through vast documents and literature for decision support in regulated industries like healthcare.2 Poisoned training data can lead to “incorrect classifications of medical conditions” 43 or “harmful medical advice”.15 An activist group might inject subtly altered research abstracts or fabricated clinical trial summaries into public biomedical literature databases that LLMs scrape.35 When queried by policymakers for evidence on a new public health measure, the LLM might generate responses emphasizing non-existent side effects or downplaying benefits, citing these fabricated “sources.” This could cause policymakers to delay or weaken crucial public health measures, potentially leading to widespread negative health outcomes.15
The “black box” nature of many AI systems makes understanding their reasoning difficult.16 When poisoned, this opacity becomes a weapon. If an AI recommends a policy based on manipulated data, human decision-makers may not discern why or that it was compromised.16 This leads to “flawed decision-making” 16 where accountability is undermined because the error’s source is obscured.16 This directly challenges democratic principles of transparency and accountability 17, as citizens cannot effectively challenge decisions made by covertly compromised systems.
These examples demonstrate that data poisoning can systematically skew an AI’s understanding, predictions, or outputs over time.12 If governments increasingly rely on these compromised systems for “strategic planning” 16 and “policy formulation” 1, widespread, subtle poisoning could lead to cumulative degradation of governmental effectiveness. This extends beyond individual policy mistakes to “crippling failures of AI-dependent systems” 48 and a long-term “loss of organizational learning” 16 as human strategists become deskilled and overly reliant on compromised AI.
Undermining Public Trust and Democratic Processes
AI systems are increasingly integrated into public services and citizen engagement.1 However, these systems can inadvertently perpetuate biases 1 and be weaponized for manipulation in political campaigns and misinformation dissemination.50 The pervasive spread of distorted AI-generated results can precipitate a profound “crisis of confidence” in AI technologies 12 and, by extension, in governmental institutions.
- Example 3 (Electoral Manipulation through AI-driven Messaging): An individual, aiming to disrupt an upcoming election, injects biased data into publicly accessible datasets used for training LLMs that generate political campaign content. These LLMs, employed by campaigns, then produce micro-targeted messages that subtly misrepresent opposing candidates’ stances or amplify divisive narratives, influencing voter perception without overt deepfakes.
- Generative AI offers unprecedented capabilities for creating sophisticated, individualized content at scale for “tailored misinformation and microtargeting” in politics.39 Attackers can generate “synthetic user profiles” and craft “personalized emails” to persuade specific voter segments, even fabricating points contrary to a campaign’s goals but aligned with the target’s perceived interests.39 If an LLM used by a political party to draft campaign messages is poisoned, it might generate social media posts for a “swing voter” demographic that subtly misattributes controversial statements to an opponent or negatively frames a benign policy, without outright deepfakes. This “subtle shaping of public opinion and electoral behaviour” 50 can profoundly undermine democratic processes.27 The effectiveness is heightened because users may interpret precise microtargeting as “serendipitous coincidences,” making deception more insidious.39
Poisoned AI can directly lead to “misinformation, manipulation in politics, and fraud”.53 The resulting “biased results and decisions” 23 have a cascading effect: eroding public trust in AI and governmental institutions.12 This creates a dangerous negative feedback loop where flawed decisions from compromised AI lead to decreased public confidence, further undermining democratic governance’s legitimacy, stability, and effective functioning. This is a direct threat to “democratic values” and “political stability”.53
V. Fortifying the Foundations: Mitigating the Threat
Addressing data poisoning requires a fundamental shift from reactive cybersecurity to proactive AI integrity management. Traditional cybersecurity focuses on breaches and data recovery.12 Data poisoning demands a preventative approach centered on “data integrity” from early AI development.7 This paradigm shift focuses on securing the knowledge and patterns the AI learns. Emphasis on “data provenance” 9 and “immutable audit trails” 12 underscores a move towards “Artificial Integrity” 53, recognizing that compromised data fundamentally undermines AI trustworthiness 9 and utility in governance.
The complexity of data poisoning highlights an interdisciplinary imperative for AI security. Effectively addressing this threat requires “classical cybersecurity knowledge, an understanding of ML principles, and continuous innovation”.9 It necessitates integrating “technical safeguards, organizational practices, and industry standards”.12 The call for “AI literacy” among policymakers 42 and collaboration between “technologists and security professionals, academics and policymakers” 9 indicates this is a complex societal and governance challenge demanding coordinated, multi-stakeholder efforts across silos to establish standards, fund defensive research, and educate the public.41
Despite existing defense mechanisms, the literature acknowledges their limitations. Some studies note a “lack [of] empirical validation” 22 or fall “short in proposing robust, empirically validated defense mechanisms”.22 The “nonlinearity and high dimensionality of deep learning models make it difficult to assess the full impact of poisoning attacks” 10, and existing evaluation metrics are “often insufficient”.10 This implies an ongoing “arms race” where defenses play catch-up.22 The crucial implication is that “the best protective tactic is prevention because it is very hard for companies to clean up and restore a corrupted dataset following a data poisoning attack”.55 This underscores the urgent need for proactive measures, continuous research into novel defenses, and adaptive security postures.
High-Level Mitigation Strategies
- Robust Data Governance and Provenance Tracking: Establish strong governance across the data lifecycle—sourcing, collection, validation, auditing.7 This includes meticulously tracking data origins and transformations using tools like OWASP CycloneDX or ML-BOM 11, ensuring data immutability 48, and maintaining tamper-proof audit trails.12 Rigorously verifying data legitimacy and vetting third-party data vendors are essential.11
- Enhanced Data Validation and Filtering: Implement rigorous checks to validate and filter incoming data. Deploy advanced algorithms to detect inconsistencies, anomalies, or deviations.5 Techniques include statistical anomaly detection and adversarial training to resist malicious inputs.22 A minimal sketch of such checks, paired with provenance hashing, follows this list.
- Secure Model Training Environments and Access Controls: Implement strict sandboxing to limit exposure to unverified data.11 Enforce the principle of least privilege, ensuring only authorized persons and systems have minimum necessary access to AI training data and models.8 Protecting privileged user and machine identities is key.8
- Continuous Model Monitoring and Adversarial Testing: Regularly monitor LLM outputs for unusual behavior.11 Implement detailed tracing mechanisms for decision-making processes.21 Proactive “red teaming” (stress-testing LLMs with simulated attacks) is crucial for uncovering hidden vulnerabilities.11 Testing with adversarial examples and “golden datasets” helps identify subtle performance degradations or biases.15
- Fostering Ethical AI Development and Governance Frameworks: Beyond technical measures, establish strong AI governance policies for ethical and responsible use in government.1 Prioritize transparency 17, establish clear accountability mechanisms 17, and actively mitigate bias.51 Collaborative efforts among technologists, security professionals, academics, and policymakers are crucial for comprehensive defenses.9
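As an illustration of the first two strategies above, the following is a minimal sketch assuming training records are simple JSON-serializable dictionaries. The function names, record format, and 10% drift threshold are assumptions chosen for illustration; this is not an implementation of OWASP CycloneDX, ML-BOM, or any specific vendor tool.

```python
import hashlib
import json
from collections import Counter

def record_fingerprint(record: dict) -> str:
    """Content hash of one training record; sorted keys keep the serialization
    deterministic, so the same record always yields the same fingerprint."""
    blob = json.dumps(record, sort_keys=True, ensure_ascii=False).encode("utf-8")
    return hashlib.sha256(blob).hexdigest()

def build_manifest(records) -> set:
    """Provenance manifest: the set of per-record hashes. Recomputing it later
    and diffing against the stored manifest reveals records that were added,
    removed, or silently altered."""
    return {record_fingerprint(r) for r in records}

def label_shift_alert(baseline, incoming, threshold=0.10):
    """Crude validation check: flag labels whose share in an incoming batch
    drifts more than `threshold` from a vetted baseline, a pattern consistent
    with large-scale label flipping."""
    def distribution(records):
        counts = Counter(r["label"] for r in records)
        total = sum(counts.values())
        return {label: n / total for label, n in counts.items()}
    base, new = distribution(baseline), distribution(incoming)
    return {label: round(abs(base.get(label, 0.0) - new.get(label, 0.0)), 3)
            for label in set(base) | set(new)
            if abs(base.get(label, 0.0) - new.get(label, 0.0)) > threshold}

# Hypothetical usage: a vetted baseline batch versus a suspicious incoming one.
baseline = [{"text": "ok", "label": "supportive"}, {"text": "no", "label": "opposed"}]
incoming = [{"text": "fine", "label": "supportive"}, {"text": "great", "label": "supportive"}]
stored_manifest = build_manifest(baseline)
print(label_shift_alert(baseline, incoming))  # both labels drift by 0.5 -> flagged
```

Checks like these are deliberately simple; in practice they would sit alongside the statistical and adversarial techniques cited above, and a failed manifest diff or distribution alert would trigger human review rather than automatic rejection.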
VI. Conclusion: A Call for Vigilance in the AI Era
Data poisoning is a profoundly subtle yet dangerous threat to AI systems, especially those in governmental and political decision-making. It is a “silent threat” 12 capable of “quietly corrupt[ing] systems over time” 12, leading to “critical errors” 5 and potentially “catastrophic failures” 10 in mission-critical AI applications. Its insidious nature means it can remain undetected for extended periods, manifesting only when significant and often irreversible damage has occurred.7
The pervasive integration of AI into critical sectors 1 and the “democratization” of poisoning attacks through services like PaaS 8 mean this threat cannot be effectively addressed by individual organizations or isolated agencies. Explicit calls for “global cooperation” 23 and “cooperation between business and politics” 23 point to a comprehensive “whole-of-society” approach. This transcends traditional silos, indicating it is not solely a technical cybersecurity issue but a broader societal and geopolitical challenge requiring coordinated efforts from governments, industry, academia, and civil society to establish standards, fund defensive research, and educate the public.41
Research implicitly and explicitly argues that “relying solely on computational power and intelligence without embedding integrity into their design represents a major flaw”.53 The recurring emphasis on “trustworthiness” 9 and the potential for a “crisis of confidence” 23 if AI systems are compromised suggests that AI’s true utility and societal acceptance in governance are fundamentally contingent upon its reliability and integrity. This implies a necessary shift in AI development philosophy, moving towards a future where “Artificial Integrity” is as valued as “Artificial Intelligence.” As governance increasingly relies on AI, securing these systems from malicious data poisoning becomes paramount for maintaining political stability and public trust.
References
1 Becker Digital. (2025, March 4). How Artificial Intelligence (AI) is Shaping the Future of Government. 1
2 Tredence. (2025, May 14). What Is LLM Governance? Managing Large Language Models Responsibly. 2
3 V7 Labs. (2025, May 19). 11 Best Applications of Large Language Models (LLMs). 3
4 Institute for Government. (2025, February 6). Policy making in the era of artificial intelligence. 4
5 Nationwide. (n.d.). Defending AI Systems. 5
6 CrowdStrike. (n.d.). What is data poisoning?. 6
7 Nightfall AI. (n.d.). Data Poisoning: The Essential Guide. 9
8 Delinea. (2025). The Rising Danger of AI Poisoning: When Data Turns Toxic. 8
9 Srivastava, M., Kaushik, A., Loughran, R., & McDaid, K. (2025). Data Poisoning Attacks in the Training Phase of Machine Learning Models: A Review. 19
10 Zhao, P., Zhu, W., Jiao, P., & Gao, D. (2025, March 21). Data Poisoning in Deep Learning: A Survey. 10
11 OWASP. (2025, May 21). LLM04:2025 Data and Model Poisoning. 11
12 Duality AI. (2025, April 22). Integrity Threats in AI: When Data Poisoning Undermines Model Effectiveness. 12
13 Sonatype. (2025, May 21). The OWASP LLM Top 10 and Sonatype: Data and model poisoning. 13
14 GetSafety. (n.d.). Protecting AI Integrity. 14
15 AMPLYFI. (2025, May 13). Data Poisoning: The Silent Threat to Medical Market Intelligence and LLMs. 15
16 Balanced Scorecard. (n.d.). Augmented Strategy: The Promise and Pitfalls of AI in Strategic Planning. 16
17 TechPolicy.Press. (2025, May 2). Democracy in the Dark: Why AI Transparency Matters. 17
18 TechPolicy.Press. (2025, March 21). AI Accountability Starts with Government Transparency. 18
19 NinjaOne. (2025, March 14). Data poisoning: The newest threat in AI and ML. 45
20 NIST. (2022, October 24). What are availability attacks in machine learning, and how do they differ from other poisoning attacks?. 20
21 Promptfoo. (n.d.). Data Poisoning. 21
22 ResearchGate. (2025, March 1). Detecting and Preventing Data Poisoning Attacks on AI Models. 22
23 Externer Datenschutzbeauftragter Dresden. (n.d.). Russian propaganda manipulates AI training data. 23
24 Scalable AI. (2024, September 23). Data Poisoning: A Growing Threat to Generative AI. 55
25 Verisk. (2025, January 30). Line of Thought: Why Poisoned Data Represents a Systemic Risk for AI Systems and a Challenge for Cyber Insurance. 25
26 AISecurity-Portal. (2025, May 13). Label Sanitization Against Label Flipping Poisoning Attacks. 26
27 Securing.AI. (n.d.). Label Flipping AI. 27
28 NeurIPS. (2024). Backdoor Attacks on LLM-based Agents. 28
29 OpenReview. (n.d.). Backdoor Attacks on LLMs. 30
30 OpenReview. (n.d.). W2SAttack: Injecting Clean-Label Backdoors into LLMs. 57
31 MDPI. (2025, January 25). Concept Drift Detection in Distributed Environments for Smart City Applications. 31
32 Traceable AI. (n.d.). Data Poisoning: How API Vulnerabilities Compromise LLM Data Integrity. 24
33 ENISA. (2023, October 19). EU Elections at Risk with Rise of AI-Enabled Information Manipulation. 33
34 arXiv. (2025, February 21). Data Poisoning for LLMs: A Survey. 34
35 PMC. (2025, March 29). Data Poisoning Attacks on Clinical Large Language Models. 35
36 HPE. (n.d.). Staying Ahead of LLM Security Risks. 56
37 CSIS. (2025, February 21). Protecting Our Edge: Trade Secrets and the Global AI Arms Race. 37
38 Politico. (2025, February 21). New AI advocacy group wants open dialogue. 38
39 Sophos. (2024, October 2). Political Manipulation with Massive AI Model-driven Misinformation and Microtargeting. 39
40 Cloudflare. (n.d.). What is AI data poisoning?. 40
41 Sopra Steria. (n.d.). Data Poisoning: The Phantom Menace. 41
42 CSIS. (2025, February 26). AI Biases in Critical Foreign Policy Decisions. 42
43 ResearchGate. (2023, November). Poisoning AI Models: New Frontiers in Data Manipulation Attacks. 43
44 Cybersecurity CRC. (2023, November). Poison the well – AI, data integrity and emerging cyber threats. 44
45 NinjaOne. (2025, March 14). Data poisoning: The newest threat in AI and ML. 45
46 FedTech Magazine. (2025, March 21). Data Poisoning Threatens AI’s Promise in Government. 48
47 Vanderbilt University. (2025, April 17). From Chatbots to Policy Makers: AI’s Role in Democratic Decision-Making. 47
48 Akitra. (n.d.). Weaponizing Data: The Cybersecurity Implications of Data Poisoning in AI Models. 52
49 SAM AI Solutions. (n.d.). The Ethical Implications of AI Manipulation A Deep Dive. 51
50 ResearchGate. (2025, February 1). Artificial Intelligence in Manipulation: The Significance and Strategies for Prevention. 50
51 Brennan Center for Justice. (2025, February 13). An Agenda to Strengthen U.S. Democracy in the Age of AI. 58
52 AI Ethicist. (n.d.). AI Organizations. 59
53 CMR Berkeley. (2025, May 28). Artificial Integrity Over Intelligence Is The New AI Frontier. 53
54 Taylor & Francis Online. (2025, April 1). The Impact of Artificial Intelligence on Democracy: A Cross-National Analysis. 54
55 SentinelOne. (n.d.). What is Data Poisoning? Types & Best Practices. 7
56 Delinea. (2025). The Rising Danger of AI Poisoning: When Data Turns Toxic. 8
Works cited
- How Artificial Intelligence (AI) is Shaping the Future of Government – Becker Digital, accessed May 28, 2025, https://www.becker-digital.com/blog/artificial-intelligence-government
- What Is LLM Governance? Managing Large Language Models Responsibly – Tredence, accessed May 28, 2025, https://www.tredence.com/blog/llm-governance
- 11 Best Applications of Large Language Models (LLMs) [2025] – V7 Labs, accessed May 28, 2025, https://www.v7labs.com/blog/best-llm-applications
- Policy making in the era of artificial intelligence | Institute for Government, accessed May 28, 2025, https://www.instituteforgovernment.org.uk/publication/policy-making-era-artificial-intelligence
- Defending AI Systems From Data Poisoning – E-Risk – Nationwide, accessed May 28, 2025, https://www.nationwide.com/excessandsurplus/e-risk/resources/news-and-insights/articles/defending-ai-systems
- What Is Data Poisoning? – CrowdStrike, accessed May 28, 2025, https://www.crowdstrike.com/en-us/cybersecurity-101/cyberattacks/data-poisoning/
- What is Data Poisoning? Types & Best Practices – SentinelOne, accessed May 28, 2025, https://www.sentinelone.com/cybersecurity-101/cybersecurity/data-poisoning/
- The Rising Danger of AI Poisoning: When Data Turns Toxic – Delinea, accessed May 28, 2025, https://delinea.com/blog/ai-poisoning-when-data-turns-toxic
- Data Poisoning: The Essential Guide | Nightfall AI Security 101, accessed May 28, 2025, https://www.nightfall.ai/ai-security-101/data-poisoning
- arxiv.org, accessed May 28, 2025, https://arxiv.org/html/2503.22759v1
- LLM04:2025 Data and Model Poisoning – OWASP Gen AI Security Project, accessed May 28, 2025, https://genai.owasp.org/llmrisk/llm042025-data-and-model-poisoning/
- Integrity Threats in AI: When Data Poisoning Undermines Model Effectiveness – Duality AI, accessed May 28, 2025, https://www.duality.ai/blog/integrity-threats-in-ai-when-data-poisoning-undermines-model-effectiveness
- The OWASP LLM Top 10 and Sonatype: Data and model poisoning, accessed May 28, 2025, https://www.sonatype.com/blog/the-owasp-llm-top-10-and-sonatype-data-and-model-poisoning
- Protecting AI Integrity: Mitigating the Risks of Data Poisoning Attacks in Modern Software Supply Chains – Safety, accessed May 28, 2025, https://www.getsafety.com/blog-posts/protecting-ai-integrity
- Data Poisoning: The Silent Threat to Medical Market Intelligence and LLMs – AMPLYFI, accessed May 28, 2025, https://amplyfi.com/blog/data-poisoning-the-silent-threat-to-medical-market-intelligence-and-llms/
- Augmented Strategy: The Promise and Pitfalls of AI in Strategic Planning, accessed May 28, 2025, https://balancedscorecard.org/blog/augmented-strategy-the-promise-and-pitfalls-of-ai-in-strategic-planning/
- Democracy in the Dark: Why AI Transparency Matters | TechPolicy.Press, accessed May 28, 2025, https://www.techpolicy.press/democracy-in-the-dark-why-ai-transparency-matters/
- AI Accountability Starts with Government Transparency | TechPolicy.Press, accessed May 28, 2025, https://www.techpolicy.press/ai-accountability-starts-with-government-transparency/
- Data Poisoning Attacks in the Training Phase of Machine Learning Models: A Review – CEUR-WS.org, accessed May 28, 2025, https://ceur-ws.org/Vol-3910/aics2024_p10.pdf
- tsapps.nist.gov, accessed May 28, 2025, https://tsapps.nist.gov/publication/get_pdf.cfm?pub_id=934932
- Defending Against Data Poisoning Attacks on LLMs: A Comprehensive Guide – Promptfoo, accessed May 28, 2025, https://www.promptfoo.dev/blog/data-poisoning/
- Detecting and Preventing Data Poisoning Attacks on AI Models – ResearchGate, accessed May 28, 2025, https://www.researchgate.net/publication/389786240_Detecting_and_Preventing_Data_Poisoning_Attacks_on_AI_Models
- Russian propaganda manipulates AI training data – externer Datenschutzbeauftragter, accessed May 28, 2025, https://externer-datenschutzbeauftragter-dresden.de/en/data-protection/russian-propaganda-manipulates-ki-training-data/
- Blog: Data Poisoning LLM: How API Vulnerabilities … – Traceable, accessed May 28, 2025, https://www.traceable.ai/blog-post/data-poisoning-how-api-vulnerabilities-compromise-llm-data-integrity
- Line of Thought: Why Poisoned Data Represents a Systemic Risk for AI Systems and a Challenge for Cyber Insurance – Verisk’s, accessed May 28, 2025, https://core.verisk.com/Insights/Emerging-Issues/Articles/2025/January/Week-4/Poisoned-Data-Represents-an-AI-Risk
- Label Sanitization against Label Flipping Poisoning Attacks – AIセキュリティポータル, accessed May 28, 2025, https://aisecurity-portal.org/en/literature-database/label-sanitization-against-label-flipping-poisoning-attacks/
- How Label-Flipping Attacks Mislead AI Systems – Securing.AI, accessed May 28, 2025, https://securing.ai/ai-security/label-flipping-ai/
- proceedings.neurips.cc, accessed May 28, 2025, https://proceedings.neurips.cc/paper_files/paper/2024/file/b6e9d6f4f3428cd5f3f9e9bbae2cab10-Paper-Conference.pdf
- BACKDOORLLM: A Comprehensive Benchmark for Backdoor Attacks and Defenses on Large Language Models – arXiv, accessed May 28, 2025, https://www.arxiv.org/pdf/2408.12798
- ADVBDGEN: ADVERSARIALLY FORTIFIED PROMPT- SPECIFIC FUZZY BACKDOOR GENERATOR AGAINST LLM ALIGNMENT – OpenReview, accessed May 28, 2025, https://openreview.net/pdf?id=9367w3BSHC
- Concept Drift Adaptation Techniques in Distributed Environment for Real-World Data Streams – MDPI, accessed May 28, 2025, https://www.mdpi.com/2624-6511/4/1/21
- Efficient Availability Attacks against Supervised and Contrastive Learning Simultaneously, accessed May 28, 2025, https://aisecurity-portal.org/en/literature-database/efficient-availability-attacks-against-supervised-and-contrastive-learning-simultaneously/
- EU Elections at Risk with Rise of AI-Enabled Information Manipulation – ENISA, accessed May 28, 2025, https://www.enisa.europa.eu/news/eu-elections-at-risk-with-rise-of-ai-enabled-information-manipulation
- Multi-Faceted Studies on Data Poisoning can Advance LLM Development – arXiv, accessed May 28, 2025, https://arxiv.org/html/2502.14182v1
- Exposing Vulnerabilities in Clinical LLMs Through Data Poisoning Attacks: Case Study in Breast Cancer – PMC, accessed May 28, 2025, https://pmc.ncbi.nlm.nih.gov/articles/PMC10984073/
- How to Protect Your Organization from LLM Attacks – OP INNOVATE, accessed May 28, 2025, https://op-c.net/blog/how-to-protect-your-organization-from-llm-attacks/
- Protecting Our Edge: Trade Secrets and the Global AI Arms Race – CSIS, accessed May 28, 2025, https://www.csis.org/analysis/protecting-our-edge-trade-secrets-and-global-ai-arms-race
- New AI advocacy group wants open dialogue – POLITICO, accessed May 28, 2025, https://www.politico.com/newsletters/politico-influence/2025/02/21/new-ai-advocacy-group-wants-open-dialogue-00205523
- Political Manipulation with Massive AI Model-driven Misinformation …, accessed May 28, 2025, https://news.sophos.com/en-us/2024/10/02/political-manipulation-with-massive-ai-model-driven-misinformation-and-microtargeting/
- What is AI data poisoning? | Cloudflare, accessed May 28, 2025, https://www.cloudflare.com/learning/ai/data-poisoning/
- Protecting AI: The Hidden Threat of Data Poisoning – Sopra Steria, accessed May 28, 2025, https://www.soprasteria.com/insights/details/data-poisoning-the-phantom-menace
- AI Biases in Critical Foreign Policy Decisions – CSIS, accessed May 28, 2025, https://www.csis.org/analysis/ai-biases-critical-foreign-policy-decisions
- Poisoning AI Models: New Frontiers in Data Manipulation Attacks – ResearchGate, accessed May 28, 2025, https://www.researchgate.net/publication/390597579_Poisoning_AI_Models_New_Frontiers_in_Data_Manipulation_Attacks
- AI, DATA INTEGRITY AND EMERGING CYBER THREATS, accessed May 28, 2025, https://cybersecuritycrc.org.au/remote-assets/sites/default/files/2023-11/Poison%20the%20well%20-%20AI,%20data%20integrity%20and%20emerging%20cyber%20threats.pdf
- Data poisoning: The newest threat in AI and ML – NinjaOne, accessed May 28, 2025, https://www.ninjaone.com/blog/data-poisoning/
- AI Security Risks Uncovered: What You Must Know in 2025 | TTMS, accessed May 28, 2025, https://ttms.com/ai-security-risks-explained-what-you-need-to-know-in-2025/
- From Chatbots to Policy Makers: AI’s Role in Democratic Decision-Making | Robert Penn Warren Center for the Humanities | Vanderbilt University, accessed May 28, 2025, https://as.vanderbilt.edu/robert-penn-warren-center/2025/04/17/from-chatbots-to-policy-makers-ais-role-in-democratic-decision-making/
- Data Poisoning Threatens AI’s Promise in Government – FedTech Magazine, accessed May 28, 2025, https://fedtechmagazine.com/article/2025/03/data-poisoning-threatens-ais-promise-government
- Algorithmic Political Bias in Artificial Intelligence Systems – PMC – PubMed Central, accessed May 28, 2025, https://pmc.ncbi.nlm.nih.gov/articles/PMC8967082/
- (PDF) Artificial Intelligence in Manipulation: The Significance and Strategies for Prevention, accessed May 28, 2025, https://www.researchgate.net/publication/388309218_Artificial_Intelligence_in_Manipulation_The_Significance_and_Strategies_for_Prevention
- The Ethical Implications of AI Manipulation A Deep Dive – SAM AI Solutions, accessed May 28, 2025, https://samaisolutions.co.uk/insights/the-ethical-implications-of-ai-manipulation-a-deep-dive
- Weaponizing Data: The Cybersecurity Implications of Data Poisoning in AI Models – Akitra, accessed May 28, 2025, https://akitra.com/cybersecurity-implications-of-data-poisoning-in-ai-models/
- Artificial Integrity Over Intelligence Is The New AI Frontier | California Management Review, accessed May 28, 2025, https://cmr.berkeley.edu/2025/05/artificial-integrity-over-intelligence-is-the-new-ai-frontier/
- Full article: Artificial intelligence and democracy: pathway to progress or decline?, accessed May 28, 2025, https://www.tandfonline.com/doi/full/10.1080/19331681.2025.2473994
- Data Poisoning: A Growing Threat to Generative AI – Insights – Scalable AI, accessed May 28, 2025, https://insights.scalableai.com/index.php/data-poisoning-a-growing-threat-to-generative-ai/
- Staying Ahead of LLM Security Risks – Hewlett Packard Enterprise Community, accessed May 28, 2025, https://community.hpe.com/t5/software-general/staying-ahead-of-llm-security-risks/td-p/7240354
- Backdoor Attacks for LLMs with Weak-To-Strong Knowledge Distillation | OpenReview, accessed May 28, 2025, https://openreview.net/forum?id=29LC48aY3U
- An Agenda to Strengthen U.S. Democracy in the Age of AI | Brennan Center for Justice, accessed May 28, 2025, https://www.brennancenter.org/our-work/policy-solutions/agenda-strengthen-us-democracy-age-ai
- AI NGOs, Research Organizations, Ethical AI Organizations | AI Ethicist, accessed May 28, 2025, https://www.aiethicist.org/ai-organizations
