
Executive Summary
Generative Artificial Intelligence (AI) offers the U.S. Department of Defense (DoD) unprecedented opportunities for scaling and automating tasks critical to influence activities. To maintain a competitive edge against adversaries, the rapid acquisition and deployment of these capabilities are essential. However, current ad hoc, bottom-up approaches have led to significant inefficiencies in acquisition processes, human capital development, and contracting mechanisms.
Analysis reveals a clear and urgent need to empower the influence community with generative AI, yet there is a notable lack of coordinated investment and unity of effort across the DoD. While generative AI can significantly enhance analysis, operational planning, and assessment of influence activities, it functions as a powerful tool, not a standalone solution, for addressing rapidly evolving challenges. Effective acquisition necessitates a strategic, flexible approach, coupled with a robust sustainment process that can accommodate capabilities ranging from enterprise-wide solutions to bespoke, in-house developments. Currently, a comprehensive, enterprise-wide strategy for generative AI’s role in influence activities within the information environment remains absent.
This report provides strategic recommendations for cost-effective acquisition and development, emphasizing the need for defined requirements, increased investment, enhanced collaboration, and the integration of ethical guardrails. Addressing these systemic issues is crucial for the DoD to harness the full potential of generative AI, ensuring both technological superiority and responsible application in the complex geopolitical landscape.
1. Introduction: The Strategic Imperative of Generative AI in DoD Influence Activities
1.1 Setting the Global Context: Intensifying Strategic Competition and the Evolving Information Environment
The global information environment is increasingly borderless and interconnected, presenting a complex challenge for the U.S. Department of Defense (DoD) as it conducts influence activities. The hyperconnected nature of this digital landscape makes it difficult to avoid inadvertently affecting U.S. persons during military psychological operations (PSYOPs), a concern that has drawn increasing scrutiny.1 Within this evolving environment, Generative Artificial Intelligence (AI) has emerged as a central element of geopolitical competition, with its profound potential to reshape the global balance of power and revolutionize military applications.2 The United States views maintaining global AI dominance as a critical national security imperative.2
Strategic competition, particularly with peer adversaries such as China and Russia, underscores the urgent need for the U.S. military to efficiently process vast amounts of data and produce high-quality content for influence operations.4 The intense geopolitical rivalry over AI is often characterized as a “race for supremacy,” drawing parallels to the historical race for the atomic bomb, where leadership in this technology is seen as a defining element of national power and security.3 This competitive landscape has led to a “digital Cold War,” marked by rising trade barriers, competing AI ambitions, and a global scramble to control data and its foundational infrastructure, including advanced chips and data centers.5 This environment profoundly shapes DoD’s AI acquisition strategies, necessitating a focus on domestic innovation and secure supply chains. The imperative to avoid reliance on foreign technology for critical systems implies that while commercially available solutions are attractive for accelerating adoption 6, their integration must be carefully balanced with national security concerns regarding potential compromise or foreign control.
1.2 Generative AI’s Transformative Potential for Scaling and Automating Influence Tasks
Generative AI, a class of AI models capable of creating new content from vast databases of textual, visual, and auditory information, offers significant opportunities for scaling and automating tasks related to DoD influence activities.4 It is capable of producing “high quality, human-like material” 8 and can dramatically improve the analysis, operational planning, and assessment phases of influence operations.4
This technology functions as a powerful “force multiplier” for existing capabilities, rather than introducing entirely novel ones.4 Influence professionals already possess the skills to craft messages, conduct audience analyses, and counter adversary narratives. Generative AI enhances these capabilities by enabling information personnel to analyze large volumes of data and generate high-quality content far more efficiently than current tools allow.4 This amplification allows for increased operational tempo, improved quality of influence products, and the ability to scale up influence campaigns significantly.4 However, it is important to recognize that generative AI is a tool, not a panacea, for addressing rapidly evolving challenges.4 Its primary value lies in augmenting human capabilities, not replacing the nuanced human understanding required for strategic planning, ethical judgment, or assessing complex behavioral outcomes. The risk of misdirecting influence efforts, working at cross-purposes with national objectives, or inadvertently harming civilians or U.S. credibility remains, and indeed, is amplified by the scale at which generative AI operates.1 This underscores the enduring necessity for human expertise and oversight, implying that human capital development must focus not only on technical skills but also on critical thinking and ethical application in an AI-augmented operational environment.
1.3 The Urgent Need for DoD to Rapidly Acquire and Employ These Capabilities
To maintain a strategic advantage and stay ahead of adversaries, the DoD must rapidly acquire and employ generative AI capabilities.9 The critical importance of AI for maintaining military superiority has been explicitly highlighted in the 2018 and 2022 U.S. National Defense Strategies.11 Recognizing this imperative, the Chief Digital and Artificial Intelligence Office (CDAO) has been tasked with accelerating the DoD’s adoption of data, analytics, and AI across all levels, from strategic decision-making in the boardroom to tactical operations on the battlefield.7 This acceleration is vital for ensuring that U.S. warfighters maintain decision superiority in an increasingly complex and contested global environment.
2. Generative AI’s Transformative Role in Modern Influence Operations
2.1 Defining Influence Activities and Generative AI within the DoD Context
Within the DoD, an influence activity is formally defined as “a deliberate attempt to affect a person’s or group’s thoughts, feelings, or behavior”.1 This encompasses a broad spectrum of operations aimed at shaping perceptions, attitudes, and ultimately, the actions of relevant actors, as outlined in Joint Publication 3-04, Information in Joint Operations.10
Generative AI, in this context, refers to a type of artificial intelligence that utilizes advanced models and extensive databases of textual, visual, and auditory information to create novel content.4 Unlike AI that merely analyzes or predicts, generative AI specializes in producing “high quality, human-like material” that can be engaged with and consumed by wide audiences.8 This capability has been popularized by applications such as ChatGPT, DALL-E, and Midjourney, which have demonstrated the technology’s ability to streamline tasks from code generation to creating images and even music.8
2.2 Opportunities: Enhanced Analysis, Operational Planning, Content Generation, and Assessment
Generative AI offers a myriad of opportunities to enhance the effectiveness and efficiency of DoD influence activities:
- Improved Analysis: Generative AI can process and synthesize vast amounts of disparate data, including social media posts, 24-hour television news, blog entries, academic journal articles, and all-source intelligence reports.1 This capability allows for the creation of a “common operating picture” (COP) or baseline of the operational environment, distilling complex feeds into digestible formats such as dashboards in near-real-time.1 This advanced data collection, processing, and analysis fundamentally transforms information warfare into a more dynamic and complex battlespace, relying heavily on information dominance.12
- Operational Planning: The technology can significantly aid in strategic and tactical planning. Through predictive analytics, AI systems can forecast adversary actions based on historical data, thereby enhancing the military’s ability to anticipate threats and proactively devise countermeasures.12 This allows for more informed and rapid decision-making in dynamic environments.8
- Content Generation: Generative AI excels at producing diverse content types:
  - It has shown promise in creating text, graphics, and video content at desired levels of detail and fidelity.1 While audio content generation has historically been “furthest behind” in creating original material, it is rapidly advancing.1
  - A critical operational capability is the ability to clone voices, which can be used, for example, to mimic an enemy commander as part of an effort to induce surrender.1
  - The technology can efficiently craft messages and narratives, conduct in-depth audience analyses, and develop counter-adversary narratives for campaigns.4
  - For units operating in austere environments, generative AI tools can be internet-optional, enabling the design and production of basic messages on stand-alone laptops or even sketch pads, with instant dissemination once connectivity is established.1
- Assessment: Generative AI can assist in the challenging process of assessing the outcomes of influence activities, which often manifest with a time lag following the initial planning and execution.1
The capabilities described, such as the creation of deepfakes, propaganda, surveillance tools, and voice cloning, are inherently dual-use, meaning they can be employed for both defensive and offensive purposes.1 The capacity of generative AI to “industrialize the offensive use of disinformation” 13 and render information warfare “more powerful and more accessible” 13 signifies that the DoD’s adoption of these tools is not merely an efficiency measure. Instead, it is a strategic imperative for maintaining parity and superiority in a rapidly evolving battlespace. This dual-use nature implies that the ethical and legal frameworks governing DoD’s use of these powerful tools must be exceptionally robust. Such frameworks are critical to managing the inherent risks, particularly when these technologies can be used to “sow discord amongst the public” or “undermine the very essence of democracy”.14
Table 1: Generative AI Applications in DoD Influence Activities
| Application Area | Specific Capability | Impact on Influence Activities |
| --- | --- | --- |
| Analysis | Common Operating Picture (COP) / Baseline of OE | Fuses diverse data (social media, news, intel) into digestible formats for near-real-time situational awareness.1 |
| Analysis | Predictive Analytics | Forecasts enemy actions based on historical data, enhancing threat anticipation and countermeasure development.12 |
| Analysis | Audience Analysis | Deepens understanding of target audiences for more effective message tailoring.4 |
| Operational Planning | Enhanced Decision Support | Processes vast data to aid in rapid decision-making processes.8 |
| Content Generation | Text, Graphics, Image, Video Creation | Produces high-quality, human-like content for messages and narratives at scale.1 |
| Content Generation | Voice Cloning | Mimics voices (e.g., enemy commanders) for tactical influence operations.1 |
| Content Generation | Counter-Narrative Development | Generates effective responses to adversary narratives more efficiently.4 |
| Content Generation | Austere Environment Production | Enables content creation with minimal infrastructure, with instant dissemination upon connectivity.1 |
| Assessment | Outcome Evaluation | Assists in evaluating the behavioral and attitudinal outcomes of influence campaigns.1 |
| Operational Support | Increased Operational Tempo | Accelerates the speed and volume of influence activities.4 |
3. Current State of DoD Generative AI Acquisition: Challenges and Inefficiencies
3.1 Analysis of Ad Hoc, Bottom-Up Efforts and Their Systemic Inefficiencies
The current approach to operationalizing generative AI within the DoD is largely characterized by ad hoc, bottom-up efforts. This fragmented methodology has created significant inefficiencies across various critical domains, including the development and acquisition of common services and platforms, human capital management, and contracting processes.9 These uncoordinated efforts have consistently failed to address fundamental questions regarding the identification of necessary capabilities, their efficient acquisition, and the crucial requirement for adequate knowledge and training among both decision-makers and end-users.10
A substantial lack of unified investment and concerted effort is evident across the DoD’s influence community.4 This is further compounded by the absence of an enterprise-wide plan or strategy that comprehensively addresses the implications and opportunities of generative AI as they pertain to influence activities or operations within the broader information environment.4 This fragmented landscape hinders the DoD’s ability to fully leverage generative AI’s potential and maintain a competitive advantage.
3.2 Acquisition Hurdles: Distinct Differences, Bureaucratic Complexity, and Regulatory Burdens
Generative AI acquisition presents unique challenges that distinguish it significantly from traditional hardware and software procurement.9 The DoD has historically struggled to define and acquire software through processes designed for hardware-intensive systems like aircraft or ships, leading to the establishment of distinct software acquisition pathways.10 However, even these adapted pathways do not neatly accommodate technologies such as generative AI, which possess unique characteristics and rapidly evolving capabilities.10
The defense acquisition process is further burdened by systemic inefficiencies, including excessively lengthy procurement cycles, complex regulatory frameworks like the Federal Acquisition Regulation (FAR) and Defense Federal Acquisition Regulation Supplement (DFARS) that span thousands of pages, and pervasive bureaucratic hurdles.16 These complexities not only delay the delivery of critical capabilities and inflate costs but also actively deter non-traditional innovators, particularly smaller tech firms and startups, from engaging with the DoD.16 Surveys indicate that the inflexibility and complexity of acquisition processes are perceived as the most significant challenges to participation by 57% of respondents, alongside concerns about cost-type contracts (36%) and supply chain reliability (34%).17
The fundamental problem is not merely regulatory volume but a deep-seated cultural divide between the DoD’s traditional “waterfall” acquisition approach and the agile, iterative nature of AI development. The traditional methodology emphasizes rigid requirements, extensive documentation, and a strong aversion to risk, whereas agile development thrives on flexibility, continuous delivery, and a higher tolerance for evolving requirements.18 This clash manifests as a disagreement over “discipline,” with traditionalists viewing a lack of formal methods as a fatal flaw, while agilists prioritize the “self-discipline” required for rapid engineering development.18 Overcoming this cultural chasm requires more than just streamlining regulations; it demands a profound shift towards greater risk tolerance and an embrace of change within the acquisition community.18 The demonstrated success of companies like Agile Defense in securing DoD contracts for AI solutions, by prioritizing operator needs and rapid deployment 19, illustrates that agile methods are viable, but their widespread adoption necessitates overcoming this deep-seated cultural resistance.
3.3 Human Capital and Cross-Functional Teams: Gaps in Technological Literacy, Training, and Coordination
A significant impediment to the effective integration of generative AI within the DoD’s influence community is the substantial lack of investment and unity of effort in human capital development. There is a notable deficiency in user training, and the authorities governing the use of these new capabilities often remain unclear.10 This deficit in technological skill and literacy among decision-makers and end-users directly hinders the ability to accurately identify needed capabilities and define precise requirements for generative AI applications.10 It also complicates the efficient acquisition of necessary hardware and software, and the establishment of robust processes for verification, validation, testing, and evaluation (VV&T&E).10
Furthermore, coordination remains a persistent challenge across the diverse information and influence communities within the DoD. This difficulty stems from inconsistencies in lexicon, overlapping or unclear bureaucratic roles and responsibilities, and fragmented operational execution.10
The lack of technological literacy among decision-makers and end-users creates a detrimental feedback loop that directly impacts the efficiency of requirements determination. When those defining requirements and those who will ultimately use the technology lack a common understanding of AI’s true capabilities and inherent limitations, it becomes exceedingly difficult to articulate precise and actionable “formal requirements for influence activities”.9 This leads to a cyclical problem: ill-defined requirements result in ineffective acquisitions, which in turn fail to provide the necessary tools and capabilities, further widening the skill gap and eroding trust in AI systems among personnel.20 Consequently, training initiatives must extend beyond mere technical proficiency to encompass a strategic understanding of AI’s potential and limitations, targeting all levels of the acquisition and operational chain to break this cycle.
3.4 Contracting and Sustainment: Inefficiencies and the Need for Flexible Strategies
Current DoD acquisition processes for software and emerging technologies frequently rely on yearly contracts that require renewal, a practice that often encourages redundant purchases and limits long-term strategic planning.10 A significant barrier for private sector innovators is the DoD’s stringent requirements for data rights and access, which often lead companies to worry about losing control over their intellectual property.10
To effectively acquire generative AI capabilities, a strategic, flexible approach is needed, alongside a comprehensive sustainment process that can manage a spectrum of capabilities, from broad enterprise-wide solutions to highly bespoke, in-house–developed technologies.9
There is a considerable tension between the DoD’s drive for “speed to scale” and the imperative to protect intellectual property (IP) and foster competition within the defense industrial base. The Chief Digital and Artificial Intelligence Office (CDAO) aims to accelerate AI adoption by leveraging commercially available solutions and forging partnerships with leading AI companies.6 However, the concern among private sector innovators about “unacceptable sharing of intellectual property and data rights” 10 acts as a substantial deterrent. Simultaneously, legislative efforts, such as the bipartisan “Protecting AI and Cloud Competition in Defense Act,” aim to prevent the dominance of “Big Tech monopolies” and ensure a competitive landscape for AI and cloud computing contracts.21 This creates a complex policy challenge: how can the DoD rapidly acquire cutting-edge commercial AI solutions while simultaneously safeguarding government data rights, promoting healthy competition, and avoiding the deterrence of the very innovators it seeks to engage? The resolution likely involves expanding the use of flexible contracting mechanisms, such as Other Transaction Authorities (OTAs), which are exempt from certain federal contract regulations and can facilitate faster prototyping and collaboration with non-traditional vendors.10 Such mechanisms can help balance the competing demands of speed, IP protection, and competition, provided a clear and attractive policy on data rights is established for industry.
Table 2: Key Challenges and Possible Solutions in DoD Generative AI Acquisition
| Area of Challenge | Specific Challenge | Impact on DoD AI Adoption | Possible Solution(s) |
| --- | --- | --- | --- |
| Requirements Determination | Communication gaps between acquisition staff and end users.10 | Leads to misaligned capabilities and unmet operational needs.10 | Train acquisition professionals and establish open communication with users.10 Formalize requirements definition coordinated across influence organizations.15 |
| Acquisition Process | No single acquisition pathway fits generative AI’s unique needs.10 | Delays delivery, increases costs, deters non-traditional innovators.16 | Tailor existing acquisition pathways to enable flexibility.10 Leverage a suite of available acquisition strategies (enterprise to bespoke).15 |
| Operations | User training is lacking; authorities are unclear.10 | Limits effective deployment and increases risk of misuse.10 | Develop tailored training programs and clear guidance/authorities.10 |
| Human Capital/Training | Lack of technological skill/literacy among decision-makers and end-users.10 | Hinders capability identification, efficient acquisition, and effective VV&T&E.10 | Invest in AI training and education opportunities.15 Achieve standard proficiency for operators.22 |
| Contracting | Bureaucratic hurdles, regulatory complexity, IP concerns deter innovators.10 | Slows procurement, increases costs, limits access to cutting-edge commercial tech.16 | Expand use of Other Transaction Authorities (OTAs) and agile contracting models.10 Streamline regulatory burdens and simplify procedures.17 |
| Data Management | Data acquisition for training is costly, time-consuming, and complex (classification, bias).10 | Limits AI system effectiveness; AI is “only as good as the data it’s trained on”.20 | Develop and coordinate sustainment strategies.15 Establish robust processes for data quality, governance, and accessibility.23 |
4. Risks and Ethical Considerations in Generative AI for Influence Activities
4.1 Technical and Security Risks
The deployment of generative AI in influence activities, while offering immense potential, introduces a range of significant technical and security risks. A primary concern is the phenomenon of “hallucinations,” where generative AI models produce inappropriate or factually incorrect output.4 Such errors in high-stakes influence campaigns could have severe repercussions. Operationally, generative AI demands substantial computing capacity and expensive hardware, particularly graphics processing units (GPUs), which are often in short supply.4 Integrating these advanced AI capabilities into existing, often rigid, workflows within the DoD also presents considerable challenges.10
Security concerns extend beyond routine software acquisition issues. The unique nature of AI models and the limited precedents for assessing their vulnerabilities introduce novel security risks.10 Furthermore, the vast repositories of training data required for generative AI models must be rigorously protected, making the use of commercial cloud-based storage, without specialized security measures, an unsuitable option.10 AI systems are also susceptible to adversarial attacks, including hacking, deception, the insertion of false data, and even the malicious control of automated systems.20 A critical technical and ethical challenge is the potential for AI systems to inherit biases present in their training data. Such biases can lead to discriminatory outcomes, misinterpretations of information, or the amplification of misinformation.12
The immense power of generative AI to scale and automate influence activities is paradoxically accompanied by an inherent fragility stemming from these technical risks. A single hallucination or a successful adversarial manipulation within a critical influence campaign could lead to severe unintended consequences, such as “influencing the wrong target audiences, working at cross-purposes with other national objectives, harming civilians, or undermining US credibility”.1 This means that while speed and scale are highly desirable, the integration of robust verification, validation, testing, and evaluation (VV&T&E) processes must be paramount and embedded from the outset of development, rather than being treated as an afterthought. The challenge is that traditional VV&T&E processes are often ill-suited for the dynamic and rapidly evolving nature of AI systems, which continuously learn and adapt.10
4.2 Ethical and Legal Frameworks: DoD’s Responsible AI Strategy and Principles
Recognizing the profound implications of AI, the DoD has actively developed ethical and legal frameworks to guide its responsible integration. The DoD’s Responsible AI Strategy and Implementation Pathway, published in 2021, outlines a comprehensive approach to integrating AI responsibly, anchored by core ethical principles such as trust, accountability, and reliability.22 This strategy emphasizes the modernization of governance structures, workforce development, careful lifecycle management, and fostering collaboration to ensure that AI applications align with U.S. values and mission needs.22
Internationally, the U.S. has also taken a leadership role. The U.S. Political Declaration on Responsible Military Use of Artificial Intelligence and Autonomy, launched in February 2023 at the Responsible AI in the Military Domain (REAIM) Summit, provides a normative framework to guide the ethical and responsible development, deployment, and use of AI in the military domain.11 This declaration aims to build international consensus around responsible behavior and facilitate the exchange of best practices among endorsing states.26
Within the context of influence operations, ethical principles dictate that such activities should be necessary, effective, and proportional.27 This means they must seek legitimate military outcomes, employ means that are not harmful (or harm only those liable to harm), demonstrate a high likelihood of success, and avoid generating unintended second-order effects beyond what is intended.27 While efforts that do not fully satisfy all criteria might still be justified, this would only be the case if the expected benefit substantially outweighs the likely harm.27
Rapid shifts in AI governance create an unstable regulatory environment, as exemplified by the revocation of President Biden’s Executive Order 14110, which prioritized AI safety and security, in favor of President Trump’s AI Action Plan, which emphasizes growth and deregulation.11 This inconsistency risks undermining the “trust of our allies and coalition partners” 22 and potentially eroding public confidence in the DoD’s use of AI, particularly in sensitive influence activities. The lack of transparency regarding waivers granted for AI safeguards further exacerbates this concern.24 In influence operations, where credibility is paramount, any perception of irresponsible or unethical AI use could severely backfire, undermining U.S. objectives and credibility.1 Consequently, robust and consistent ethical guardrails, coupled with transparency, are essential, irrespective of political shifts, to ensure the long-term effectiveness and legitimacy of DoD’s AI-enabled influence efforts.
4.3 Guardrails and Accountability: Meaningful Human Control, Bias, Privacy, and Legal Compliance
To ensure responsible and ethical deployment, several critical guardrails and accountability measures are essential for generative AI in military influence operations:
- Meaningful Human Control (MHC): This is a core legal and ethical principle, mandating that decisions involving the use of force, and by extension, significant influence activities, must remain under human authority.11 The DoD’s Responsible AI Strategy explicitly promotes human-machine teaming over fully autonomous systems, emphasizing human oversight.11
- Distinction and Proportionality: AI systems must be designed to recognize combatant status, differentiate between combatants and civilians, and evaluate proposed military actions against the proportionality standards of International Humanitarian Law (IHL).11 This ensures compliance with fundamental principles governing armed conflict.
- Transparency and Traceability: Operators are required to document AI decisions, and oversight institutions must have visibility into system design and operation.11 However, the classification of military AI systems can pose challenges to achieving full transparency, potentially limiting ethical scrutiny.11
- Accountability and Responsibility: Determining who is responsible when AI systems make autonomous decisions, especially in offensive operations or when unintended harm occurs, is a complex legal challenge.11 Legal frameworks must clarify fault attribution, whether to the commander, manufacturer, programmer, or deploying state, particularly given the millisecond decision-making speeds of AI.11
- Privacy and Surveillance: Generative AI’s extensive data collection capabilities raise significant privacy concerns. Military applications of AI must balance national security needs with the protection of individual rights, especially given the amplified risk of inadvertently affecting U.S. persons in influence activities.1
- Bias Mitigation: Procedures must be developed to mitigate unintended bias in AI outputs, ensuring that AI systems are designed to pursue “objective truth”.24 This is crucial as AI operates on existing data, which may contain decades of discrimination.28
- Countering Misinformation and Deepfakes: Deepfake technologies pose a grave and imminent threat to national security, capable of manipulating reality, distorting truth, and shattering public trust.14 They enable adversaries to exploit vulnerabilities, sow discord, and undermine democratic processes.14 AI plays a dual role in this landscape: it can be used to create and spread false information, but it can also be leveraged to detect and mitigate it.12 Red teaming strategies, involving diverse experts (AI, social sciences, cybersecurity), are crucial for identifying potential harms and developing effective mitigation strategies.33 Policymakers must establish guardrails to prevent the dissemination of AI-generated misinformation and disinformation.34
The escalating proliferation of deepfakes and AI-generated misinformation represents not merely a technical challenge but a fundamental threat to democratic processes and national security.14 The ability of adversaries to produce highly convincing fakes cheaply and at scale 32 means that the DoD must not only develop its own generative AI capabilities for influence but also invest heavily in robust detection and counter-misinformation strategies. This creates an ethical imperative: while the DoD utilizes generative AI for its influence activities, it must concurrently lead efforts to combat the very misuse that its own advancements might inadvertently enable or inspire. This dual responsibility necessitates robust internal guardrails, proactive public-private partnerships, and strong international collaboration on ethical AI use and detection mechanisms.26
Table 3: Ethical Principles for Responsible AI in DoD Influence Operations
| Principle | Description and Implications for Influence Activities |
| --- | --- |
| Meaningful Human Control (MHC) | Decisions involving the use of force must remain under human authority, with AI serving as a support function. This requires clear definitions of “meaningful control” (e.g., real-time engagement, pre-mission authority, reviewability).11 |
| Distinction and Proportionality | AI systems must accurately identify combatants and civilians, and evaluate military actions against International Humanitarian Law (IHL) standards to ensure proportionate harm.11 |
| Transparency and Traceability | Operators must document AI decisions, and oversight bodies need visibility into system design and operation. This principle faces challenges with classified military AI systems.11 |
| Accountability and Responsibility | Clear legal frameworks are needed to assign liability for AI-initiated actions, especially when unintended harm occurs. This includes clarifying fault attribution among commanders, manufacturers, programmers, and states.11 |
| Bias Mitigation | Procedures must be in place to identify and mitigate unintended biases in AI outputs, ensuring fairness and preventing discriminatory outcomes or misinterpretations derived from biased training data.12 |
| Privacy Protection | Balancing the need for security with the protection of individual rights is crucial, given AI’s extensive data collection capabilities and the amplified risk of affecting U.S. persons in influence activities.1 |
| Necessity | Influence efforts must seek legitimate military outcomes and be necessary to attain those objectives.27 |
| Effectiveness | Influence activities employing AI should have a high likelihood of success in achieving their intended behavioral or attitudinal outcomes.27 |
| Proportionality (Influence Specific) | AI-enabled influence activities should not generate unintended second-order effects beyond what is intended, ensuring that benefits substantially outweigh any likely harm.27 |
5. Geopolitical Landscape and Adversary Exploitation of Generative AI
5.1 Generative AI as a Central Element of Strategic Competition
The global landscape is witnessing an intense strategic competition where the race for Artificial Intelligence supremacy is at the forefront, particularly between the United States and China.3 AI’s potential to fundamentally alter the global balance of power means that both nations view leadership in this domain as a defining element of national power, prompting them to marshal significant state resources to secure it.3
Technology, especially AI, has become a central piece in geopolitical power struggles. This competition is characterized by rising trade barriers, aggressive AI ambitions, and a fierce scramble for control over critical data and its underlying infrastructure, including advanced chips and fiber-optic cables.5 Nations are increasingly wary of relying on foreign technology for critical systems, a sentiment that has deepened the strategic rivalry and given rise to discussions of a new “digital Cold War”.5
The competition is not merely about who possesses the most advanced AI capabilities, but about who can leverage them most effectively and, critically, how widely their misuse becomes democratized. Generative AI, when combined with sophisticated data capture techniques, provides “new techniques to industrialize the offensive use of disinformation”.13 This “diffusion of power” means that the sophisticated influence capabilities once largely confined to state actors are becoming increasingly accessible to a broader, more diverse range of actors, including non-state entities.13 This shift fundamentally alters the threat landscape, obliging the DoD not only to develop its own advanced capabilities but also to anticipate and counter a wider spectrum of adversaries wielding increasingly sophisticated tools. This necessitates robust intelligence gathering on adversary AI development and deployment, alongside proactive defensive strategies against AI-powered influence campaigns.
5.2 Analysis of Adversary Use of Generative AI for Influence Operations, Deepfakes, and Misinformation
Adversaries are not waiting for the U.S. to perfect its generative AI acquisition processes; they are already actively exploiting these technologies to gain an asymmetric advantage in the information environment. For instance, pro-Russian narratives and disinformation campaigns are being promoted and spread using generative AI to create cloned websites and manipulate social media, mimicking legitimate news media, think tanks, and government agencies.4
Government-backed attackers, notably from Iran, China, North Korea, and Russia, are leveraging generative AI as a powerful productivity tool to accelerate their cyber and information operations.35 This allows them to move faster and at higher volume, increasing the efficiency of their operations and making their attacks more difficult to detect.35 Specific adversary use cases include:
- Reconnaissance: AI assists in automating data collection from public and dark web sources, gathering intelligence on targets, researching U.S. military and IT organizations, and identifying U.S. intelligence community personnel.35
- Phishing and Social Engineering: Generative AI is used to create code for malware, craft compelling content for phishing emails, and design cybersecurity-themed phishing campaigns. Adversaries also employ AI-powered voice and video manipulation to enhance social engineering tactics, making scams more difficult to detect.35 Notably, Iranian actors account for roughly 75 percent of observed AI misuse cases in information operations.36
- Content Generation: Adversaries utilize generative AI to produce and reformulate text for influence campaigns, such as criticizing government officials, describing how popular media perpetuates stereotypes, or creating themed titles for social media thumbnails.35
- Malware Development and Post-Compromise Activities: AI aids in automating workflows, generating scripts for data exfiltration, evading detection, escalating privileges, and conducting internal reconnaissance within compromised systems.35
The high percentage of AI misuse by Iranian actors in information operations indicates a present and active threat.36 This means that the DoD’s need to rapidly acquire and employ generative AI capabilities to stay ahead of adversaries is not a future aspiration but an immediate necessity to counter existing threats. The implication is that simply developing offensive generative AI capabilities is insufficient; robust defensive AI capabilities—for the detection, attribution, and mitigation of deepfakes and AI-powered influence campaigns—are equally, if not more, critical to national security. The ability of adversaries to gain an asymmetric advantage through the rapid and widespread misuse of AI necessitates an urgent and comprehensive counter-AI strategy from the DoD.
6. Recommendations for Strategic Acquisition and Responsible Employment
To effectively harness the transformative potential of generative AI for influence activities and maintain a competitive edge, the U.S. Department of Defense must implement a coordinated, strategic approach that addresses current inefficiencies and integrates robust ethical considerations.
6.1 Policy and Governance: Developing an Enterprise-Wide Strategy and Fostering Collaboration
A foundational step is to establish clear policy and governance structures. The Office of Information Operations Policy (OIOP) should actively encourage the military services, U.S. Special Operations Command (USSOCOM), and U.S. Cyber Command (USCYBERCOM) to formally define requirements for influence activities.9 This requirements definition process must be coordinated across all relevant influence organizations to synchronize similar needs and avoid redundant efforts.15 Concurrently, an enterprise-wide plan or strategy is needed to comprehensively address the implications and opportunities of generative AI as they relate to influence activities and operations in the information environment.9
The Principal Information Operations Advisor (PIOA) should direct OIOP to foster robust collaboration among influence-tasked units, USSOCOM, USCYBERCOM, service influence organizations, and operational units with influence responsibilities.9 This collaboration is crucial for bridging the “valley of death” that often exists between promising prototypes and enterprise-wide deployment. By coordinating requirements and leveraging common infrastructure provided by DoD enterprise AI agencies, such as the Chief Digital and Artificial Intelligence Office (CDAO), the DoD can avoid fragmented efforts and ensure that bespoke tactical solutions can eventually scale or integrate into broader enterprise capabilities.9 CDAO’s AI Rapid Capabilities Cell (AI RCC) is already leading efforts to accelerate and scale the deployment of cutting-edge AI, including investments in foundational AI infrastructure, which can be leveraged for influence activities.37 This requires a concerted top-down strategic push from PIOA/OIOP combined with bottom-up operational insights from the field.
6.2 Acquisition Reform: Tailoring Pathways and Leveraging Flexible Contracting
Given the meaningful differences between generative AI acquisition and traditional hardware and software procurement, military services should identify appropriate organizations to manage AI acquisition.9 These organizations should be empowered to leverage a comprehensive suite of available acquisition strategies, enabling flexibility across generative AI capabilities that range from broad, DoD-wide solutions to highly bespoke, in-house–developed technologies.9 The tempo for capability purchases or reassessment must also be significantly increased to keep pace with rapid technological advancements.9
To overcome the inherent slowness and bureaucracy of the current acquisition system 16, it is imperative to expand the use of agile and innovative contracting mechanisms, such as Other Transaction Authorities (OTAs). OTAs can bypass lengthy traditional procurement processes, thereby enabling faster prototyping and deployment of critical AI capabilities.10 The CDAO has already demonstrated the effectiveness of using its organic acquisition authority to rapidly contract with frontier AI companies.7 Furthermore, efforts to reduce regulatory burdens, streamline approval layers, and simplify procedures are essential to increase efficiency and lower barriers for non-traditional innovators.16 This represents a fundamental shift from rigid process compliance to prioritizing mission outcomes and adapting to change. This paradigm shift requires acquisition professionals to become more “risk tolerant” 18, moving away from the traditional “waterfall” approach to embrace agile methodologies. The success of initiatives like Agile Defense in securing DoD contracts highlights that this transformation is achievable, but it necessitates continuous training for acquisition staff and a change in mindset from both program managers and oversight bodies.
6.3 Human Capital Development: Investing in AI Training and Education
A critical recommendation is to identify and substantially invest in AI training and education opportunities across the entire influence community.9 This includes equipping the workforce with the necessary skills and knowledge to effectively work with AI technologies, as well as actively attracting, developing, and retaining top AI talent.23 The PIOA and OIOP should develop clear guidance to enable the effective and efficient adoption of generative AI throughout the influence community.9 The goal is to achieve a standard level of technological familiarity and proficiency for system operators, thereby building justified confidence in AI and AI-enabled systems.22
Given the rapidly evolving nature of AI technology and the “always-changing adversarial threat space” 20, one-time training programs will be insufficient. The DoD needs to cultivate a “learning organization” culture where continuous education, adaptation, and knowledge sharing are deeply embedded. This involves establishing not just formal training programs but also fostering communities of practice, implementing rapid feedback loops between operators, developers, and acquisition personnel, and promoting a culture of continuous learning. Addressing the challenge of “human trust in AI” 20 is paramount, underscoring the importance of ongoing education and transparent performance evaluation to build confidence and ensure effective utilization of these new tools.
6.4 Sustainment and Infrastructure: Coordinating Strategies and Leveraging Common Capabilities
To ensure the long-term viability and cost-effectiveness of generative AI capabilities, the DoD must develop and coordinate comprehensive sustainment strategies across all influence stakeholders.9 These strategies should cover the full spectrum of capabilities, from enterprise-wide solutions to bespoke, in-house developments.9 The current ad hoc approach often leads to redundant purchases and inefficient yearly contracts.10 Without a coordinated sustainment strategy, the DoD risks accumulating a patchwork of incompatible and difficult-to-maintain AI systems, leading to higher long-term costs and reduced interoperability.
Strategic investment in foundational AI infrastructure and tools is also crucial to enable rapid pilot development, experimentation, and testing.37 Furthermore, establishing robust processes for data quality, governance, and accessibility—including effective collection, storage, cleaning, and transformation—is paramount.23 This is critical because AI systems are fundamentally “only as good as the data that it is trained on”.20 A strategic sustainment process, covering enterprise to bespoke capabilities, is essential for cost-effectiveness and ensuring that capabilities remain relevant, secure, and perform optimally throughout their lifecycle, directly linking to the need for continuous data feeds and model updates.
6.5 Ethical Integration: Ensuring Continuous Review and Countering Misinformation
The responsible adoption of generative AI necessitates the development of clear guidelines, or “guardrails,” to govern the use of AI-generated output in influence activities.9 An ethical review process, such as the necessity, effectiveness, and proportionality framework, should be formally integrated into the review and approval process for all influence operations.27 This ensures that ethical considerations are addressed from the outset.
Central to responsible AI use is the principle of “meaningful human control” over AI systems, particularly in sensitive applications involving the use of force or influence.11 Procedures must be developed to mitigate unintended bias in AI outputs, ensuring that AI systems are designed to pursue “objective truth” and avoid perpetuating existing societal biases.24
Given the escalating threat of AI-generated misinformation and deepfakes, the DoD must invest heavily in capabilities to detect and mitigate these malicious outputs.12 This includes implementing robust “red teaming” strategies, involving diverse experts, to proactively identify potential harms and develop effective mitigation techniques.33 Finally, increasing transparency regarding AI use and waivers, and clarifying responsibility when AI systems make autonomous decisions, are crucial for maintaining public trust and accountability.11
The tension between “rapid acquisition” and “responsible AI” is a central challenge. While political shifts may lean towards deregulation 11, ethical failures in influence operations—such as biased outputs, unacknowledged deepfakes, or unintended harm to civilians—can have severe strategic repercussions, undermining U.S. objectives and credibility.1 Therefore, the DoD must prioritize “responsible speed,” integrating ethical considerations from the outset of development and deployment. This approach ensures that the accelerated adoption of generative AI enhances, rather than compromises, national security interests, requiring a robust internal ethical culture and clear policy directives that transcend political administrations.
7. Conclusion: Charting a Path to AI-Enabled Influence Superiority
The U.S. Department of Defense faces a critical juncture in the evolving landscape of global strategic competition. The imperative to rapidly acquire and responsibly employ generative AI capabilities for influence activities is undeniable, serving as a vital component in maintaining decision superiority against sophisticated adversaries. The current fragmented, ad hoc efforts are demonstrably inefficient, reflecting a lack of coordinated investment and unity of effort across the influence community. This report has highlighted the systemic challenges in acquisition, human capital development, and contracting, alongside the complex technical and ethical considerations inherent in this powerful technology.
Generative AI is not a panacea but a force multiplier, capable of dramatically improving analysis, operational planning, content generation, and assessment of influence activities. Its dual-use nature, however, amplifies the stakes, making robust ethical guardrails and a proactive stance against adversary exploitation equally critical. The geopolitical landscape underscores the urgency, as competitors are already leveraging generative AI to gain asymmetric advantages in information warfare.
To fully realize generative AI’s transformative potential, the DoD must embark on a strategic and flexible acquisition approach, underpinned by a comprehensive enterprise-wide strategy. This includes defining formal requirements, fostering cross-functional collaboration, investing significantly in AI training and education, and implementing agile contracting mechanisms that incentivize innovation while protecting national interests. Crucially, the DoD must embed ethical principles—such as meaningful human control, bias mitigation, and transparency—into every stage of AI development and deployment, ensuring accountability and building trust. This commitment to “responsible speed” will enable the U.S. military to effectively compete and deter in the contested information environment, safeguarding both its technological edge and its ethical standing on the global stage.
8. References
- Agile Defense. (2025, June 26). Agile Defense to Drive Mission-Critical AI and Data Solutions Under New DoD Contract. PR Newswire. Retrieved from https://www.prnewswire.com/news-releases/agile-defense-to-drive-mission-critical-ai-and-data-solutions-under-new-dod-contract-302492194.html 19
- AI.mil. (n.d.). AI Rapid Capabilities Cell. Retrieved from https://www.ai.mil/Initiatives/AI-Rapid-Capabilities-Cell/ 37
- AI.mil. (n.d.). Resources. Retrieved from https://www.ai.mil/About/Resources/ 23
- Brennan Center for Justice. (2025, April). Narrowing the National Security Exception to Federal AI Guardrails. Retrieved from https://www.brennancenter.org/our-work/analysis-opinion/narrowing-national-security-exception-federal-ai-guardrails-0 24
- Canada’s Security Intelligence Service. (n.d.). The Evolution of Disinformation: A Deepfake Future. Retrieved from https://www.canada.ca/en/security-intelligence-service/corporate/publications/the-evolution-of-disinformation-a-deepfake-future/implications-of-deepfake-technologies-on-national-security.html 14
- Chief Digital and Artificial Intelligence Office (CDAO). (n.d.). CDAO Announces Partnerships with Frontier AI Companies to Address National Security Challenges. Retrieved from https://www.ai.mil/Latest/News-Press/PR-View/Article/4242822/cdao-announces-partnerships-with-frontier-ai-companies-to-address-national-secu/ 7
- CIGI Online. (n.d.). Preparing for Next-Generation Information Warfare with Generative AI. Retrieved from https://www.cigionline.org/publications/preparing-for-next-generation-information-warfare-with-generative-ai/ 13
- Congress.gov. (n.d.). The Technology and AI Fight for 21st Century Operations in the Department of Defense. Retrieved from https://www.congress.gov/index.php/event/118th-congress/house-event/117013 38
- DAFCIO. (n.d.). Department of the Air Force Chief Information Officer Public Strategy. Retrieved from https://www.dafcio.af.mil/AI/Strategy/ 25
- DAU. (n.d.). Agile—the Pros and Cons. Retrieved from https://www.dau.edu/library/damag/july-august2018/agile-pros-and-cons 18
- DefenseScoop. (2025, July 29). SOCOM adds new advanced AI capabilities to tech wish list. Retrieved from https://defensescoop.com/2025/07/29/socom-sof-ai-artificial-intelligence-advanced-technologies-baa/ 6
- DefenseScoop. (2025, July 31). Army wants AI tech to help manage airspace operations. Retrieved from https://defensescoop.com/2025/07/31/army-rfi-ai-enabled-airspace-management/ 39
- DelBene, S. (n.d.). National Data Privacy Standard. Retrieved from https://delbene.house.gov/news/documentsingle.aspx?DocumentID=3728 34
- GoTechInsights. (n.d.). Modernizing DoD Acquisition Strategies for Speed, Efficiency, and Innovation. Retrieved from https://www.gotechinsights.com/blog/modernizing-dod-acquisition-strategies-for-speed-efficiency-and-innovation 16
- Google Cloud. (n.d.). Adversarial misuse of generative AI. Retrieved from https://cloud.google.com/blog/topics/threat-intelligence/adversarial-misuse-generative-ai 35
- Holland & Knight. (2025, July 24). America’s AI Action Plan: What’s In, What’s Out, What’s Next. Retrieved from https://www.hklaw.com/en/insights/publications/2025/07/americas-ai-action-plan-whats-in-whats-out-whats-next 30
- ISC2. (2025, February). Cybersecurity and the Deepfake, Disinformation and Misinformation. Retrieved from https://www.isc2.org/Insights/2025/02/Deepfakes-Disinformation-Misinformation 32
- Johnson, B. (2021, March 3). AI Systems: Unique Challenges for Defense Applications. NPS Systems Engineering. Retrieved from https://nps.edu/documents/105938399/0/Bonnie+Johnson+ARS2021+Presentation.pdf/b17e321f-a6b4-83b6-4208-2fb96101a229?t=1615494176294 20
- Marcellino, W., Welch, J., Clayton, B., Webber, S., & Goode, T. (2025). Acquiring Generative Artificial Intelligence to Improve U.S. Department of Defense Influence Activities. RAND Corporation, RR-A3157-1. Retrieved from https://www.rand.org/pubs/research_reports/RRA3157-1.html 9
- Marcellino, W., Welch, J., Clayton, B., Webber, S., & Goode, T. (2025). Acquiring Generative Artificial Intelligence to Improve U.S. Department of Defense Influence Activities (Research Brief). RAND Corporation, RBA3157-1. Retrieved from https://www.rand.org/pubs/research_briefs/RBA3157-1.html 4
- MERICS. (n.d.). China’s drive toward self-reliance in Artificial Intelligence: Chips, large language models. Retrieved from https://merics.org/en/report/chinas-drive-toward-self-reliance-artificial-intelligence-chips-large-language-models 3
- MITRE. (n.d.). Breaking Barriers in Defense Acquisition. Retrieved from https://www.mitre.org/news-insights/publication/barriers-defense-acquisition 17
- National Security Commission on AI. (2021). Final Report. 25
- Office of the Under Secretary of Defense for Research and Engineering. (2024, October 26). U.S. Department of Defense Responsible Artificial Intelligence Strategy and Implementation Pathway. Retrieved from https://media.defense.gov/2024/Oct/26/2003571790/-1/-1/0/2024-06-RAI-STRATEGY-IMPLEMENTATION-PATHWAY.PDF 22
- RAND Corporation. (2023). Ethical Considerations for U.S. Department of Defense Influence Operations. RR-A1969-1. Retrieved from https://www.rand.org/pubs/research_reports/RRA1969-1.html 27
- SocRadar. (n.d.). Adversarial Misuse of AI: How Threat Actors Leverage AI. Retrieved from https://socradar.io/adversarial-misuse-of-ai-how-threat-actors-leverage-ai/ 36
- Sociable. (n.d.). Pentagon to acquire generative AI for influence activities: RAND. Retrieved from https://sociable.co/military-technology/pentagon-acquire-generative-ai-influence-activities-rand/ 1
- State Department. (n.d.). Political Declaration on Responsible Military Use of Artificial Intelligence and Autonomy. Retrieved from https://www.state.gov/bureau-of-arms-control-deterrence-and-stability/political-declaration-on-responsible-military-use-of-artificial-intelligence-and-autonomy 26
- Taylor Wessing. (2025). Ethics and Regulation of AI in Defence Technology. Retrieved from https://www.taylorwessing.com/en/interface/2025/defence-tech/ethics-and-regulation-of-ai-in-defence-technology 11
- The Army. (n.d.). Innovating Defense: Generative AI’s Role in Military Evolution. Retrieved from https://www.army.mil/article/286707/innovating_defense_generative_ais_role_in_military_evolution 8
- The Brookings Institution. (2025, February 1). What to make of the Trump administration’s AI action plan?. Retrieved from https://www.brookings.edu/articles/what-to-make-of-the-trump-administrations-ai-action-plan/ 28
- The Forge. (n.d.). Data Dominance in Modern Warfare: The Crucial Role of AI and Data Analytics. Retrieved from https://theforge.defence.gov.au/article/data-dominance-modern-warfare-crucial-role-ai-and-data-analytics 12
- The White House. (2025, July). America’s AI Action Plan. Retrieved from https://www.whitehouse.gov/wp-content/uploads/2025/07/Americas-AI-Action-Plan.pdf 2
- U.S. Department of the Air Force & MIT. (n.d.). AI Acquisition Guidebook. Retrieved from https://atarc.org/wp-content/uploads/2022/11/Air-Force-MIT-Guidboik..pdf 40
- Warren, E. (n.d.). Warren, Schmitt Renew Bipartisan Fight for More Competition in Pentagon’s AI and Cloud Contracting. Retrieved from https://www.warren.senate.gov/newsroom/press-releases/warren-schmitt-renew-bipartisan-fight-for-more-competition-in-pentagons-ai-and-cloud-contracting 21
- WeForum. (2025, July). AI, geopolitics and data centres: the new frontiers of technological rivalry. Retrieved from https://www.weforum.org/stories/2025/07/ai-geopolitics-data-centres-technological-rivalry/ 5
- Wiley Rein LLP. (2025, July 24). White House Launches AI Action Plan and Executive Orders to Promote Innovation, Infrastructure, and International Diplomacy and Security. Retrieved from https://www.wiley.law/alert-White-House-Launches-AI-Action-Plan-and-Executive-Orders-to-Promote-Innovation-Infrastructure-and-International-Diplomacy-and-Security 29
- Wasil, S. & Miotti, R. (2024, February 13). Deepfakes: A Growing Threat to Global Security and Personal Liberty. arXiv. Retrieved from https://arxiv.org/html/2402.09581v1 31
- Wiley. (n.d.). How the Growth of AI Technology Contributes to Misinformation. Retrieved from https://pubsonline.informs.org/do/10.1287/LYTX.2025.01.06/full/ 33
Works cited
- Pentagon Looks To Acquire Generative AI for Influence Activities: RAND – The Sociable, accessed July 31, 2025, https://sociable.co/military-technology/pentagon-acquire-generative-ai-influence-activities-rand/
- AMERICA’S AI ACTION PLAN | The White House, accessed July 31, 2025, https://www.whitehouse.gov/wp-content/uploads/2025/07/Americas-AI-Action-Plan.pdf
- China’s drive toward self-reliance in artificial intelligence: from chips to large language models | Merics, accessed July 31, 2025, https://merics.org/en/report/chinas-drive-toward-self-reliance-artificial-intelligence-chips-large-language-models
- Acquiring Generative Artificial Intelligence for U.S. Department of Defense Influence Activities – RAND Corporation, accessed July 31, 2025, https://www.rand.org/content/dam/rand/pubs/research_briefs/RBA3100/RBA3157-1/RAND_RBA3157-1.pdf
- AI geopolitics and data centres in the age of technological rivalry, accessed July 31, 2025, https://www.weforum.org/stories/2025/07/ai-geopolitics-data-centres-technological-rivalry/
- SOCOM adds new advanced AI capabilities to tech wish list | DefenseScoop, accessed July 31, 2025, https://defensescoop.com/2025/07/29/socom-sof-ai-artificial-intelligence-advanced-technologies-baa/
- CDAO Announces Partnerships with Frontier AI Companies to Address National Security Mission Areas – Chief Digital and Artificial Intelligence Office, accessed July 31, 2025, https://www.ai.mil/Latest/News-Press/PR-View/Article/4242822/cdao-announces-partnerships-with-frontier-ai-companies-to-address-national-secu/
- Innovating Defense: Generative AI’s Role in Military Evolution | Article – Army.mil, accessed July 31, 2025, https://www.army.mil/article/286707/innovating_defense_generative_ais_role_in_military_evolution
- Acquiring Generative Artificial Intelligence to Improve U.S. Department of Defense Influence Activities | RAND, accessed July 31, 2025, https://www.rand.org/pubs/research_reports/RRA3157-1.html
- Acquiring Generative Artificial Intelligence for U.S. Department of Defense Influence Activities | RAND, accessed July 31, 2025, https://www.rand.org/pubs/research_briefs/RBA3157-1.html
- Ethics and regulation of AI in defence technology: navigating the legal and moral landscape, accessed July 31, 2025, https://www.taylorwessing.com/en/interface/2025/defence-tech/ethics-and-regulation-of-ai-in-defence-technology
- Data Dominance in Modern Warfare The Crucial Role of AI and Data Analytics – The Forge, accessed July 31, 2025, https://theforge.defence.gov.au/article/data-dominance-modern-warfare-crucial-role-ai-and-data-analytics
- Preparing for Next-Generation Information Warfare with Generative AI, accessed July 31, 2025, https://www.cigionline.org/publications/preparing-for-next-generation-information-warfare-with-generative-ai/
- Implications of Deepfake Technologies on National Security – Canada.ca, accessed July 31, 2025, https://www.canada.ca/en/security-intelligence-service/corporate/publications/the-evolution-of-disinformation-a-deepfake-future/implications-of-deepfake-technologies-on-national-security.html
- Acquiring Generative Artificial Intelligence to Improve U.S. Department of Defense Influence Activities – RAND, accessed July 31, 2025, https://www.rand.org/content/dam/rand/pubs/research_reports/RRA3100/RRA3157-1/RAND_RRA3157-1.pdf
- Modernizing DoD Acquisition: Strategies for Speed, Efficiency, and Innovation, accessed July 31, 2025, https://www.gotechinsights.com/blog/modernizing-dod-acquisition-strategies-for-speed-efficiency-and-innovation
- Barriers in Defense Acquisition | MITRE, accessed July 31, 2025, https://www.mitre.org/news-insights/publication/barriers-defense-acquisition
- Agile—the Pros and Cons | http://www.dau.edu, accessed July 31, 2025, https://www.dau.edu/library/damag/july-august2018/agile-pros-and-cons
- Agile Defense to Drive Mission-Critical AI and Data Solutions Under New DoD Contract, accessed July 31, 2025, https://www.prnewswire.com/news-releases/agile-defense-to-drive-mission-critical-ai-and-data-solutions-under-new-dod-contract-302492194.html
- Artificial Intelligence Systems: Unique Challenges for Defense Applications, accessed July 31, 2025, https://nps.edu/documents/105938399/0/Bonnie+Johnson+ARS2021+Presentation.pdf/b17e321f-a6b4-83b6-4208-2fb96101a229?t=1615494176294
- Warren, Schmitt Renew Bipartisan Fight for More Competition in Pentagon’s AI and Cloud Contracting, accessed July 31, 2025, https://www.warren.senate.gov/newsroom/press-releases/warren-schmitt-renew-bipartisan-fight-for-more-competition-in-pentagons-ai-and-cloud-contracting
- us department of defense responsible artificial intelligence strategy and implementation pathway, accessed July 31, 2025, https://media.defense.gov/2024/Oct/26/2003571790/-1/-1/0/2024-06-RAI-STRATEGY-IMPLEMENTATION-PATHWAY.PDF
- Chief Digital and Artificial Intelligence Office > About > Resources, accessed July 31, 2025, https://www.ai.mil/About/Resources/
- Narrowing the National Security Exception to Federal AI Guardrails, accessed July 31, 2025, https://www.brennancenter.org/our-work/analysis-opinion/narrowing-national-security-exception-federal-ai-guardrails-0
- 2023 Department of Defense Data, Analytics, and Artificial Intelligence Adoption Strategy, accessed July 31, 2025, https://www.dafcio.af.mil/AI/Strategy/
- Political Declaration on Responsible Military Use of Artificial Intelligence and Autonomy, accessed July 31, 2025, https://www.state.gov/bureau-of-arms-control-deterrence-and-stability/political-declaration-on-responsible-military-use-of-artificial-intelligence-and-autonomy
- Planning Ethical Influence Operations – RAND, accessed July 31, 2025, https://www.rand.org/pubs/research_reports/RRA1969-1.html
- What to make of the Trump administration’s AI Action Plan – Brookings Institution, accessed July 31, 2025, https://www.brookings.edu/articles/what-to-make-of-the-trump-administrations-ai-action-plan/
- White House Launches AI Action Plan and Executive Orders to Promote Innovation, Infrastructure, and International Diplomacy and Security – Wiley Rein, accessed July 31, 2025, https://www.wiley.law/alert-White-House-Launches-AI-Action-Plan-and-Executive-Orders-to-Promote-Innovation-Infrastructure-and-International-Diplomacy-and-Security
- America’s AI Action Plan: What’s In, What’s Out, What’s Next – Holland & Knight, accessed July 31, 2025, https://www.hklaw.com/en/insights/publications/2025/07/americas-ai-action-plan-whats-in-whats-out-whats-next
- Combatting deepfakes: Policies to address national security threats and rights violations, accessed July 31, 2025, https://arxiv.org/html/2402.09581v1
- Cybersecurity and the Deepfake, Disinformation and Misinformation Challenge – ISC2, accessed July 31, 2025, https://www.isc2.org/Insights/2025/02/Deepfakes-Disinformation-Misinformation
- Protecting Society from AI-Generated Misinformation: A Guide for Ethical AI Use | Analytics Magazine – PubsOnLine, accessed July 31, 2025, https://pubsonline.informs.org/do/10.1287/LYTX.2025.01.06/full/
- Newsweek: Want Protection From AI? The First Step Is a National Privacy Law (Opinion), accessed July 31, 2025, https://delbene.house.gov/news/documentsingle.aspx?DocumentID=3728
- Adversarial Misuse of Generative AI – Google Cloud Blog, accessed July 31, 2025, https://cloud.google.com/blog/topics/threat-intelligence/adversarial-misuse-generative-ai
- The Adversarial Misuse of AI: How Threat Actors Are Leveraging AI for Cyber Operations, accessed July 31, 2025, https://socradar.io/adversarial-misuse-of-ai-how-threat-actors-leverage-ai/
- AI Rapid Capabilities Cell – Chief Digital and Artificial Intelligence Office, accessed July 31, 2025, https://www.ai.mil/Initiatives/AI-Rapid-Capabilities-Cell/
- The Technology and AI Fight for 21st Century Operations in the Department of Defense – Congress.gov, accessed July 31, 2025, https://www.congress.gov/index.php/event/118th-congress/house-event/117013
- Army wants AI tech to help manage airspace operations – DefenseScoop, accessed July 31, 2025, https://defensescoop.com/2025/07/31/army-rfi-ai-enabled-airspace-management/
- Artificial Intelligence Acquisition Guidebook – ATARC, accessed July 31, 2025, https://atarc.org/wp-content/uploads/2022/11/Air-Force-MIT-Guidboik..pdf