The intersection of artificial intelligence and defence technology represents one of the most complex and consequential regulatory challenges of our time. Governments and international bodies are grappling with fundamental questions about human oversight and the preservation of humanitarian law in warfare. This article examines the current state of AI regulation in defence technology, exploring legislative developments, ethical frameworks, and the path forward for responsible governance.
Military applications of AI are evolving rapidly. Lethal autonomous weapons systems (LAWS) attract much of the focus, but AI's military applications are far broader. AI can enhance logistics; intelligence, surveillance and reconnaissance; semi-autonomous and autonomous vehicles, including drones; cyber warfare; disinformation; and more.
These systems span both offensive and defensive roles, and the technologies are intended in part to augment or replace human operators, freeing them to perform more complex and cognitively demanding work. In addition, AI-enabled systems could:
- react significantly faster than systems that rely on operator input
- cope with an exponential increase in the amount of data available for analysis, and
- enable new concepts of operations such as swarming (ie co-operative behaviour in which unmanned vehicles autonomously coordinate to achieve a task), which could confer a tactical advantage by overwhelming adversary defensive systems; a simple illustrative sketch of such coordination follows below.
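As a purely illustrative aside, the sketch below shows how the co-operative behaviour described above can emerge from simple local rules: each simulated agent steers towards a shared objective while keeping a minimum separation from its neighbours. Every name, value, and rule here is hypothetical and invented for the example; it is not drawn from any actual defence system.

```python
# Minimal, purely illustrative sketch of co-operative "swarming" behaviour:
# each simulated agent steers towards a shared objective while keeping a
# minimum separation from its neighbours. All values are hypothetical.
import math
import random

NUM_AGENTS = 10
OBJECTIVE = (100.0, 100.0)   # shared task location (hypothetical)
MIN_SEPARATION = 5.0         # desired spacing between agents
STEP_SIZE = 1.0

# Start agents at random positions near the origin.
agents = [[random.uniform(0, 20), random.uniform(0, 20)] for _ in range(NUM_AGENTS)]

def step(agents):
    """Advance every agent one time step using two local rules:
    (1) move towards the shared objective, (2) move away from crowded neighbours."""
    new_positions = []
    for i, (x, y) in enumerate(agents):
        # Rule 1: attraction towards the common objective.
        dx, dy = OBJECTIVE[0] - x, OBJECTIVE[1] - y
        dist = math.hypot(dx, dy) or 1.0
        vx, vy = dx / dist, dy / dist

        # Rule 2: repulsion from any neighbour that is too close.
        for j, (ox, oy) in enumerate(agents):
            if i == j:
                continue
            sep = math.hypot(x - ox, y - oy)
            if 0 < sep < MIN_SEPARATION:
                vx += (x - ox) / sep
                vy += (y - oy) / sep

        norm = math.hypot(vx, vy) or 1.0
        new_positions.append([x + STEP_SIZE * vx / norm, y + STEP_SIZE * vy / norm])
    return new_positions

for _ in range(200):
    agents = step(agents)

print("Final positions:", [(round(x, 1), round(y, 1)) for x, y in agents])
```

Even this toy example hints at why such behaviour is difficult to oversee: no single agent is "in charge", and the collective outcome emerges from many small, automated interactions.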
Ethical and legal considerations could not be more important in shaping the evolution of military AI.
The current legislative landscape
United States: federal and defense approaches to AI governance and ethics
The US military has long relied on technological superiority for national security. The 2018 and 2022 US National Defense Strategies highlight AI as critical for maintaining military superiority. Examples of its AI-enabled weapons systems include the MQ-9 Reaper drone, which uses AI for target identification and tracking, and the Sea Hunter, an autonomous naval vessel designed for anti-submarine warfare.
In October 2023, President Biden issued Executive Order 14110 on Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence, establishing foundational requirements for AI safety and security across federal agencies, including defence applications. The order required federal agencies to establish AI governance structures and mandated safety evaluations for AI systems that could pose risks to national security. The Order was, however, revoked by President Trump in January 2025, and the White House recently published its AI Action Plan, which focuses less on safe AI development and more on growth.
The US Department of Defense (DOD) has implemented its own AI ethics principles through the Responsible AI Strategy and Implementation Pathway which promotes human-machine teaming rather than fully autonomous systems. The framework emphasises integrating AI technologies in a lawful, ethical, and accountable manner to maintain military advantage and trust among allies and partners. The Pentagon's approach focuses on "meaningful human control" over lethal decisions, aligning with broader international humanitarian law principles.
Congressional oversight has intensified, with various committees scrutinising developments closely. The National Defense Authorization Act directs the DOD to accelerate the development and responsible integration of AI technologies across military operations, emphasising human oversight and ethical use. It mandates pilot programs, research, and collaboration with allies to ensure AI systems are safe, secure, and interoperable. Both the House and Senate Armed Services Committees have held hearings on AI governance, though comprehensive federal AI legislation remains under development.
The US also proposed a non-binding Code of Conduct for LAWS to encourage responsible behaviour and adherence to legal standards, but opposed any pre-emptive ban on such systems. In February 2023, at the Summit on Responsible Artificial Intelligence in the Military Domain (REAIM) in The Hague, the US launched the Political Declaration on Responsible Military Use of Artificial Intelligence and Autonomy, which provides a normative framework to guide the ethical and responsible development, deployment, and use of AI in military settings. It emphasises compliance with international law, particularly international humanitarian law, and underscores the importance of maintaining human accountability in AI-enabled military operations. As of November 2023, 45 countries had endorsed the declaration, but it was disavowed by the Trump administration because, like the AI Executive Order and various other Biden-era AI policies, it was viewed as restrictive to innovation.
United Kingdom: principles-based regulation
The United Kingdom has taken a proactive and structured approach towards the ethical and regulatory challenges posed by the integration of AI in defence technology. Central to this approach is the UK Ministry of Defence's (MOD) commitment to ensuring that AI systems are deployed responsibly, aligned with international humanitarian law, and underpinned by clear ethical principles. The UK Defence AI Strategy, published in 2022, focuses on transforming the MOD into an 'AI-ready' organisation by developing the necessary skills, technical enablers, and research and development programmes to accelerate the adoption of AI-enabled systems and capabilities. It underscores the importance of maintaining human judgement in critical decisions, ensuring transparency, and mitigating risks associated with autonomous systems in military contexts. The strategy also aims to strengthen the UK's defence and security AI ecosystem and shape global AI developments to promote security, stability, and democratic values.
Ethical oversight in the UK defence sector is now formally embedded within Joint Service Publication 936 (Part 1), which outlines the MOD's AI governance model and ethical principles. This framework integrates ethical considerations such as accountability, reliability, fairness, and respect for human rights into the lifecycle of AI-enabled systems. It mandates structured ethical assessments and a tiered risk management process to ensure responsible AI development. Crucially, the MOD reiterates the requirement for "meaningful and informed human involvement" in the operation of AI systems, especially those with the potential to cause harm.
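As a hypothetical illustration only, the sketch below shows what a "tiered" risk triage could look like in code: a system profile is mapped to an indicative scrutiny tier depending on whether it can affect the use of force and whether a human remains in the loop. The tiers, criteria, and names are invented for the example and are not taken from JSP 936 or any MOD process.

```python
# Hypothetical illustration of tiered risk assessment for an AI-enabled
# capability. Tiers and criteria are invented for the example; they are not
# taken from JSP 936 or any MOD process.
from dataclasses import dataclass

@dataclass
class SystemProfile:
    name: str
    can_affect_use_of_force: bool   # could outputs influence lethal effects?
    human_in_the_loop: bool         # is a human decision required before action?
    processes_personal_data: bool   # relevant to rights-based review

def assess_risk_tier(profile: SystemProfile) -> str:
    """Map a system profile to an illustrative risk tier, highest concern first."""
    if profile.can_affect_use_of_force and not profile.human_in_the_loop:
        return "Tier 1 - highest scrutiny: force-related with no human in the loop"
    if profile.can_affect_use_of_force:
        return "Tier 2 - enhanced scrutiny: force-related with human oversight"
    if profile.processes_personal_data:
        return "Tier 3 - standard scrutiny: rights-relevant support function"
    return "Tier 4 - baseline scrutiny: routine support function"

if __name__ == "__main__":
    examples = [
        SystemProfile("logistics-planner", False, True, False),
        SystemProfile("targeting-aid", True, True, False),
    ]
    for p in examples:
        print(f"{p.name}: {assess_risk_tier(p)}")
```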
In practical terms, the UK adopts a multi-stakeholder, evidence-led model to develop standards and safeguards around AI in defence. This includes collaboration across government departments, industry, and academia, as well as engagement with international bodies such as the UN Convention on Certain Conventional Weapons (CCW), under which LAWS are discussed. These partnerships help the UK contribute to global norm-setting and ensure its domestic frameworks remain interoperable with allied nations and compliant with international law.
While the MOD's governance is distinct from civilian regulatory regimes, it is consistent with the UK government's broader "pro-innovation" approach to AI regulation, as laid out in a 2023 White Paper. The cross-sector strategy prioritises safety, accountability, and contestability, without imposing heavy-handed legislation. This allows sectors like defence to lead on tailored governance while maintaining coherence with national AI principles. Together, these frameworks reflect a unified ethical direction, balancing technological advancement with legal responsibility and moral integrity.
European Union, Council of Europe and UN negotiations
The European Union's Artificial Intelligence Act (Regulation (EU) 2024/1689) explicitly excludes military applications under Article 2(3). Although defence-specific systems are exempt, the principles embedded in the AI Act provide a normative benchmark that can inform military AI governance indirectly. In any dual-use scenario, however, the AI Act applies and consequently requires comprehensive transparency measures and thorough assessments of high-risk systems.
One of the core legal concepts in the AI Act is the requirement for human oversight of high-risk AI systems, which directly aligns with the broader ethical principle of 'meaningful human control' (MHC) in defence contexts. MHC mandates that decisions involving the use of force must remain under human authority. Comparably, the AI Act requires that operators of high-risk systems be able to interpret and intervene in AI decision-making processes. The AI Act's documentation requirements for providers of high-risk AI systems, and its principles of transparency and auditability, are equally relevant in dual-use scenarios and are critical in defence applications, where decisions may have grave consequences. Such documentation, and in particular auditability, can facilitate post-operation reviews, legal accountability, and compliance with international humanitarian law.
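To make the "interpret and intervene" idea more concrete, the following is a minimal sketch, assuming a hypothetical decision-support tool: the system's recommendation is surfaced together with a human-readable rationale, nothing is executed without explicit operator approval, and each outcome is written to an audit log. The structures and function names are invented for illustration and do not reflect the AI Act's precise legal requirements or any real system.

```python
# Purely illustrative sketch of a "human oversight" gate: an AI recommendation
# is never acted on unless a human operator has reviewed its explanation and
# explicitly approved it. All names and structures are hypothetical.
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class Recommendation:
    action: str          # what the system proposes
    confidence: float    # model confidence in the proposal
    rationale: str       # human-readable explanation for the operator

def human_review(rec: Recommendation) -> bool:
    """Present the recommendation to an operator and return their decision.
    Here this is a console prompt; in practice it would be a controlled interface."""
    print(f"Proposed action : {rec.action}")
    print(f"Confidence      : {rec.confidence:.0%}")
    print(f"Rationale       : {rec.rationale}")
    answer = input("Approve this action? [y/N] ").strip().lower()
    return answer == "y"

def execute_with_oversight(rec: Recommendation, audit_log: list) -> None:
    """Only act if a human approves; record the outcome either way."""
    approved = human_review(rec)
    audit_log.append({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "action": rec.action,
        "approved_by_human": approved,
    })
    if approved:
        print(f"Executing: {rec.action}")
    else:
        print("Action withheld pending further human review.")

if __name__ == "__main__":
    log: list = []
    rec = Recommendation(
        action="flag sensor track 42 for further analysis",
        confidence=0.87,
        rationale="track pattern matches known reconnaissance profile",
    )
    execute_with_oversight(rec, log)
    print("Audit log:", log)
```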
The legal framework of the AI Act also offers a model for structured ethical assessments of military AI, encouraging the adoption of practices such as tiered risk analysis and ethical impact assessments. The AI Act's allocation of responsibilities to distinct stakeholders is likewise relevant to the (dual-)use of AI in a defence context, where assigning legal responsibility for autonomous decisions remains a complex challenge.
The AI Act might therefore serve as a guideline for the defence sector even beyond the dual-use cases in which it applies directly to the relevant AI functionalities. Applying its principles can strengthen ethical rules for AI-supported operations in defence and provide a sound and comprehensive framework for the use of AI in this field.
The European Parliament has been particularly active in addressing autonomous weapons systems through resolutions. A January 2021 report urged a legal EU framework mandating "meaningful human control" over military AI and prohibiting autonomous lethal systems. The 2020 resolution A9-0186 recommended robust oversight of defence AI and consistency with international humanitarian law and international human rights law. It called for legally enforceable norms and called on the Commission to propose binding measures addressing autonomy in targeting and deployment functions.
The Council of Europe's 2024 Framework Convention on AI upholds human rights, democracy, and the rule of law throughout the AI lifecycle. It obliges parties to adopt domestic measures, whether legislative, administrative, or otherwise, as a baseline for regulating AI. While national security and defence applications are exempted from direct obligations, parties must still ensure compliance with broader international human rights and rule-of-law obligations under the ECHR and other treaties.
Global initiatives
At the global level, states parties to the CCW, which restricts or prohibits specific types of conventional weapons considered to cause unnecessary suffering to combatants or to affect civilians indiscriminately, have been discussing LAWS since 2014. The Group of Governmental Experts (GGE) established under the CCW has addressed emerging technologies in the area of LAWS and has emphasised the necessity of human responsibility and accountability, ensuring that decisions regarding the use of force remain under human control throughout the lifecycle of the weapon. The GGE also advocates the development of a normative and operational framework to address the challenges posed by LAWS, including the establishment of legal reviews and risk assessments to ensure compliance with international humanitarian law. Some countries and inter-governmental organisations have pushed in these discussions for a pre-emptive ban on such systems. However, consensus has proved elusive, as some countries oppose outright bans, favouring regulation or codes of conduct instead.
UN Secretary-General António Guterres has repeatedly called for the conclusion, by 2026, of a binding instrument prohibiting LAWS that operate without human control or oversight. Without a binding international framework, there is a risk of an arms race in autonomous weapons, which could lead to increased global instability and security challenges. However, some countries are hesitant to commit to a binding international agreement because of the strategic military advantage and technological competitiveness that LAWS may offer, and prefer to regulate such systems under national frameworks.
Ethical frameworks and normative standards
Law and policy frameworks across jurisdictions consistently highlight foundational ethical principles for defence AI:
- Meaningful human control: the decision to use force must rest with a human, with autonomy limited to non-lethal support functions. This principle requires clear definition of what constitutes "meaningful control" - whether real-time engagement, pre-mission authority, or reviewability.
- Distinction and proportionality: AI systems must recognise combatant status and ensure that combatants and civilians are properly differentiated. They must also evaluate proposed military action against the proportionality standard of international humanitarian law, so that any action is proportionate to the threat. These requirements reflect fundamental principles of international humanitarian law.
- Transparency and traceability: operators must document AI decisions, and oversight institutions must have visibility into system design and operation (see the illustrative sketch after this list). However, this principle faces challenges when military AI systems are classified, limiting ethical scrutiny.
- Accountability and responsibility: determining who is responsible when AI systems make autonomous decisions is complex. If an AI system causes unintended harm, assigning liability becomes challenging, leading to concerns about accountability in military operations. Legal regimes must clarify fault attribution - whether to the commander, manufacturer, programmer, or deploying state. AI decisions made in milliseconds pose particular challenges for assigning legal blame.
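As a final illustration, the sketch below (with hypothetical field names throughout) shows the kind of structured decision record that the transparency, traceability, and accountability principles point towards: each AI-assisted output is linked to its inputs, the model version that produced it, and the accountable human, so that a post-operation review could reconstruct what happened. No real schema or standard is implied.

```python
# Hypothetical sketch of a traceability record for an AI-assisted decision,
# intended only to illustrate the kind of information that post-operation
# review and accountability processes might need. No real schema is implied.
import json
import hashlib
from datetime import datetime, timezone

def make_decision_record(model_version: str, inputs: dict, output: str,
                         human_authoriser: str) -> dict:
    """Build an auditable record linking a system output to its inputs,
    the model version that produced it, and the accountable human."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,
        "inputs": inputs,
        "output": output,
        "human_authoriser": human_authoriser,
    }
    # A hash of the record contents makes later tampering detectable.
    payload = json.dumps(record, sort_keys=True).encode("utf-8")
    record["record_hash"] = hashlib.sha256(payload).hexdigest()
    return record

if __name__ == "__main__":
    rec = make_decision_record(
        model_version="classifier-0.3 (hypothetical)",
        inputs={"sensor_id": "S-17", "track_quality": 0.92},
        output="classified as non-combatant vehicle",
        human_authoriser="operator-004",
    )
    print(json.dumps(rec, indent=2))
```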
Despite broad consensus on these principles, implementation faces significant challenges. The ambiguity in human oversight requirements creates uncertainty about what constitutes "meaningful control". The closed-source nature of military AI limits ethical scrutiny and raises concerns about the effectiveness of safeguards. Perhaps most critically, the lack of enforcement mechanisms means that compliance with resolutions and guidelines relies on voluntary adherence. There are broader concerns that AI has the potential to escalate conflicts. By lowering the risks to a state's own military personnel, such systems may lower the political barrier to deploying or using force, and hence make future wars more frequent. Rapid operational response, at the cost of substantive human oversight and time to reflect, can increase the likelihood of conflict escalation through swift, machine-led interactions.
Looking forward: critical junctures ahead
The current legislative landscape is characterised by competing approaches - from emerging regulatory frameworks in the United States and United Kingdom, to non-binding but influential EU resolutions and UN-level treaty negotiations. The consistent ethical principles across jurisdictions emphasise human oversight, transparency, accountability, and compliance with international humanitarian law. The next few years will be critical for the future of AI regulation in defence technology and may mark a turning point from normative rhetoric to enforceable regulation.
Find out more about issues and opportunities in the aerospace & defence sector and how we can help here.