Collateral Damage: Economics and ethics are casualties in the militarisation of AI

The current government places a central emphasis on technology and innovation in its evolving national security strategy and its wider approach to governance. Labour proposes reviving a struggling British economy through investment in defence, with artificial intelligence (AI) featuring as an important component. Starmer’s premiership seems to align several objectives: economic growth, defence industrial development and technological innovation.

Rachel Reeves and John Healey hold roundtable with military company bosses, in front of Reaper drone at RAF Waddington, Feb 2025. Image: MoD

Taken together, these suggest that the government is positioning AI primarily in the context of war and defence innovation. This not only risks undermining the government’s stated ambitions of stability and economic growth; it also entrenches a strategy that prioritises speed over scrutiny, to the neglect of important ethical concerns.

The private defence industry has been positioned as an important pillar of this strategy. Before the Strategic Defence Review (SDR) was published, Chancellor Rachel Reeves and Defence Secretary John Healey initiated a Defence and Economic Growth Task Force to drive UK growth through defence capabilities and production. Arms companies are no longer presented merely as vital for national security but as engines of future prosperity. AI is central to this and is consistently highlighted in government communications: John Healey has explicitly acknowledged that AI will increasingly power the British military, while Keir Starmer has stated that AI ‘will drive incredible change’.

UK focusing AI on military applications

The AI Action Plan, released in January 2025, explicitly links AI to economic growth. Although it included references to ‘responsible use and leadership’, the government has since shifted its emphasis to military applications at the expense of crucial policy areas. On 4 July, the Science and Technology Secretary Peter Kyle wrote to the Alan Turing Institute – Britain’s premier AI research organisation – to refocus its research on military applications of AI. The Institute’s prior research agenda spanned environmental sustainability, health and national security; under this new directive, its priorities are being fundamentally narrowed.

BAE Systems Project Odyssey uses AI and VR to make training ‘more realistic’. Image: BAE Systems

Relatedly, the government’s Industrial Strategy aims to ‘embolden’ the UK’s digital and technologies economy, with £500 million to be delivered through a sovereign AI unit – this, however, will be focused on ‘building capacity in the most strategically important areas’. Given Peter Kyle’s redirection of the Alan Turing Institute and the overwhelming emphasis the government has placed on AI’s productive capacity in war, it is clear that defence-focused AI research will come at the cost of socially beneficial research.

Take Britain’s bleak economic outlook: sluggish productivity; post-Brexit stagnation; strained public finances; mounting government debt repayments; surging costs of living; and inflating house prices. There is little evidence to suggest that defence-led growth will yield meaningful returns against this catalogue of challenges. No credible economist would advise, in the face of these challenges, that investing in defence and redirecting AI research in the name of national security will give a better return on investment.

Research conducted in America illustrates that investment in health, education, infrastructure and green technology is more likely to deliver better returns, both for individual incomes and, more broadly, for the country’s prospects. Similarly, Lord Blunkett (a former minister under Blair) has pointed out that without GDP growth, raising defence spending as a share of GDP may not increase the actual funding.
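To see why, consider a minimal illustrative calculation. The GDP and percentage figures below are hypothetical assumptions chosen purely for illustration, not official statistics: without growth, raising the share of GDP spent on defence buys only a limited increase, and if the economy contracts, even a higher share can mean less money in absolute terms.

# Illustrative sketch only: all figures are hypothetical assumptions, not official data.
def defence_budget(gdp_bn, share_pct):
    # Absolute spending (in £bn) implied by a given share of GDP.
    return gdp_bn * share_pct / 100

baseline   = defence_budget(2500, 2.3)  # £57.5bn: 2.3% of a hypothetical £2,500bn GDP
no_growth  = defence_budget(2500, 2.5)  # £62.5bn: with a flat economy, the entire rise comes from the higher share
contracted = defence_budget(2250, 2.5)  # £56.25bn: a higher share of a smaller economy is less than the baseline

print(baseline, no_growth, contracted)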

Concerning applications to health, in August the World Economic Forum reported AI’s striking potential: doubling accuracy in the examination of the brains of stroke patients, detecting fractures often missed in overstretched departments and predicting diseases with high confidence. This is critical given the NHS’s persistent challenges: long waiting times, underfunding, regional inequality, staff shortages and bureaucratic inertia.

Health and economic growth are closely related: healthier individuals are more productive, children attend school more consistently, and preventative care lowers long-term costs – fundamentally, strong health systems add value to the economy and to our lives. Yet health is just one example. We are in the embryonic stages of AI development, and by prioritising research on military applications over civilian ones with public value, the government risks undermining, not fuelling, long-term economic growth.

Crucially, framing arms companies as a major engine of economic growth is wildly misleading and economically unfounded. Arms sales account for 0.004% of the Treasury’s total revenue, and the defence industrial base accounts for only 1% of UK economic output. The sector is highly monopolised, so the benefits of ‘growth’ are concentrated among a handful of dominant corporations. Even then, the profit generated will not be reinvested in the UK. The biggest arms company in the UK – BAE Systems – is essentially a joint US-UK company, with most of its capital invested in the US and its major shareholders being US investment companies such as BlackRock.

Prioritising speed over scrutiny

Beyond the economics, this is part of a wider strategy that signals a growing dismissal of ethical concerns, prioritising speed over scrutiny. The SDR acknowledged that technology is outpacing regulatory frameworks, noting that ‘the UK’s competitors are unlikely to adhere to common ethical standards’. In April 2025, Matthew Clifford – AI advisor to the PM – was quoted as saying ‘speed is everything’. While the Ministry of Defence promised in 2022 to take an ‘ambitious, safe and responsible’ approach to the development of military AI, the current emphasis on speed sidelines important ethical concerns in the rush for military-technological superiority.

Militarily, the SDR sets out plans to invest in drones and autonomous systems, including £1 billion for a ‘digital targeting web’. A foundational principle of International Humanitarian Law is the protection of civilians and the distinction between civilians and military targets. An AI-enabled ‘digital targeting web’ – like the one proposed in the SDR – connects sensors and weapons, enabling targets to be detected and killed far faster. These networks would be able to identify and suggest targets faster than humans ever could, leaving soldiers minutes in the best case, and seconds in the worst, to decide whether a drone should kill.

Digital Warfare: US and UK forces at the Combined Air Operations Center (CAOC), Al Udeid Air Base, Qatar.

One notable example is the Maven Smart System, recently procured by NATO. According to the US think tank the Center for Security and Emerging Technology, the system enables small armies to make ‘1000 tactical decisions per hour’. Some legal scholars have pointed out that prioritising speed in AI-powered battlefield technology raises questions about the preservation of meaningful human control and restraint in warfare. Israel’s use of AI-powered automated targeting systems such as ‘Lavender’ during its assault on and occupation of Gaza is illustrative of this point. Such systems have been highlighted as one of the factors behind the shockingly high civilian death toll there.

This problem is compounded by recent research showing that large language models ‘hallucinate’ – producing outputs that are erroneous or simply made up. As these systems become embedded within military decision-making chains, the risk of escalation through technical failure increases dramatically. A false signal, a misread sensor or a corrupted database could lead to erroneous targeting or unintended escalation of conflict.

In sum, the UK’s current approach – predominantly framing AI’s utility through the lens of defence – risks squandering its broader social and economic potential. The redirection of public research institutes, the privileging of AI investment in military applications (or so-called ‘strategic areas’) and the emphasis on speed over scrutiny raise serious concerns. Ethically, the erosion of meaningful human control in battlefield decision-making, the risk of AI-driven conflict escalation and the disregard for international humanitarian principles point to a troubling trajectory. The UK risks drifting towards the ethical standards of Russia and Israel in its use of military AI. A government approach to AI grounded in human security (freedom from fear and want), not war, is not only more ethical but far more likely to generate sustainable economic growth for the United Kingdom.

  • Matthew Croft is a postgraduate student at King’s College London studying Conflict, Security and Development, with a particular interest in the ethics of national security and the politics of technology.

UK crossing the line as it implements use of AI for lethal targeting under Project Asgard

Despite grave ethical and legal concerns about the introduction of AI into decision making around the use of lethal force, the UK is rapidly pressing ahead with a number of programmes and projects to do so, with the British Army recently trialling a new AI-enabled targeting system called ASGARD as part of a NATO exercise in Estonia in May 2025.

A mock HQ utilising ASGARD at MoD briefing, July 2025. Crown Copyright 2025.

Last week, the Ministry of Defence (MoD) gave a briefing to selected media and industry ‘partners’ on Project ASGARD – which it describes as the UK’s programme to “double the lethality” of the British Army through the use of AI and other technology. ASGARD is not aimed at producing or procuring a particular piece of equipment but rather at developing a communications and decision-making network that uses AI and other technology to vastly increase the speed of undertaking lethal strikes.

ASGARD is part of a £1 billion ‘Digital Targeting Web’ designed to “connect sensors, shooters, and decision-makers” across the land, sea, air, and space domains. “This is the future of warfare,” Maria Eagle, Minister for Defence Procurement and Industry told the gathering. 

According to one reporter present at the briefing, the prototype network “used AI-powered fire control software, low-latency tactical networks, and semi-autonomous target recommendation tools.” 

Janes reported that through ASGARD, “any sensor”, whether it be an unmanned aircraft system (UAS), radar, or human eye, is enabled by AI to identify and prioritise targets and then suggest weapons for destroying them. “Before Asgard it might take hours or even days. Now it takes seconds or minutes to complete the digital targeting chain,” Sir Roly Walker, Head of the British Army told the gathering.

Drones used in conjunction with ASGARD
DART 250EW one-way attack drone
Helsing HX-2 one-way attack drone

While the system currently has a ‘human in the loop’, officials suggested that this could change in future, with The i Paper reporting that ‘the system is technically capable of running without human oversight and insiders did not rule out allowing the AI to operate independently if ethical and legal considerations changed.’

How it works

A British Army report after the media event suggested that “Asgard has introduced three new ways of fighting designed to find, strike and blunt enemy manoeuvre”:

  • A dismounted data system for use at company group and below.
  • The introduction of the DART 250 One Way Effector, which enables the targeting of enemy infrastructure at three times the range of the UK’s current land-based deep fires rockets.
  • A mission support network to accelerate what is called the digital targeting or ‘kill’ chain.

According to a detailed and useful write-up of the Estonia exercise, ASGARD uses existing equipment currently in service alongside new systems including Lattice command and control software from Anduril which provides a ‘mesh network’ for communications, as well as Altra and Altra Strike software from Helsing used to identify and ‘fingerprint’ targets. The report goes on:

“targets were passed to PRIISM which would conduct further development including legal review, collateral damage estimates, and weapon-to-target matching.”  

Helsing’s HX-2 drone was also used during the exercise, a further indication that the UK is likely to acquire these one-way attack drones. DART 250, a UK-manufactured jet-powered one-way attack drone with a range of 250 km that can fly at more than 400 km/h, was also deployed as part of the exercise. The manufacturer says that it can fly accurately even when GPS signals are jammed and that it is fitted with a seeker that enables it to home in on and destroy jamming equipment.

AI: speed eroding oversight and accountability

The grave dangers of introducing AI into warfare, and in particular into decisions on the use of force, are by now well known. While arguments have been made for and against these systems for more than a decade, we are increasingly moving from a theoretical, future possibility to the real world: here, now, today.

While some believe almost irrationally in the powers and benefits of AI, in the real world AI-enabled systems remain error-prone and unreliable. AI is far from infallible and relies on training data whose biases have, time and time again, led to serious mistakes.

Systems like ASGARD may be able to locate tanks on an open plain in a well-controlled training exercise, but the real world is very different. Most armed conflicts do not take place on remote battlefields but in complex urban environments. Relying on AI to choose military targets in such a scenario is fraught with danger.

Advocates of ASGARD and similar systems argue that the ‘need’ for speed in targeting decisions means that the use of AI brings enormous benefits.  And it is undoubtedly true that algorithms can process data much faster than humans. But speeding up such targeting decisions significantly erodes human oversight and accountability.  Humans in such circumstances are reduced to merely rubber-stamping the output of the machine.

Meanwhile, the Ministry of Defence confirmed that the next phase of ASGARD’s development has received government funding while at the UN, the UK continues to oppose the negotiation of a new legally binding instrument on autonomous weapons systems.

Proceed in Harmony: The Government replies to the Lords on AI in Weapon Systems


Last December a select committee of the House of Lords published ‘Proceed with Caution’: a report setting out the findings of a year-long investigation into the use of artificial intelligence (AI) in weapon systems.

Members of the Lords committee were drawn entirely from the core of the UK’s political and security establishment, and their report was hardly radical in its conclusions. Nevertheless, it made a number of useful recommendations and concluded that the risks from autonomous weapons are such that the government “must ensure that human control is consistently embedded at all stages of a system’s lifecycle, from design to deployment”. The Lords found that Ministry of Defence (MoD) claims to be “ambitious, safe, responsible” in its use of AI had “not lived up to reality”.

The government subsequently pledged to reply to the Lords report, and on 21 February published its formal response. Perhaps the best way of summarising the tone of the response is to quote from its concluding paragraph: “‘Proceed with caution’, the overall message of this [Lords] report, mirrors the MoD’s approach to AI adoption.” There is little new in the government response and nothing in it will be of any surprise to observers and analysts of UK government policy on AI and autonomous technologies. The response merely outlines how the government intends to follow the course of action it had already planned to take, reiterating the substance of past policy statements such as the Defence Artificial Intelligence Strategy and puffing up recent MoD activity and achievements in the military AI field.

As might be imagined, the response takes a supportive approach to recommendations from the Lords which are aligned to its own agenda, such as developing high-quality data sets, improving MoD’s AI procurement arrangements, and undertaking research into potential future AI capabilities.  On the positive side, it is encouraging to see that in some areas concerns over the risks and limitations of AI technologies are highlighted, for example in the need for review and rigorous testing of new systems.  MoD acknowledges that rigorous testing would be required before an operator could be confident in an AI system’s use and effect, that current procedures, including the Article 36 weapons review process, will need to be adapted and updated, and that changes in operational environment may require weapon systems to be retested.

The response also reveals that the government is working on a Joint Service Publication covering all the armed forces to give more concrete directions and guidance on implementing MoD’s AI ethical principles.  The document, ‘Dependable AI in Defence’, will set out the governance, accountabilities, processes and reporting mechanisms needed to translate ethical policies into tangible actions and procedures.  Drone Wars UK and other civil society organisations have long called for MoD to formulate such guidance as a priority.

In some areas the MoD has relatively little power to meet the committee’s recommendations, such as in adjusting government pay scales to match market rates and attract qualified staff to work on MoD AI projects.  Here the rejoinder is little more than flannel, mentioning that “a range of steps” are being taken “to make Defence AI an attractive and aspirational choice.”

In other respects the Lords have challenged MoD’s approach more substantially, and in such cases these challenges are rejected in the government response.  This is so in relation to the Lords’ recommendation that the government should adopt a definition for autonomous weapons systems (AWS).  The section of the response dealing with this point lays bare the fact that the government’s priority “is to maximise our military capability in the face of growing threats”.  A rather unconvincing assertion that “the irresponsible and unethical behaviours and outcomes about which the Committee is rightly concerned are already prohibited under existing legal mechanisms” is followed by the real reason for the government’s opposition: “there is a strong tendency in the ongoing debate about autonomous weapons to assert that any official AWS definition should serve as the starting point for a new legal instrument prohibiting certain types of systems”.  Any international treaty which would outlaw autonomous weapon systems “represents a threat to UK Defence interests” the government argues.  The argument ends with a side-swipe at Russia and an attempt to shut down further debate by claiming that the debate is taking place “at the worst possible time, given Russia’s action in Ukraine and a general increase in bellicosity from potential adversaries.”  This basically seems to be saying that in adopting a definition for autonomous weapon systems the UK would be making itself more vulnerable to Russian military action.  Really? Read more

Proceed with caution: Lords warn over development of military AI and killer robots


The use of artificial intelligence (AI) for the purposes of warfare through the development of AI-powered autonomous weapon systems – ‘killer robots’ –  “is one of the most controversial uses of AI today”, according to a new report by an influential House of Lords Committee.

The committee, which spent ten months investigating the application of AI to weapon systems and probing the UK government’s plans to develop military AI systems, concluded that the risks from autonomous weapons are such that the government “must ensure that human control is consistently embedded at all stages of a system’s lifecycle, from design to deployment”.

Echoing concerns which Drone Wars UK has repeatedly raised, the Lords found that the stated aspiration of the Ministry of Defence (MoD) to be “ambitious, safe, responsible” in its use of AI “has not lived up to reality”, and that although MoD has claimed that transparency and challenge are central to its approach, “we have not found this yet to be the case”.

The cross-party House of Lords Committee on AI in Weapon Systems was set up in January 2023 at the suggestion of Liberal Democrat peer Lord Clement-Jones, and started taking evidence in March. The committee heard oral evidence from 35 witnesses and received nearly 70 written evidence submissions, including evidence from Drone Wars UK.

The committee’s report is entitled ‘Proceed with Caution: Artificial Intelligence in Weapon Systems’, and ‘proceed with caution’ gives a fair summary of its recommendations. The committee was drawn entirely from the core of the UK’s political and military establishment, and at times some members appeared to have difficulty grasping the technical concepts underpinning autonomous weapons. Under the circumstances the committee was never remotely likely to recommend that the government should not commit to the development of new weapons systems based on advanced technology, and in many respects its report provides a road map setting out the committee’s views on how the MoD should go ahead in integrating AI into weapons systems and build public support for doing so.

Nevertheless, the committee has taken a sceptical view of the advantages claimed for autonomous weapons systems; has recognised the very real risks that they pose; and has proposed safeguards to mitigate the worst of the risks alongside a robust call for the government to “lead by example in international engagement on regulation of AWS [autonomous weapon systems]”.  Despite hearing from witnesses who argued that autonomous weapons “could be faster, more accurate and more resilient than existing weapon systems, could limit the casualties of war, and could protect “our people from harm by automating ‘dirty and dangerous’ tasks””, the committee was apparently unconvinced, concluding that “although a balance sheet of benefits and risks can be drawn, determining the net effect of AWS is difficult” – and that “this was acknowledged by the Ministry of Defence”.

Perhaps the most important recommendation in the committee’s report relates to human control over autonomous weapons. The committee found that:

The Government should ensure human control at all stages of an AWS’s lifecycle. Much of the concern about AWS is focused on systems in which the autonomy is enabled by AI technologies, with an AI system undertaking analysis on information obtained from sensors. But it is essential to have human control over the deployment of the system both to ensure human moral agency and legal compliance. This must be buttressed by our absolute national commitment to the requirements of international humanitarian law.

Read more

Military AI: MoD’s timid approach to challenging ethical issues will not be enough to prevent harm

Papers released to Drone Wars UK by the Ministry of Defence (MoD) under the Freedom of Information Act reveal that progress in preparing ethical guidance for MoD staff working on military artificial intelligence (AI) projects is proceeding at a snail’s pace. As a result, MoD’s much-vaunted AI strategy and ethical principles are at risk of failing as the department races ahead to develop AI as a key military technology.

Minutes of meetings of MoD’s Ethical Advisory Panel show that although officials have repeatedly stressed the need to focus on implementation of AI programmes, the ethical framework and guidelines needed to ensure that AI systems are safe and responsible are still only in draft form and there is “not yet a distinct sense of a clear direction” as to how they will be developed.

The FOI papers also highlight concerns about the transparency of the panel’s work.  Independent members of the panel have repeatedly stressed the need for the panel to work in an open and transparent manner, yet MoD refuses to publish the terms of membership, meeting minutes, and reports prepared for the panel.  With the aim of remedying this situation, Drone Wars UK is publishing the panel documents released in response to our FOI request as part of this blog article (see pdf files at the end of the article).

The Ministry of Defence AI Ethics Advisory Panel

One of the aims of the Defence Artificial Intelligence Strategy, published in June 2022, was to set out MoD’s “clear commitment to lawful and ethical AI use in line with our core values”.  To help meet this aim MoD published a companion document, entitled ‘Ambitious, safe, responsible‘ alongside the strategy to represent “a positive blueprint for effective, innovative and responsible AI adoption”.

‘Ambitious, safe, responsible’ had two main foundations: a set of ethical principles to guide MoD’s use of AI and an Ethics Advisory Panel, described as “an informal advisory board” to assist with policy relating to the safe and responsible development and use of AI.  The document stated that the panel had assisted in formulating the ethical principles and listed the members of the panel, who are drawn from within Ministry of Defence and the military, industry, and universities and civil society.

The terms of reference for the panel were not published in the ‘Ambitious, safe, responsible’ document, but the FOI papers provided to Drone Wars UK show that it is tasked with advising on:

  • “The development, maintenance and application of a set of ethical principles for AI in Defence, which will demonstrate the MOD’s position and guide our approach to responsible AI across the department.
  • “A framework for implementing these principles and related policies / processes across Defence.
  • “Appropriate governance and decision-making processes to assure ethical outcomes in line with the department’s principles and policies”.

The ethical principles were published alongside the Defence AI Strategy, but more than two years after the panel first met – and despite a constant refrain at panel meetings on the need to focus on implementation – it has yet to make substantial progress on the second and third of these objectives. An implementation framework and the associated policies, governance and decision-making processes have yet to appear. This appears in no way to be due to shortcomings on the part of the panel, whose members seem to have a keen appetite for their work, but rather is the result of slow progress by MoD. In the meantime, work is proceeding at full speed on the development of AI systems in the absence of these key ethical tools.

The work of the panel

The first meeting of the panel, held in March 2021, was chaired by Stephen Lovegrove, the then Permanent Secretary at the Ministry of Defence. The panel discussed the MoD’s work to date on developing an AI Ethics framework and the panel’s role and objectives. The panel was to be a “permanent and ongoing source of scrutiny” and “should provide expert advice and challenge” to MoD, working through a regular quarterly meeting cycle. Read more

MoD AI projects list shows UK is developing technology that allows autonomous drones to kill

Omniscient graphic: ‘High Level Decision Making Module’ which integrates sensor information using deep probabilistic algorithms to detect, classify, and identify targets, threats, and their behaviours. Source: Roke

Artificial intelligence (AI) projects that could help to unleash new lethal weapons systems requiring little or no human control are being undertaken by the Ministry of Defence (MoD), according to information released to Drone Wars UK through a Freedom of Information Act request.

The development of lethal autonomous military systems – sometimes described as ‘killer robots’ – is deeply contentious and raises major ethical and human rights issues.  Last year the MoD published its Defence Artificial Intelligence Strategy setting out how it intends to adopt AI technology in its activities.

Drone Wars UK asked the MoD to provide it with the list of “over 200 AI-related R&D programmes” which the Strategy document stated the MoD was working on. Details of these programmes were not given in the Strategy itself, and the MoD evaded questions from Parliamentarians who had asked for more details of its AI activities.

Although the Defence Artificial Intelligence Strategy claimed that over 200 programmes were underway, only 73 are shown on the list provided to Drone Wars. Release of the names of some projects was refused on defence, security and/or national security grounds.

However, MoD conceded that a list of “over 200” projects was never held when the strategy document was prepared in 2022, explaining that “our assessment of AI-related projects and programmes drew on a data collection exercise that was undertaken in 2019 that identified approximately 140 activities underway across the Front-Line Commands, Defence Science and Technology Laboratory (Dstl), Defence Equipment and Support (DE&S) and other organisations”.  The assessment that there were at least 200 programmes in total “was based on our understanding of the totality of activity underway across the department at the time”.

The list released includes programmes for all three armed forces, including a number of projects related to intelligence analysis systems and to drone swarms, as well as more mundane ‘back office’ projects. It covers major multi-billion pound projects stretching over several decades, such as the Future Combat Air System (which includes the proposed new Tempest aircraft), new spy satellites, uncrewed submarines, and applications for using AI in everyday tasks such as predictive equipment maintenance, a repository of research reports, and a ‘virtual agent’ for administration.

However, the core of the list is a scheme to advance the development of AI-powered autonomous systems for use on the battlefield. Many of these are based around the use of drones as a platform – usually aerial systems, but also maritime drones and autonomous ground vehicles. A number of the projects on the list relate to the computerised identification of military targets through analysis of data from video feeds, satellite imagery, radar, and other sources. Using artificial intelligence / machine learning for target identification is an important step towards the development of autonomous weapon systems – ‘killer robots’ – which are able to operate without human control. Even when they are under nominal human control, computer-directed weapons pose a high risk of civilian casualties for a number of reasons, including the rapid speed at which they operate and the difficulty of understanding the often opaque ways in which they make decisions.

The government claims it “does not possess fully autonomous weapons and has no intention of developing them”. However, the UK has consistently declined to support proposals put forward at the United Nations to ban them.

Among the initiatives on the list are the following projects.  All of them are focused on developing technologies that have potential for use in autonomous weapon systems.  Read more