Collateral Damage: Economics and ethics are casualties in the militarisation of AI

The current government places a central emphasis on technology and innovation in its evolving national security strategy and wider approach to governance. Labour proposes reviving a struggling British economy through investment in defence, with artificial intelligence (AI) featuring as an important component. Starmer’s premiership seems to align several objectives: economic growth, defence industrial development and technological innovation.

Rachel Reeves and John Healey hold roundtable with military company bosses, in front of Reaper drone at RAF Waddington, Feb 2025. Image: MoD

Taken together, these suggest that the government is positioning AI primarily in the context of war and defence innovation. This not only risks undermining the government’s stated ambitions of stability and economic growth; it also signals a strategy that prioritises speed over scrutiny, to the neglect of important ethical concerns.

The private defence industry has been positioned as an important pillar of this strategy. Before the Strategic Defence Review (SDR) was published, Chancellor Rachel Reeves and Defence Secretary John Healey initiated a Defence and Economic Growth Task Force to drive UK growth through defence capabilities and production. Arms companies are no longer framed merely as vital for national security but are now presented as engines of future prosperity. AI is central to this and is consistently highlighted in government communications: John Healey has explicitly acknowledged that AI will increasingly power the British military, while Keir Starmer stated that AI ‘will drive incredible change’.

UK focusing AI on military applications

The AI Action Plan, released in January 2025, explicitly links AI to economic growth. Although this included references to ‘responsible use and leadership’, the government has now shifted its emphasis to military applications at the expense of crucial policy areas. On the 4th of July, the Science and Technology Secretary Peter Kyle wrote to the Alan Turing Institute – Britain’s premier AI research organisation – to refocus research on military applications of AI. The Institute’s prior research agenda spanned environmental sustainability, health and national security; under this new directive, its priorities are being fundamentally narrowed.

BAE Systems Project Odyssey uses AI and VR to make training ‘more realistic’. Image: BAE Systems

Relatedly, the Industrial Strategy released by the government aims to ‘embolden’ the UK’s digital and technology economy, with £500 million to be delivered through a sovereign AI unit – this, however, will be focused on ‘building capacity in the most strategically important areas’. Given Peter Kyle’s redirection and the overwhelming emphasis the government has placed on AI’s productive capacity in war, it becomes clear that, as the case of the Alan Turing Institute shows, AI research in defence will come at the cost of socially beneficial research.

Take Britain’s bleak economic outlook: sluggish productivity; post-Brexit stagnation; strained public finances; mounting government debt repayments; surging costs of living and inflating house prices. There is little evidence to suggest that defence-led growth will yield impactful returns on this catalogue of challenges. No credible economist would advise, in the face of these challenges, that investing in defence and redirecting AI research in the name of national security will give a better return on investment.

Research conducted in America illustrates that investment in health, education, infrastructure and green initiatives is more likely to deliver better returns, both for individual incomes and for the country’s broader prospects. Similarly, Lord Blunkett (a former minister under Blair) pointed out that without GDP growth, raising defence spending as a share of GDP may not increase actual funding.

Concerning applications to health outcomes, in August the World Economic Forum reported AI’s striking potential to double accuracy in examining the brains of stroke patients, detect fractures often missed in overstretched departments and predict diseases with high confidence. This is critical given the NHS’s persistent challenges: long waiting times, underfunding, regional inequality, staff shortages and bureaucratic inertia.

Health and economic growth are closely related: healthier individuals are more productive, children attend school more consistently, preventative care lowers long-term costs – fundamentally, strong health systems add value to the economy and to our lives. Yet health is just one example. We are in the embryonic stages of AI development, and by prioritising research on military applications over civilian ones with public value, the government risks undermining, not fuelling, long-term economic growth.

Crucially, framing arms companies as a major engine of economic growth is wildly misleading and economically unfounded. Arms sales account for 0.004% of the Treasury’s total revenue and the defence industrial base accounts for only 1% of UK economic output. The sector is highly monopolised, so the benefits of ‘growth’ are concentrated among a handful of dominant corporations. Even then, the profit generated will not be reinvested in the UK. The biggest arms company in the UK – BAE Systems – is essentially a joint US-UK company, with most of its capital invested in the US and major shareholders drawn from US investment companies like BlackRock.

Prioritising speed over scrutiny

Beyond the economics, this is part of a wider strategy that signals a growing dismissal of ethical concerns, prioritising speed over scrutiny. The SDR acknowledged that technology is outpacing regulatory frameworks, noting that ‘the UK’s competitors are unlikely to adhere to common ethical standards’. In April 2025, Matthew Clifford – AI advisor to the PM – was quoted as saying ‘speed is everything’. While the Ministry of Defence (in 2022) promised to take an ‘ambitious, safe and responsible’ approach to the development of military AI, the current emphasis on speed sidelines important ethical concerns in the rush for military-technological superiority.

Militarily, the SDR sets out plans to invest in drones, autonomous systems and a £1 billion ‘digital targeting web’. A key foundational principle of International Humanitarian Law is the protection of civilians and their distinction from military targets. An AI-enabled ‘digital targeting web’ – like the one proposed in the SDR – connects sensors and weapons, enabling faster detection and destruction of human life. These networks would be able to identify and suggest targets faster than humans ever could, leaving soldiers, in the best case, minutes and, in the worst case, seconds to decide whether the drone should kill.

Digital Warfare: US and UK forces at the Combined Air Operations Center (CAOC), Al Udeid Air Base, Qatar

One notable example is the Maven Smart System, recently procured by NATO. According to the US think tank the Center for Security and Emerging Technology, the system makes it possible for small armies to make ‘1000 tactical decisions per hour’. Some legal scholars have pointed out that the prioritisation of speed within AI-powered battleground technology raises questions surrounding the preservation of meaningful human control and restraint in warfare. Israeli use of AI-powered automated targeting systems such as ‘Lavender’ during its assault and occupation of Gaza is illustrative of this point. Systems such as these have been highlighted as one of the factors behind the shockingly high civilian death toll there.

This problem is compounded by recent research showing that new large language models are prone to ‘hallucinate’ – producing outputs that are erroneous or fabricated. As these systems become embedded within military decision-making chains, the risk of escalation due to technical failure increases dramatically. A false signal, misread sensor or corrupted database could lead to erroneous targeting or unintended conflict escalation.

In sum, the UK’s current approach – predominantly framing AI’s utility through the lens of defence – risks squandering its broader social and economic potential. The redirection of public research institutes, the privileging of AI investment in military applications (or so-called ‘strategic areas’) and the emphasis on speed over scrutiny raise serious concerns. Ethically, the erosion of meaningful human control in battlefield decision-making, the risk of AI-driven conflict escalation and the disregard of international humanitarian principles point to a troubling trajectory. The UK risks drifting towards the ethical standards of Russia and Israel in its use of military AI. A government approach to AI grounded in human security (freedom from fear and want), not war, is not only more ethical but far more likely to generate sustainable economic growth for the United Kingdom.

  • Matthew Croft is a postgraduate student at King’s College London studying Conflict, Security and Development, with a particular interest in the ethics of national security and the politics of technology.

UK crossing the line as it implements use of AI for lethal targeting under Project Asgard

Despite grave ethical and legal concerns about the introduction of AI into decision making around the use of lethal force, the UK is rapidly pressing ahead with a number of programmes and projects to do so, with the British Army recently trialling a new AI-enabled targeting system called ASGARD as part of a NATO exercise in Estonia in May 2025.

A mock HQ utilising ASGARD at MoD briefing, July 2025. Crown Copyright 2025.

Last week, the Ministry of Defence (MoD) gave a briefing to selected media and industry ‘partners’ on Project ASGARD – which it describes as the UK’s programme to “double the lethality” of the British Army through the use of AI and other technology. ASGARD is not aimed at  producing or procuring a particular piece of equipment but rather at developing a communications and decision-making network that uses AI and other technology to vastly increase the speed of undertaking lethal strikes.

ASGARD is part of a £1 billion ‘Digital Targeting Web’ designed to “connect sensors, shooters, and decision-makers” across the land, sea, air, and space domains. “This is the future of warfare,” Maria Eagle, Minister for Defence Procurement and Industry told the gathering. 

According to one reporter present at the briefing, the prototype network “used AI-powered fire control software, low-latency tactical networks, and semi-autonomous target recommendation tools.” 

Janes reported that through ASGARD, “any sensor”, whether it be an unmanned aircraft system (UAS), radar, or human eye, is enabled by AI to identify and prioritise targets and then suggest weapons for destroying them. “Before Asgard it might take hours or even days. Now it takes seconds or minutes to complete the digital targeting chain,” Sir Roly Walker, Head of the British Army told the gathering.

Drones used in conjunction with ASGARD:
  • DART 250EW one-way attack drone
  • Helsing HX-2 one-way attack drone

While the system currently has a ‘human in the loop’, officials suggested that this could change in future, with The I Paper reporting ‘the system is technically capable of running without human oversight and insiders did not rule out allowing the AI to operate independently if ethical and legal considerations changed.’

How it works

A British Army report after the media event suggested that  “Asgard has introduced three new ways of fighting designed to find, strike and blunt enemy manoeuvre: 

  • A dismounted data system for use at company group and below.
  • The introduction of the DART 250 One Way Effector. This enables the targeting of enemy infrastructure three times further than the current UK land based deep fires rockets.
  • A mission support network to accelerate what is called the digital targeting or ‘kill’ chain."

According to a detailed and useful write-up of the Estonia exercise, ASGARD uses existing equipment currently in service alongside new systems, including Lattice command and control software from Anduril, which provides a ‘mesh network’ for communications, as well as Altra and Altra Strike software from Helsing, used to identify and ‘fingerprint’ targets. The report goes on:

“targets were passed to PRIISM which would conduct further development including legal review, collateral damage estimates, and weapon-to-target matching.”  

Helsing’s HX-2 drone was also used during the exercise, another indication that the UK is likely to acquire these one-way attack drones. DART 250, a UK-manufactured jet-powered one-way attack drone with a range of 250 km that can fly at more than 400 km/h, was also deployed as part of the exercise. The manufacturer says that it can fly accurately even when GPS signals are jammed and that it is fitted with a seeker that enables it to home in on and destroy jamming equipment.
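To make the chain of steps described above easier to follow, here is a minimal, purely illustrative sketch in Python of a generic AI-assisted targeting chain: sensor detections are filtered and prioritised by a model, a review stage estimates collateral risk, and a human operator is left a short window to accept or reject the machine’s suggestion. Every name, value and threshold below is hypothetical; it is not based on how ASGARD, Lattice, Altra or PRIISM actually work.

```python
# Purely illustrative sketch of a generic AI-assisted targeting chain, loosely
# mirroring the stages described in the reporting above (sense -> identify and
# prioritise -> review -> human decision). Every class, value and threshold is
# hypothetical; this is NOT how ASGARD, Lattice, Altra or PRIISM actually work.

from dataclasses import dataclass

@dataclass
class SensorDetection:
    sensor: str        # e.g. "UAS", "radar", "forward observer"
    location: tuple    # (lat, lon) of the detection
    confidence: float  # model confidence that the object is a valid military target

@dataclass
class CandidateTarget:
    detection: SensorDetection
    priority: float             # machine-assigned priority score
    suggested_weapon: str       # output of a weapon-to-target matching step
    collateral_estimate: float  # estimated risk to civilians, 0..1

def identify_and_prioritise(detections):
    """Stand-in for the AI 'fingerprinting' stage: drop low-confidence
    detections and rank the remainder by confidence."""
    candidates = []
    for d in detections:
        if d.confidence < 0.6:  # arbitrary illustrative threshold
            continue
        candidates.append(CandidateTarget(
            detection=d,
            priority=d.confidence,
            suggested_weapon="one-way attack drone",  # placeholder suggestion
            collateral_estimate=0.2,                  # placeholder estimate
        ))
    return sorted(candidates, key=lambda c: c.priority, reverse=True)

def passes_review(candidate):
    """Stand-in for the legal / collateral-damage review stage."""
    return candidate.collateral_estimate < 0.3  # arbitrary illustrative threshold

def human_decision(candidate, seconds_allowed):
    """The 'human in the loop': in a compressed kill chain the operator may have
    only seconds to accept or reject the machine's suggestion."""
    print(f"Target at {candidate.detection.location}, "
          f"suggested weapon: {candidate.suggested_weapon}, "
          f"decision window: {seconds_allowed}s")
    return False  # in this sketch the human always declines

if __name__ == "__main__":
    detections = [
        SensorDetection("UAS", (59.4, 24.7), 0.9),
        SensorDetection("radar", (59.5, 24.8), 0.4),
    ]
    for candidate in identify_and_prioritise(detections):
        if passes_review(candidate):
            human_decision(candidate, seconds_allowed=30)
```

The point of the sketch is simply to show where the human sits in the chain: once the upstream stages run in seconds, the final approval step risks becoming a formality.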

AI: speed eroding oversight and accountability

The grave dangers of introducing AI into warfare, and in particular into decisions on the use of force, are by now well known. While arguments have been made for and against these systems for more than a decade, increasingly we are moving from a theoretical, future possibility to the real world: here, now, today.

While some believe, almost irrationally, in the powers and benefits of AI, in the real world AI-enabled systems remain error-prone and unreliable. AI is far from infallible and relies on training data which, time and time again, has led to serious mistakes through bias.

While systems like ASGARD may be able to locate tanks on an open plain in a well-controlled training exercise environment, the real world is very different. Most armed conflicts do not take place on remote battlefields but in complex urban environments. Relying on AI to choose military targets in such a scenario is fraught with danger.

Advocates of ASGARD and similar systems argue that the ‘need’ for speed in targeting decisions means that the use of AI brings enormous benefits.  And it is undoubtedly true that algorithms can process data much faster than humans. But speeding up such targeting decisions significantly erodes human oversight and accountability.  Humans in such circumstances are reduced to merely rubber-stamping the output of the machine.

Meanwhile, the Ministry of Defence confirmed that the next phase of ASGARD’s development has received government funding, while at the UN the UK continues to oppose the negotiation of a new legally binding instrument on autonomous weapons systems.

Autonomous Collaborative Platforms: The UK’s New Autonomous Drones 

BAE Systems concept for Tier 2 ACP

Following on from the MoD’s Defence Drone Strategy released in February (see our report here), the RAF has now published its ‘Autonomous Collaborative Platform Strategy’ as it works to develop, produce and deploy this new type of military drone.

The strategy defines Autonomous Collaborative Platforms (ACP) as types of uncrewed systems (drones) "which demonstrate autonomous behaviour and are able to operate in collaborative manner with other assets." The strategy argues that Reaper and the (soon-to-enter-service) Protector drones "are vulnerable in warfighting conflicts involving peer or near-peer adversary. Therefore, as a priority the RAF needs to go beyond RPAS [Remotely Piloted Air Systems] to develop ACP capabilities."

The plan argues that “through increasing use of autonomy, remote mission operators (commanders /supervisors) will be able to command an increasing number of AV [drones] within each ACP system.”

Underpinning the development is the notion that the "geopolitical climate demands that we move beyond the caution of the post-cold war world" and that therefore the RAF must "undertake activity in areas that are demanding, difficult or overtly hostile." While the Strategy sets out a variety of tasks for these new drones, it makes clear that a key focus is on "overwhelming an adversary’s air defences." ACPs are therefore not a defensive system but are designed from the outset to enable the UK to engage in attack.

Tiers for Fears

The strategy sets out three ‘Tiers’ of ACP based on their ability to survive in “high-risk” (i.e. defended) environments:

  • Tier 1 are disposable drones, with a life-cycle of one or very few missions;
  • Tier 2 are "attritable" (or "risk tolerant") drones, that is, expected to survive, but losses are acceptable;
  • Tier 3 are drones which have high strategic value and which, if lost, would significantly affect how the RAF will fight.

Diagram from Autonomous Collaborative Platform Strategy

Echoing the words of the Chief of the Air Staff Sir Richard Knighton before the Defence Select Committee earlier this year, the document states that a Tier 1 ACP will be operational "by the end of 2024", while Tier 2 systems will be part of the RAF combat force by 2030.  Read more

Proceed in Harmony: The Government replies to the Lords on AI in Weapon Systems

Click to open

Last December a select committee of the House of Lords published ‘Proceed with Caution’: a report setting out the findings of a year-long investigation into the use of artificial intelligence (AI) in weapon systems.

Members of the Lords committee were drawn entirely from the core of the UK’s political and security establishment, and their report was hardly radical in its conclusions.  Nevertheless, it made a number of useful recommendations and concluded that the risks from autonomous weapons are such that the government "must ensure that human control is consistently embedded at all stages of a system’s lifecycle, from design to deployment".  The Lords found that Ministry of Defence (MoD) claims to be "ambitious, safe, responsible" in its use of AI had "not lived up to reality".

The government subsequently pledged to reply to the Lords report, and on 21 February published its formal response.  Perhaps the best way of summarising the tone of the response is to quote from its concluding paragraph:  ““Proceed with caution”, the overall message of this [Lords] report, mirrors the MoD’s approach to AI adoption.”   There is little new in the government response and nothing in it will be of any surprise to observers and analysts of UK government policy on AI and autonomous technologies.  The response merely outlines how the government intends to follow the course of action it had already planned to take, reiterating the substance of past policy statements such as the Defence Artificial Intelligence Strategy and puffing up recent MoD activity and achievements in the military AI field.

As might be imagined, the response takes a supportive approach to recommendations from the Lords which are aligned to its own agenda, such as developing high-quality data sets, improving MoD’s AI procurement arrangements, and undertaking research into potential future AI capabilities.  On the positive side, it is encouraging to see that in some areas concerns over the risks and limitations of AI technologies are highlighted, for example in the need for review and rigorous testing of new systems.  MoD acknowledges that rigorous testing would be required before an operator could be confident in an AI system’s use and effect, that current procedures, including the Article 36 weapons review process, will need to be adapted and updated, and that changes in operational environment may require weapon systems to be retested.

The response also reveals that the government is working on a Joint Service Publication covering all the armed forces to give more concrete directions and guidance on implementing MoD’s AI ethical principles.  The document, ‘Dependable AI in Defence’, will set out the governance, accountabilities, processes and reporting mechanisms needed to translate ethical policies into tangible actions and procedures.  Drone Wars UK and other civil society organisations have long called for MoD to formulate such guidance as a priority.

In some areas the MoD has relatively little power to meet the committee’s recommendations, such as in adjusting government pay scales to match market rates and attract qualified staff to work on MoD AI projects.  Here the rejoinder is little more than flannel, mentioning that “a range of steps” are being taken “to make Defence AI an attractive and aspirational choice.”

In other respects the Lords have challenged MoD’s approach more substantially, and in such cases these challenges are rejected in the government response.  This is so in relation to the Lords’ recommendation that the government should adopt a definition for autonomous weapons systems (AWS).  The section of the response dealing with this point lays bare the fact that the government’s priority “is to maximise our military capability in the face of growing threats”.  A rather unconvincing assertion that “the irresponsible and unethical behaviours and outcomes about which the Committee is rightly concerned are already prohibited under existing legal mechanisms” is followed by the real reason for the government’s opposition: “there is a strong tendency in the ongoing debate about autonomous weapons to assert that any official AWS definition should serve as the starting point for a new legal instrument prohibiting certain types of systems”.  Any international treaty which would outlaw autonomous weapon systems “represents a threat to UK Defence interests” the government argues.  The argument ends with a side-swipe at Russia and an attempt to shut down further debate by claiming that the debate is taking place “at the worst possible time, given Russia’s action in Ukraine and a general increase in bellicosity from potential adversaries.”  This basically seems to be saying that in adopting a definition for autonomous weapon systems the UK would be making itself more vulnerable to Russian military action.  Really? Read more

Proceed with caution: Lords warn over development of military AI and killer robots

Click to open report

The use of artificial intelligence (AI) for the purposes of warfare through the development of AI-powered autonomous weapon systems – ‘killer robots’ –  “is one of the most controversial uses of AI today”, according to a new report by an influential House of Lords Committee.

The committee, which spent ten months investigating the application of AI to weapon systems and probing the UK government’s plans to develop military AI systems, concluded that the risks from autonomous weapons are such that the government “must ensure that human control is consistently embedded at all stages of a system’s lifecycle, from design to deployment”.

Echoing concerns which Drone Wars UK has repeatedly raised, the Lords found that the stated aspiration of the Ministry of Defence (MoD) to be “ambitious, safe, responsible” in its use of AI “has not lived up to reality”, and that although MoD has claimed that transparency and challenge are central to its approach, “we have not found this yet to be the case”.

The cross-party House of Lords Committee on AI in Weapon Systems was set up in January 2023 at the suggestion of Liberal Democrat peer Lord Clement-Jones, and started taking evidence in March.    The committee heard oral evidence from 35 witnesses and received nearly 70 written evidence submissions, including evidence from Drone Wars UK.

The committee’s report is entitled ‘Proceed with Caution: Artificial Intelligence in Weapon Systems’ and ‘proceed with caution’ gives a fair summary of its recommendations.  The panel was drawn entirely from the core of the UK’s political and military establishment, and at times some members appeared to have difficulty in grasping the technical concepts underpinning the technologies behind autonomous weapons.  Under the circumstances the committee was never remotely likely to recommend that the government should not commit to the development of new weapons systems based on advanced technology, and in many respects its report provides a road-map setting out the committee’s views on how the MoD should go ahead in integrating AI into weapons systems and build public support for doing this.

Nevertheless, the committee has taken a sceptical view of the advantages claimed for autonomous weapons systems; has recognised the very real risks that they pose; and has proposed safeguards to mitigate the worst of the risks alongside a robust call for the government to “lead by example in international engagement on regulation of AWS [autonomous weapon systems]”.  Despite hearing from witnesses who argued that autonomous weapons “could be faster, more accurate and more resilient than existing weapon systems, could limit the casualties of war, and could protect “our people from harm by automating ‘dirty and dangerous’ tasks””, the committee was apparently unconvinced, concluding that “although a balance sheet of benefits and risks can be drawn, determining the net effect of AWS is difficult” – and that “this was acknowledged by the Ministry of Defence”.

Perhaps the  most important recommendation in the committee’s report relates to human control over autonomous weapons.  The committee found that:

The Government should ensure human control at all stages of an AWS’s lifecycle. Much of the concern about AWS is focused on systems in which the autonomy is enabled by AI technologies, with an AI system undertaking analysis on information obtained from sensors. But it is essential to have human control over the deployment of the system both to ensure human moral agency and legal compliance. This must be buttressed by our absolute national commitment to the requirements of international humanitarian law.

Read more

Developments on both sides of the Atlantic signal push to develop AI attack drones

Artist impression of crewed aircraft operating with autonomous drones. Credit: Lockheed Martin

Recent government and industry announcements signal clear intent by both the US and the UK to press ahead with the development of a new generation of AI attack drones despite serious concerns about the development of autonomous weapons. While most details are being kept secret, it is clear from official statements, industry manoeuvring and budget commitments that these new drones are expected to be operational by the end of the decade.

The current focus is the development of drones that were previously labelled ‘loyal wingman’ but are now being described either as ‘Collaborative Combat Aircraft’ (CCA) or ‘Autonomous Collaborative Platforms’ (ACP).  As always, the nomenclature around ‘drones’ is a battlefield in itself.  The concept for this type of drone is for one or more to fly alongside, or in the vicinity of, a piloted military aircraft, with the drones carrying out specifically designated tasks such as surveillance, electronic warfare, guiding weapons onto targets, or carrying out air-to-air or air-to-ground strikes.  Rather than being directly controlled by an individual on the ground, as current armed drones like the Reaper or Bayraktar are, these drones will fly autonomously. According to DARPA officials (using the beloved sports metaphor), these drones will allow pilots to direct squads of unmanned aircraft "like a football coach who chooses team members and then positions them on the field to run plays."

Next Generation

In May, the US Air Force issued a formal request for US defence companies to bid to build a new piloted aircraft to replace the F-22.  However, equally important for the ‘Next Generation Air Dominance (NGAD)’ program is the development of new autonomous drones and a ‘combat cloud’ communication network.  While the development of the drones is a covert programme, US Air Force Secretary Frank Kendall said they will be built “in parallel” to the piloted aircraft. Kendall publicly stated that the competition to develop CCA was expected to begin in Fiscal Year 2024 (note this runs from Oct 2023 to Sept 2024).

While the USAF is planning to build around 200 of the new crewed aircraft, Kendall told reporters that it expects to build around 1,000 of the drones. "This figure was derived from an assumed two CCAs per 200 NGAD platforms and an additional two for each of 300 F-35s for a total of 1,000," Kendall explained. Others expect even more of these drones to be built.  While the NGAD fighter aircraft itself is not expected to be operational until the 2030s, CCAs are expected to be deployed by the end of the 2020s.

It’s important to be aware that there will not be one type of drone built under this programme, but a range with different capabilities able to carry out different tasks.  Some of them will be ‘expendable’ – that is, designed for just one mission – something like the ‘one-way attack’ drones that we have seen increasingly used in Ukraine and elsewhere; some will be ‘attritable’, designed so that their loss in combat would not be severely damaging; while others, described as ‘exquisite’, will be more capable and specifically designed not to be lost during combat.  A number of companies have set out their proposals, with some even building prototypes and undertaking test flights. Read more