Collateral Damage: Economics and ethics are casualties in the militarisation of AI

The current government places a central emphasis on technology and innovation in its evolving national security strategy and wider approach to governance. Labour proposes reviving a struggling British economy through investment in defence, with artificial intelligence (AI) as a key component. Starmer’s premiership seems to align several objectives: economic growth, defence industrial development and technological innovation.

Rachel Reeves and John Healey hold roundtable with military company bosses, in front of Reaper drone at RAF Waddington, Feb 2025. Image: MoD

Taken together, these suggest that the government is positioning AI primarily in the context of war and defence innovation. This not only risks undermining the government’s stated ambitions of stability and economic growth, but also signals a strategy that prioritises speed over scrutiny, to the neglect of important ethical concerns.

The private defence industry has been positioned as an important pillar of this strategy. Before the Strategic Defence Review (SDR) was published, Chancellor Rachel Reeves and Defence Secretary John Healey initiated a Defence and Economic Growth Task Force to drive UK growth through defence capabilities and production. Arms companies are no longer presented merely as vital to national security but as engines of future prosperity. AI is central to this and is consistently highlighted in government communications: John Healey has explicitly acknowledged that AI will increasingly power the British military, while Keir Starmer has stated that AI ‘will drive incredible change’.

UK focusing AI on military applications

The AI Action Plan, released in January 2025, explicitly links AI to economic growth. Although it included references to ‘responsible use and leadership’, the government has since shifted its emphasis towards military applications at the expense of crucial policy areas. On 4 July, the Science and Technology Secretary Peter Kyle wrote to the Alan Turing Institute – Britain’s premier AI research organisation – directing it to refocus its research on military applications of AI. The Institute’s prior research agenda spanned environmental sustainability, health and national security; under this new directive, its priorities are being fundamentally narrowed.

BAE Systems Project Odyssey uses AI and VR to make training ‘more realistic’. Image: BAE Systems

Relatedly, the Industrial Strategy released by the government aims to ‘embolden’ the UK’s digital and technologies economy, with £500 million to be delivered through a sovereign AI unit – this, however, will be focused on ‘building capacity in the most strategically important areas’. Given Peter Kyle’s redirection and the overwhelming emphasis the government has placed on AI’s productive capacity in war, it is clear that, as the case of the Alan Turing Institute shows, defence-focused AI research will come at the cost of socially beneficial research.

Take Britain’s bleak economic outlook: sluggish productivity; post-Brexit stagnation; strained public finances; mounting government debt repayments; a surging cost of living; and inflated house prices. There is little evidence to suggest that defence-led growth will yield impactful returns on this catalogue of challenges. No credible economist would advise, in the face of these challenges, that investing in defence and redirecting AI research in the name of national security will give a better return than investment in civilian priorities.

Research conducted in the United States indicates that investment in health, education, infrastructure and green technology is more likely to deliver better returns, both for individual incomes and for the country’s broader prospects. Similarly, Lord Blunkett (a former minister under Blair) has pointed out that without GDP growth, raising defence spending as a share of GDP may not increase the actual funding by much.
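To see Blunkett’s point, consider the arithmetic: defence spending is the product of the GDP share and GDP itself, so a higher share of a stagnant or shrinking economy delivers little extra cash. A minimal sketch, using purely illustrative figures rather than official Treasury numbers:

```python
# Illustrative only: all figures below are assumptions, not official data.
gdp = 2_700e9            # assumed UK GDP of roughly £2.7 trillion
share_now = 0.023        # defence spending at 2.3% of GDP
share_target = 0.025     # target raised to 2.5% of GDP

spend_now = gdp * share_now               # cash spend today
spend_flat = gdp * share_target           # 2.5% of a flat GDP
spend_shrunk = gdp * 0.97 * share_target  # 2.5% of a GDP that falls 3%

print(f"Current spend:          £{spend_now / 1e9:.1f}bn")
print(f"2.5% with flat GDP:     £{spend_flat / 1e9:.1f}bn")
print(f"2.5% with GDP down 3%:  £{spend_shrunk / 1e9:.1f}bn")
# With no growth the uplift is only around £5bn; if GDP contracts,
# much of the headline rise in the share evaporates in cash terms.
```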

Concerning applications to health, in August the World Economic Forum reported AI’s striking potential: doubling the accuracy of brain-scan analysis in stroke patients, detecting fractures often missed in overstretched departments, and predicting diseases with high confidence. This is critical given the NHS’s persistent challenges: long waiting times, underfunding, regional inequality, staff shortages and bureaucratic inertia.

Health and economic growth are closely related: healthier individuals are more productive, children attend school more consistently, and preventative care lowers long-term costs – fundamentally, strong health systems add value to the economy and our lives. Yet health is just one example. We are in the embryonic stages of AI development, and by prioritising research on military applications over civilian ones with public value, the government risks undermining, not fuelling, long-term economic growth.

Crucially, framing arms companies as a major engine of economic growth is wildly misleading and economically unfounded. Arms sales account for 0.004% of the Treasury’s total revenue, and the defence industrial base accounts for only 1% of UK economic output. The sector is highly monopolised, so the benefits of ‘growth’ are concentrated among a handful of dominant corporations. Even then, the profit generated will not be reinvested in the UK. The biggest arms company in the UK – BAE Systems – is essentially a joint US-UK company, with most of its capital invested in the US and its majority shareholders being US investment companies such as BlackRock.

Prioritising speed over scrutiny

Beyond the economics, this is part of a wider strategy that signals a growing dismissal of ethical concerns, prioritising speed over scrutiny. The SDR acknowledged that technology is outpacing regulatory frameworks, noting that ‘the UK’s competitors are unlikely to adhere to common ethical standards’. In April 2025, Matthew Clifford – AI advisor to the PM – was quoted as saying ‘speed is everything’. While the Ministry of Defence promised in 2022 to take an ‘ambitious, safe and responsible’ approach to the development of military AI, the current emphasis on speed sidelines important ethical concerns in the rush for military-technological superiority.

Militarily, the SDR sets out plans to invest in drones and autonomous systems, including £1 billion for a ‘digital targeting web’. A key foundational principle of International Humanitarian Law is the protection of civilians and their distinction from military targets. An AI-enabled ‘digital targeting web’ – like the one proposed in the SDR – connects sensors and weapons, enabling faster detection and killing. These networks would be able to identify and suggest targets faster than humans ever could, leaving soldiers, in the best case, minutes – and in the worst, seconds – to decide whether a drone should kill.

Digital Warfare: US and UK forces at the Combined Air Operations Center (CAOC), Al Udeid Air Base, Qatar.

One notable example is the Maven Smart System, recently procured by NATO. According to the US think tank the Center for Security and Emerging Technology, the system enables small armies to make ‘1000 tactical decisions per hour’. Some legal scholars have pointed out that the prioritisation of speed in AI-powered battlefield technology raises questions about the preservation of meaningful human control and restraint in warfare. Israel’s use of AI-powered automated targeting systems such as ‘Lavender’ during its assault on and occupation of Gaza is illustrative of this point: such systems have been highlighted as one of the factors behind the shockingly high civilian death toll there.

This problem is compounded by recent research showing that new large language models ‘hallucinate’ – producing outputs that are erroneous or simply made up. As these systems become embedded within military decision-making chains, the risk of escalation due to technical failure increases dramatically. A false signal, a misread sensor or a corrupted database could lead to erroneous targeting or unintended conflict escalation.

In sum, the UK’s current approach – predominantly framing AI’s utility through the lens of defence – risks squandering its broader social and economic potential. The redirection of public research institutes, the privileging of AI investment in military applications (or so-called ‘strategic areas’) and the emphasis on speed over scrutiny raise serious concerns. Ethically, the erosion of meaningful human control in battlefield decision-making, the risk of AI-driven conflict escalation and the disregard of international humanitarian principles point to a troubling trajectory. The UK risks drifting towards the ethical standards of Russia and Israel in its use of military AI. A government approach to AI grounded in human security (freedom from fear and want), not war, is not only more ethical but far more likely to generate sustainable economic growth for the United Kingdom.

  • Matthew Croft is a postgraduate student at King’s College London studying Conflict, Security and Development, with a particular interest in the ethics of national security and the politics of technology.

Companies vying for share of purloined aid budget as UK plans to spend big on drones and military tech

Keir Starmer visits drone factory. Credit: Reuters

While behind-the-scenes wrangling over the final details of the latest Strategic Defence Review continues, the overall message is crystal clear: the UK intends to significantly increase military spending. To enable this, there have already been a number of government decisions designed to make funds available, in particular for new weapons technology and programmes.

In November, the Defence Secretary announced he was cutting a number of ‘outdated’ military programmes (including the infamous Watchkeeper drone) to make funds available for new military technology. The Chief of the Defence Staff, Admiral Tony Radakin, argued that “accelerating the disposal of legacy equipment is the logical approach to focus on the transition to new capabilities that better reflect changing technology and tactics.”

In a more ambitious money grab, PM Keir Starmer announced that he was cutting the UK’s aid budget to help increase military spending to 2.5% of GDP, saying he would use the released funds to “accelerate the adoption of cutting-edge capabilities.” Starmer argued that the aid cuts would mean an extra £13.4bn of military spending per year from 2027. Others, however, argued that in real terms the increase would be around £6bn per year. Many noted that whatever the boost to UK military spending, the cuts would significantly harm the world’s poorest people.
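The gap between the two figures is largely a question of baselines: an uplift measured against today’s cash budget looks far larger than the same uplift measured against spending that was already planned. A minimal sketch of that arithmetic, using assumed figures chosen purely for illustration, not official ones:

```python
# Purely illustrative: all figures are assumptions chosen to show how
# the same announcement can yield very different headline numbers.
current_budget = 56.9e9   # assumed defence budget today
planned_2027 = 64.3e9     # assumed spending already planned for 2027
announced_2027 = 70.3e9   # assumed new 2027 target after the aid cuts

uplift_vs_today = announced_2027 - current_budget
uplift_vs_plans = announced_2027 - planned_2027

print(f"Increase vs today's budget:  £{uplift_vs_today / 1e9:.1f}bn")
print(f"Increase vs existing plans:  £{uplift_vs_plans / 1e9:.1f}bn")
# ~£13.4bn against today's budget, but only ~£6bn of genuinely new money.
```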

Finally, there has been a concerted effort to ensure that banks, pension funds and other big investors – who have long accepted that military companies should be excluded from ethical investment portfolios – get back in line and give military companies full access to their funds. The government, it seems, is adamant that private as well as public funds are made available to such companies. Not unrelated to this move, universities are also coming under pressure to crack down on opposition to military company recruitment on campus.

Which drone companies are likely to benefit?

A number of newer and older military companies are likely to benefit from the coming increase in military spending and, in anticipation, we have seen a surge in the stock prices of many of the companies involved. While drones and related technology are only one part of the planned increase, several companies in this area stand to gain.

Helsing

Helsing is a new company set up by three AI experts in Berlin in 2021. Its website states that it was “founded to put ethics at the core of defence technology development” and insists that “artificial intelligence will be the key capability to keep liberal democracies from harm”.

HX-2 one-way attack drones stocked at Helsing’s factory

One of the company’s first products is the HX-2 attack drone. The HX-2 is a metre-long, electrically propelled, X-wing, one-way attack drone with a range of up to 100 km. The company says that on-board AI enables it to resist jamming and that multiple HX-2s can be assembled into swarms. The drone has been designed to be mass-producible, and Helsing announced in February 2025 that it had set up the first of what it is calling its ‘resilience factories’ in southern Germany to mass-produce 6,000 of the drones for Ukraine. Jane’s reported in December 2024 that Helsing was to set up a factory in the UK, and it is highly likely that the UK will order the HX-2 drone.

Anduril

Palmer Luckey with Anduril’s Fury drone

Although a little older than Helsing, Anduril too is a relatively new player in the defence industry. Co-founded in 2017 by technology entrepreneur Palmer Luckey, the company (named after a sword in The Lord of the Rings) and its co-founder have been subject to an extraordinary amount of adulatory media coverage.

The UK has already awarded Anduril a number of contracts, including a £30m deal in March 2025 to supply Altius 600m and Altius 700m drones to Ukraine, and it too announced this week plans to open a drone factory in the UK. Anduril is one of two companies left in the competition to supply the US Air Force with a new category of drone called the Collaborative Combat Aircraft (CCA). The UK too wants to acquire this type of drone to work in conjunction with its F-35 fighter aircraft and the future Tempest combat aircraft. Anduril also works closely with another US AI tech company, Palantir, on the development of AI-enabled intelligence and ‘battle-management’ systems similar in vein to Israel’s ‘Lavender’ and ‘Gospel’ systems. This too is an area that the UK is likely to want to fund.

BAE Systems

BAE Systems’ latest concept model for the UK’s ‘Autonomous Collaborative Platform’

The opposite of a newcomer, BAE Systems has a long history of being the main beneficiary of UK military spending. Research by CAAT showed that between 2012 and 2023, the company had more meetings with British prime ministers than any other private company.

With a track record of involvement in the development of drones, including the UK’s experimental Taranis combat drone, BAE Systems is keen to position itself at the forefront of the development of uncrewed autonomous systems. It has showcased its designs for the UK’s Autonomous Collaborative Platforms (ACP) programme – the UK’s equivalent to the US Collaborative Combat Aircraft (CCA) – and it continues to trial its PHASA-35 high-altitude surveillance drone.

Alongside this, BAE has quietly bought up a number of smaller, niche military drone companies to acquire new designs and expertise – including Prismatic, Malloy Aeronautics and Callen-Lenz – and has signed an agreement with QinetiQ to collaborate on the development of drone technology.

Online meeting 29th November, 7pm: ‘Cyborg Dawn? The military use of human augmentation’

 

Soldiers who see in the dark, communicate telepathically, or fly a drone by thought alone all sound like characters from a science fiction film. Yet research projects investigating all these possibilities are underway in laboratories and research centres around the globe as part of an upsurge of interest in the possibilities of human enhancement enabled largely by expanding knowledge in the field of neuroscience: the study of the human brain and nervous system.

Drone Wars UK and Scientists for Global Responsibility (SGR) are holding this online event to mark the publication of ‘Cyborg Dawn?’, a new study investigating the military use of human augmentation, in order to increase understanding of the possibilities and hazards posed by human enhancement technology.

Speakers:

  • Peter Burt: Peter, a long-time researcher and campaigner on peace and human rights issues, authored the ‘Cyborg Dawn’ report. At Drone Wars UK he primarily works on issues relating to artificial intelligence and autonomy and their role in the future development of drones. Peter is also a Trustee of the Nuclear Education Trust.
  • Ben Taylor-Green: Ben was awarded his DPhil from the University of Oxford in early 2023. His doctoral thesis, Empathic Predators: On the Affects and Optics of Brain-Computer Interface Unmanned Aerial Vehicle Research, is a pioneering philosophical-anthropological inquiry concerning the dual-use problem in international brain-computer interface (BCI) research.
  • Helen Close (Chair): Helen, a member of Drone Wars UK’s Steering Committee, is a Research Associate at the Omega Research Foundation, an NGO that researches the manufacture of, trade in, and use of conventional arms and law enforcement equipment. She has worked at Omega since 2009 on a number of issues, including researching the manufacture of specific weapons of concern. Helen is a trustee of the Trust for Research and Education on the Arms Trade.

 

To attend this online event, register here.


 

Cyborg Dawn? Human-machine fusion and the future of warfighting


Soldiers who see in the dark, communicate telepathically, or fly a drone by thought alone all sound like characters from a science fiction film. Yet research projects investigating all these possibilities are under way in laboratories and research centres around the globe as part of an upsurge of interest in the possibilities of human enhancement, enabled largely by expanding knowledge in the field of neuroscience: the study of the human brain and nervous system.

In order to help in understanding the possibilities and hazards posed by human enhancement technology, Drone Wars UK is publishing ‘Cyborg Dawn?’, a new study investigating the military use of human augmentation.

Human enhancement – a medical or biological intervention to the body designed to improve performance, appearance, or capability beyond what is necessary to achieve, sustain or restore health – may lead to fundamentally new concepts of warfare and can be expected to play a role in enabling the increased use of remotely operated and uncrewed systems in war.

Although military planners are eager to create ‘super soldiers’, the idea of artificially modifying humans to give them capabilities beyond their natural abilities presents significant moral, legal, and health risks. The field of human augmentation is fraught with danger, and without stringent regulation, neurotechnologies and genetic modification will lead us to an increasingly dangerous future where technology encourages and accelerates warfare. The difficulties are compounded by the dual-use nature of human augmentation, where applications with legitimate medical uses could equally be used to further the use of remote lethal military force. There is currently considerable discussion about the dangers of ‘killer robot’ autonomous weapon systems, but it is also time to start discussing how to control the human enhancement and cyborg technologies which military planners intend to develop.

New trials of AI-controlled drones show push towards ‘killer robots’ as Lords announces special inquiry

General Atomics Avenger controlled by AI in trial

Two recently announced trials of AI-controlled drones dramatically demonstrate the urgent need to develop international controls over the development and use of lethal autonomous weapon systems, known as ‘killer robots’.

In early January, the UK Ministry of Defence (MoD) announced that a joint UK-US AI taskforce had undertaken a trial of its ‘AI toolbox’ during an exercise on Salisbury Plain in December 2022. The trial saw a number of Blue Bear’s Ghost drones controlled by AI that was updated during the drones’ flight. The experiments, said the MoD, “demonstrated that UK-US developed algorithms from the AI Toolbox could be deployed onto a swarm of UK UAVs and retrained by the joint AI Taskforce at the ground station and the model updated in flight, a first for the UK.” The trials were undertaken as part of the ongoing US-UK Autonomy and Artificial Intelligence Collaboration (AAIC) Partnership Agreement. The MoD has refused to give MPs sight of the agreement.

Two weeks later, US drone manufacturer General Atomics announced that it had conducted flight trials on 14 December 2022 where an AI had controlled one of its large Avenger drones from the company’s own flight operations facility in El Mirage, California.

Blue Bear Ghost drones in AI trial on Salisbury Plain

General Atomics said in its press release that the AI “successfully navigated the live plane while dynamically avoiding threats to accomplish its mission.” Subsequently, AI was used to control both the drone and a ‘virtual’ drone at the same time in order to “collaboratively chase a target while avoiding threats,” said the company. In the final trial, the AI “used sensor information to select courses of action based on its understanding of the world state”. According to the company, “this demonstrated the AI pilot’s ability to successfully process and act on live real-time information independently of a human operator to make mission-critical decisions at the speed of relevance.”

Drone Wars UK has long warned that, despite denials from governments on the development of killer robots, behind the scenes corporations and militaries are pressing ahead with the testing, trialling and development of technology to create such systems. As we forecast in our 2018 report ‘Off the Leash’, armed drones are the gateway to the development of lethal autonomous systems. While these particular trials will not lead directly to the deployment of lethal autonomous systems, byte-by-byte the building blocks are being put in place.

House of Lords Special Committee

Due to continuing developments in this area, we were pleased to learn that the House of Lords voted to accept Lord Clement-Jones’ proposal for a year-long inquiry by a special committee to investigate the use of artificial intelligence in weapon systems. We will monitor the work of the Committee throughout the year.

Future War: The Shape of Things to Come

A day conference of workshops, discussion and debate on the impact new technologies will have on future conflicts – and the challenges facing peace activists.

While terrible wars currently rage in Ukraine, Yemen, Ethiopia and elsewhere, preparations for future wars using new technologies are also underway.

New technology can be a spur for great social change, offering tremendous possibilities. However, innovations in artificial intelligence, robotics, autonomous systems and biotechnology are also being used in the military and security realms in ways which will directly and indirectly affect global peace and security. Scrutinising these developments, and building towards peaceful means of resolving political conflict that do not threaten people or the environment, is crucial.

This open public conference organised by Drone Wars and CND will bring together expert speakers and campaigners to discuss these developments and debate how we can work together to challenge wars today and in the future.

Book your free tickets here 

Supported by Scientists for Global Responsibility, UK Campaign to Stop Killer Robots, Peace News and others.