The current government places a central emphasis on technology and innovation in its evolving national security strategy and wider approach to governance. Labour proposes to revive a struggling British economy through investment in defence, with artificial intelligence (AI) featuring as an important component. Starmer’s premiership seems to align several objectives: economic growth, defence industrial development and technological innovation.

Taken together, these suggest that the government is positioning AI primarily in the context of war and defence innovation. This not only risks undermining the government’s stated ambitions of stability and economic growth, but also signals a strategy that prioritises speed over scrutiny, to the neglect of important ethical concerns.
The private defence industry has been positioned as an important pillar of this strategy. Before the Strategic Defence Review (SDR) was published, Chancellor Rachel Reeves and Defence Secretary John Healey launched a Defence and Economic Growth Task Force to drive UK growth through defence capabilities and production. Arms companies are no longer presented merely as vital to national security but as engines of future prosperity. AI is central to this and is consistently highlighted in government communications: John Healey has explicitly acknowledged that AI will increasingly power the British military, while Keir Starmer has stated that AI ‘will drive incredible change’.
UK focusing AI on military applications
The AI Action Plan, released in January 2025, explicitly links AI to economic growth. Although it included references to ‘responsible use and leadership’, the government has since shifted its emphasis to military applications at the expense of crucial policy areas. On 4 July, the Science and Technology Secretary Peter Kyle wrote to the Alan Turing Institute – Britain’s premier AI research organisation – to refocus its research on military applications of AI. The Institute’s prior research agenda spanned environmental sustainability, health and national security; under this new directive, its priorities are being fundamentally narrowed.

Relatedly, the government’s Industrial Strategy aims to ‘embolden’ the UK’s digital and technologies economy, with £500 million to be delivered through a sovereign AI unit – though this will be focused on ‘building capacity in the most strategically important areas’. Given Peter Kyle’s redirection and the overwhelming emphasis the government has placed on AI’s productive capacity in war, it is clear that, at the Alan Turing Institute at least, defence-focused AI research will come at the cost of socially beneficial research.
Take Britain’s bleak economic outlook: sluggish productivity, post-Brexit stagnation, strained public finances, mounting government debt repayments, a surging cost of living and rising house prices. There is little evidence that defence-led growth will yield meaningful returns against this catalogue of challenges. No credible economist would advise, in the face of them, that investing in defence and redirecting AI research in the name of national security offers the best return on investment.
Research conducted in the United States suggests that investment in health, education, infrastructure and green energy is more likely to yield better returns, both for individual incomes and for the country’s broader prospects. Similarly, Lord Blunkett (a former minister under Blair) pointed out that without GDP growth, raising defence spending as a share of GDP may not increase actual funding.
On applications to health, the World Economic Forum reported in August on AI’s striking potential: doubling the accuracy of brain-scan analysis for stroke patients, detecting fractures often missed in overstretched departments and predicting diseases with high confidence. This matters given the NHS’s persistent challenges: long waiting times, underfunding, regional inequality, staff shortages and bureaucratic inertia.

Health and economic growth are closely related: healthier individuals are more productive, children attend school more consistently, and preventative care lowers long-term costs – fundamentally, strong health systems add value to the economy and to our lives. Yet health is just one example. We are in the embryonic stages of AI development, and by prioritising research on military applications over civilian ones with public value, the government risks undermining, not fuelling, long-term economic growth.
Crucially, framing arms companies as a major engine of economic growth is wildly misleading and economically unfounded. Arms sales account for 0.004% of the Treasury’s total revenue, and the defence industrial base accounts for only 1% of UK economic output. The sector is highly monopolised, so the benefits of ‘growth’ are concentrated among a handful of dominant corporations. Even then, the profit generated will not be reinvested in the UK. The biggest arms company in the UK – BAE Systems – is essentially a joint US-UK company, with most of its capital invested in the US and its majority shareholders drawn from US investment companies like BlackRock.
Prioritising speed over scrutiny
Beyond the economics, this is part of a wider strategy that signals a growing dismissal of ethical concerns, prioritising speed over scrutiny. The SDR acknowledged that technology is outpacing regulatory frameworks, noting that ‘the UK’s competitors are unlikely to adhere to common ethical standards’. In April 2025, Matthew Clifford – AI adviser to the PM – was quoted as saying ‘speed is everything’. While the Ministry of Defence promised in 2022 to take an ‘ambitious, safe and responsible’ approach to the development of military AI, the current emphasis on speed sidelines important ethical concerns in the rush for military-technological superiority.
Militarily, the SDR sets out plans to invest in drones, autonomous systems and £1 billion for a ‘digital targeting web’. A foundational principle of International Humanitarian Law is the protection of civilians and their distinction from military targets. An AI-enabled ‘digital targeting web’ – like the one proposed in the SDR – connects sensors and weapons, enabling faster detection and killing. These networks would identify and suggest targets faster than humans ever could, leaving soldiers minutes in the best case, and seconds in the worst, to decide whether a drone should kill.

One notable example is the Maven Smart System, recently procured by NATO. According to the US think tank the Center for Security and Emerging Technology, the system enables small armies to make ‘1000 tactical decisions per hour’. Legal scholars have pointed out that prioritising speed in AI-powered battlefield technology raises questions about the preservation of meaningful human control and restraint in warfare. Israel’s use of AI-powered automated targeting systems such as ‘Lavender’ during its assault on and occupation of Gaza illustrates the point: such systems have been highlighted as one of the factors behind the shockingly high civilian death toll there.
This problem is compounded by recent research showing that large language models ‘hallucinate’ – producing outputs that are erroneous or fabricated. As these systems become embedded in military decision-making chains, the risk of escalation due to technical failure increases dramatically. A false signal, a misread sensor or a corrupted database could lead to erroneous targeting or unintended conflict escalation.
In sum, the UK’s current approach – predominantly framing AI’s utility through the lens of defence – risks squandering its broader social and economic potential. The redirection of public research institutes, the privileging of AI investment in military applications (or so-called ‘strategic areas’) and the emphasis on speed over scrutiny raise serious concerns. Ethically, the erosion of meaningful human control in battlefield decision-making, the risk of AI-driven conflict escalation and the disregard of international humanitarian principles point to a troubling trajectory. The UK risks drifting towards the ethical standards of Russia and Israel in its use of military AI. A government approach to AI grounded in human security (freedom from fear and want), not war, is not only more ethical but far more likely to generate sustainable economic growth for the United Kingdom.
- Matthew Croft is a postgraduate student at King’s College London studying Conflict, Security and Development, with a particular interest in the ethics of national security and the politics of technology.