UK MoD awards 26 companies contracts to develop AI targeting system for UK armed forces

While public concern about the use of AI for war-fighting continues to grow, the UK is quietly pressing ahead with development of new AI-based military targeting systems.

In a little-noticed post in January, the MoD named a group of 26 companies that have been awarded a four-year deal to develop what it calls “advanced digital decision-supporting capabilities” as part of the ASGARD programme.

AI integration into military targeting systems is developing rapidly. Image: Shutterstock

The group includes specialised military AI companies such as US-based Anduril and Germany-based Helsing, traditional military tech companies like QinetiQ and Leonardo, and a host of smaller niche companies focused on the use of AI. A full list of companies is below.

ASGARD

First announced in October 2024, the MoD says ASGARD will “exploit AI and novel communications networks” to provide “rapid targeting and decision-support to personnel.” While militaries are keen to use AI to speed up decision-making around lethal strikes, there are serious ethical and legal concerns about these developments.

The use of AI by Israel to develop targets for strikes during its war on Gaza, and more recently by the US for strikes on Iran, indicates that these developments are rapidly outstripping political and legal debate about whether such systems should be deployed at all. This week an investigation by Airwars and the Independent newspaper revealed that the US had accepted that a civilian had been killed in a series of US strikes carried out in February 2024 which, at the time, the US said had been carried out with the assistance of Project Maven, a US programme to integrate AI/machine learning into military operations.

While continuing to argue in public that the UK has ‘no intention of developing a fully autonomous weapon’, the MoD also states that when “incorporating AI within weapon systems… there must be context-appropriate human involvement in [systems] which identify, select and attack targets.” This is vague to the point of meaninglessness, and it is impossible to know how such a policy will operate in practice.

A mock HQ utilising ASGARD at MoD press briefing, July 2025. Crown Copyright.
Accelerating Digital Decisions

The 26 companies have been awarded contracts under a tender notice published by the MoD in July 2025, which sought companies to take part in an ‘Open Framework’ (that is, an ongoing programme of development work) to develop AI/machine learning software to support decision-making in military targeting for the British Army. As the tender notice stated:

“This Open Framework will focus on the ‘Decide’ element of the target acquisition cycle (Sense-Decide-Effect); supporting ASGARD’s goal of reinventing, and transforming, how land forces deliver operational decision-support and decision-making software via the use of modern Artificial Intelligence / Machine Learning (AI/ML) technologies.”

The Framework contains five separate ‘lots’, and the winning companies may focus on one or more of the lots covering different aspects of the work. While some messaging around the Framework indicates that the total amount to be awarded is between £180m and £216m, other indications are that this is the amount available for each lot. The MoD has said that ASGARD has been “backed by more than £1 billion in funding.”

The lots are as follows:

Lot 1: Data Integration

Work under this lot covers “higher-level functions like data validation, cataloguing, and lineage tracking. It will form the backbone for delivering trusted datasets supporting critical operations.”  The tender notes that “basic cloud storage and compute will be covered elsewhere.”

Lot 2: Accelerators

Work under this area seeks software “to enhance data-driven decision-making… The focus is on reducing time-to-insight and improving operational efficiency… This lot targets intelligent capabilities such as automated workflows, pre-trained models, and integration with operational systems.”

Lot 3: Applications

The tender notice states that this lot “addresses platforms and services enabling mission-critical software to operate efficiently and securely across the enterprise… Focus areas include fast delivery, scalability, and continuous innovation.”

Lot 4: Edge Storage and Compute

‘Edge computing’ in this context means that processing and analysis are done ‘locally’, i.e. within the surveillance or weapon system, and that video or other electronic information is not transmitted to a central control centre. The idea is that the drone, for example, processes the information it has captured itself rather than transmitting it over networks to a central base for processing there. The tender says this lot “focuses on edge computing and local storage for real-time, low-latency data processing… Emphasis is on supporting distributed environments with limited or intermittent connectivity. This lot is essential for scalable, autonomous operations at the edge.”

Lot 5: Services

The tender states that this lot “includes expert services to support technology adoption and integration across all other lots. Offerings may include technical training, architecture consulting, synthetic data support, and proof-of-concept development.”

AI: speed eroding oversight and accountability

As we have said before, the grave dangers of introducing AI into warfare, and in particular into decisions on the use of force, are well known. While arguments have been made for and against these systems for more than a decade, increasingly we are moving from a theoretical, future possibility to the real world: here, now, today.

Advocates of ASGARD and similar systems argue that the ‘need’ for speed in targeting decisions means that the use of AI brings enormous benefits. But while computer algorithms can process data much faster than humans, speeding up targeting decisions significantly erodes human oversight and accountability and will inevitably mean more civilian casualties.

While some believe almost irrationally in the powers and benefits of AI, in the real world AI-enabled systems remain error-prone and unreliable. AI is far from infallible: it relies on training data whose biases have time and time again led to serious mistakes. Most armed conflicts do not take place on remote battlefields but in complex urban environments. Relying on AI to choose military targets in such a scenario is fraught with danger.

The Companies Involved:
