Autonomous Collaborative Platforms: The UK’s New Autonomous Drones 

BAE Systems concept for Tier 2 ACP

Following on from the MoD’s Defence Drone Strategy released in February (see our report here), the RAF has now published its ‘Autonomous Collaborative Platform Strategy’ as it works to develop, produce and deploy this new type of military drone.

The strategy defines Autonomous Collaborative Platforms (ACP) as types of uncrewed systems (drones) “which demonstrate autonomous behaviour and are able to operate in collaborative manner with other assets.”  The strategy argues that Reaper and the (soon-to-enter-service) Protector drones “are vulnerable in warfighting conflicts involving peer or near-peer adversary. Therefore, as a priority the RAF needs to go beyond RPAS [Remotely Piloted Air Systems] to develop ACP capabilities.”

The plan argues that “through increasing use of autonomy, remote mission operators (commanders /supervisors) will be able to command an increasing number of AV [drones] within each ACP system.”

Underpinning the development is the notion that the “geopolitical climate demands that we move beyond the caution of the post-cold war world” and that the RAF must therefore “undertake activity in areas that are demanding, difficult or overtly hostile.”  While the strategy sets out a variety of tasks for these new drones, it makes clear that a key focus is on “overwhelming an adversary’s air defences.”  ACP are therefore not a defensive system, but are designed from the outset to enable the UK to engage in attack.

Tiers for Fears

The strategy sets out three ‘Tiers’ of ACP based on their ability to survive in “high-risk” (i.e. defended) environments:

  • Tier 1 are disposable drones, with a life-cycle of one or very few missions;
  • Tier 2 are “attritable” (or “risk tolerant”); that is, they are expected to survive, but losses are acceptable;
  • Tier 3 are drones which have high strategic value, and which if lost would significantly affect how the RAF would fight.
Diagram from Autonomous Collaborative Platform Strategy

Echoing the words of the Chief of the Air Staff Sir Richard Knighton before the Defence Select Committee earlier this year, the document states that a Tier 1 ACP will be operational “by the end of 2024”, while Tier 2 systems will be part of the RAF’s combat force by 2030.

Military AI: MoD’s timid approach to challenging ethical issues will not be enough to prevent harm

Papers released to Drone Wars UK by the Ministry of Defence (MoD) under the Freedom of Information Act reveal that progress in preparing ethical guidance for MoD staff working on military artificial intelligence (AI) projects is proceeding at a snail’s pace.  As a result, the MoD’s much-vaunted AI strategy and ethical principles are at risk of failing as the department races ahead to develop AI as a key military technology.

Minutes of meetings of MoD’s Ethical Advisory Panel show that although officials have repeatedly stressed the need to focus on implementation of AI programmes, the ethical framework and guidelines needed to ensure that AI systems are safe and responsible are still only in draft form and there is “not yet a distinct sense of a clear direction” as to how they will be developed.

The FOI papers also highlight concerns about the transparency of the panel’s work.  Independent members of the panel have repeatedly stressed the need for the panel to work in an open and transparent manner, yet MoD refuses to publish the terms of membership, meeting minutes, and reports prepared for the panel.  With the aim of remedying this situation, Drone Wars UK is publishing the panel documents released in response to our FOI request as part of this blog article (see pdf files at the end of the article).

The Ministry of Defence AI Ethics Advisory Panel

One of the aims of the Defence Artificial Intelligence Strategy, published in June 2022, was to set out MoD’s “clear commitment to lawful and ethical AI use in line with our core values”.  To help meet this aim, MoD published a companion document entitled ‘Ambitious, safe, responsible’ alongside the strategy to represent “a positive blueprint for effective, innovative and responsible AI adoption”.

‘Ambitious, safe, responsible’ had two main foundations: a set of ethical principles to guide MoD’s use of AI and an Ethics Advisory Panel, described as “an informal advisory board” to assist with policy relating to the safe and responsible development and use of AI.  The document stated that the panel had assisted in formulating the ethical principles and listed the members of the panel, who are drawn from within the Ministry of Defence and the military, from industry, and from universities and civil society.

The terms of reference for the panel were not published in the ‘Ambitious, safe, responsible’ document, but the FOI papers provided to Drone Wars UK show that it is tasked with advising on:

  • “The development, maintenance and application of a set of ethical principles for AI in Defence, which will demonstrate the MOD’s position and guide our approach to responsible AI across the department.
  • “A framework for implementing these principles and related policies / processes across Defence.
  • “Appropriate governance and decision-making processes to assure ethical outcomes in line with the department’s principles and policies”.

The ethical principles were published alongside the Defence AI Strategy, but more than two years after the panel first met – and despite a constant refrain at panel meetings on the need to focus on implementation – it has yet to make substantial progress on the second and third of these objectives.  An implementation framework and its associated policies, governance and decision-making processes have yet to appear.  This appears in no way to be due to shortcomings on the part of the panel, whose members seem to have a keen appetite for their work, but rather is the result of slow progress by MoD.  In the meantime, work is proceeding at full speed on the development of AI systems in the absence of these key ethical tools.

The work of the panel

The first meeting of the panel, held in March 2021, was chaired by Stephen Lovegrove, the then Permanent Secretary at the Ministry of Defence.  The panel discussed the MoD’s work to date on developing an AI Ethics framework and the panel’s role and objectives.  The panel was to be a “permanent and ongoing source of scrutiny” and “should provide expert advice and challenge” to MoD, working through a regular quarterly meeting cycle.

MoD AI projects list shows UK is developing technology that allows autonomous drones to kill

Omniscient graphic: ‘High Level Decision Making Module’ which integrates sensor information using deep probabilistic algorithms to detect, classify, and identify targets, threats, and their behaviours. Source: Roke

Artificial intelligence (AI) projects that could help to unleash new lethal weapons systems requiring little or no human control are being undertaken by the Ministry of Defence (MoD), according to information released to Drone Wars UK through a Freedom of Information Act request.

The development of lethal autonomous military systems – sometimes described as ‘killer robots’ – is deeply contentious and raises major ethical and human rights issues.  Last year the MoD published its Defence Artificial Intelligence Strategy setting out how it intends to adopt AI technology in its activities.

Drone Wars UK asked the MoD to provide it with the list of “over 200 AI-related R&D programmes” which the Strategy document stated the MoD was working on.  Details of these programmes were not given in the Strategy itself, and MoD evaded questions from Parliamentarians who asked for more details of its AI activities.

Although the Defence Artificial Intelligence Strategy claimed that over 200 programmes are underway, only 73 are shown on the list provided to Drone Wars UK.  Release of the names of some projects was refused on defence, security and/or national security grounds.

However, MoD conceded that a list of “over 200” projects was never held when the strategy document was prepared in 2022, explaining that “our assessment of AI-related projects and programmes drew on a data collection exercise that was undertaken in 2019 that identified approximately 140 activities underway across the Front-Line Commands, Defence Science and Technology Laboratory (Dstl), Defence Equipment and Support (DE&S) and other organisations”.  The assessment that there were at least 200 programmes in total “was based on our understanding of the totality of activity underway across the department at the time”.

The list released includes programmes for all three armed forces, including a number of projects related to intelligence analysis systems and to drone swarms, as well as more mundane ‘back office’ projects.  It covers major multi-billion pound projects stretching over several decades, such as the Future Combat Air System (which includes the proposed new Tempest aircraft), new spy satellites, uncrewed submarines, and applications for using AI in everyday tasks such as predictive equipment maintenance, a repository of research reports, and a ‘virtual agent’ for administration.

However, the core of the list is a scheme to advance the development of AI-powered autonomous systems for use on the battlefield.  Many of these are based around the use of drones as a platform – usually aerial systems, but also maritime drones and autonomous ground vehicles.  A number of the projects on the list relate to the computerised identification of military targets by analysis of data from video feeds, satellite imagery, radar, and other sources.  Using artificial intelligence / machine learning for target identification is an important step towards the development of autonomous weapon systems – ‘killer robots’ – which are able to operate without human control.  Even when they are under nominal human control, computer-directed weapons pose a high risk of civilian casualties for a number of reasons, including the rapid speed at which they operate and the difficulty of understanding the often opaque ways in which they make decisions.

The government claims it “does not possess fully autonomous weapons and has no intention of developing them”. However, the UK has consistently declined to support proposals put forward at the United Nations to ban them.

Among the initiatives on the list are the following projects.  All of them are focused on developing technologies that have potential for use in autonomous weapon systems.

XLUUVs, Swarms, and STARTLE: New developments in the UK’s military autonomous systems

Behind the scenes, the UK is developing a range of military autonomous systems. Image: Crown Copyright

In November 2018 Drone Wars UK published ‘Off The Leash’, an in-depth research report outlining how the Ministry of Defence (MoD) was actively supporting research into technology to support the development of armed autonomous drones despite the government’s public claims that it “does not possess fully autonomous weapons and has no intention of developing them”.  This article provides an update on developments which have taken place in this field since our report was published, looking both at specific technology projects as well as developments in the UK’s policy position on Lethal Autonomous Weapons Systems (LAWS).