Developments on both sides of the Atlantic signal a push to develop AI attack drones

Artist’s impression of crewed aircraft operating with autonomous drones. Credit: Lockheed Martin

Recent government and industry announcements signal clear intent by both the US and the UK to press ahead with a new generation of AI attack drones, despite serious concerns about autonomous weapons.  While most details are being kept secret, it is clear from official statements, industry manoeuvring and budget commitments that these new drones are expected to be operational by the end of the decade.

The current focus is the development of drones previously labelled ‘loyal wingman’ but now described either as ‘Collaborative Combat Aircraft’ (CCA) or ‘Autonomous Collaborative Platforms’ (ACP).  As always, the nomenclature around ‘drones’ is a battlefield in itself.  The concept is for one or more of these drones to fly alongside, or in the vicinity of, a piloted military aircraft, carrying out specifically designated tasks such as surveillance, electronic warfare, guiding weapons onto targets, or conducting air-to-air or air-to-ground strikes.  Rather than being directly controlled by an individual on the ground, as current armed drones such as the Reaper or Bayraktar are, these drones will fly autonomously.  According to DARPA officials (using the beloved sports metaphor), these drones will allow pilots to direct squads of unmanned aircraft “like a football coach who chooses team members and then positions them on the field to run plays.”

Next Generation

In May, the US Air Force issued a formal request for US defence companies to bid to build a new piloted aircraft to replace the F-22.  However, equally important for the ‘Next Generation Air Dominance’ (NGAD) programme is the development of new autonomous drones and a ‘combat cloud’ communication network.  While the development of the drones is a covert programme, US Air Force Secretary Frank Kendall said they will be built “in parallel” to the piloted aircraft.  Kendall publicly stated that the competition to develop CCA was expected to begin in Fiscal Year 2024 (which runs from October 2023 to September 2024).

While the USAF is planning to build around 200 of the new crewed aircraft, Kendall told reporters that it expects to build around 1,000 of the drones.  “This figure was derived from an assumed two CCAs per 200 NGAD platforms and an additional two for each of 300 F-35s for a total of 1,000,” Kendall explained.  Others expect even more of these drones to be built.  While the NGAD fighter aircraft itself is not expected to be operational until the 2030s, CCAs are expected to be deployed by the end of the 2020s.
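Spelled out, the force mix assumed in Kendall’s figure amounts to simple arithmetic (the grouping below is ours, drawn directly from his statement):

(2 CCAs × 200 NGAD aircraft) + (2 CCAs × 300 F-35s) = 400 + 600 = 1,000 drones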

It’s important to be aware that there will not be one type of drone built under this programme, but a range with different capabilities able to carry out different tasks.  Some will be ‘expendable’ – that is, designed for just one mission – something like the ‘one-way attack’ drones that we have seen increasingly used in Ukraine and elsewhere; some will be ‘attritable’, designed so that their loss in combat would not be severely damaging; while others, described as ‘exquisite’, will be more capable and specifically designed not to be lost during combat.  A number of companies have set out their proposals, with some even building prototypes and undertaking test flights.  Read more

Military AI: MoD’s timid approach to challenging ethical issues will not be enough to prevent harm

Papers released to Drone Wars UK by the Ministry of Defence (MoD) under the Freedom of Information Act reveal that progress in preparing ethical guidance for MoD staff working on military artificial intelligence (AI) projects is proceeding at a snail’s pace.  As a result, MoD’s much-vaunted AI strategy and ethical principles are at risk of failing as the department races ahead to develop AI as a key military technology.

Minutes of meetings of MoD’s Ethical Advisory Panel show that although officials have repeatedly stressed the need to focus on implementation of AI programmes, the ethical framework and guidelines needed to ensure that AI systems are safe and responsible are still only in draft form and there is “not yet a distinct sense of a clear direction” as to how they will be developed.

The FOI papers also highlight concerns about the transparency of the panel’s work.  Independent members of the panel have repeatedly stressed the need for the panel to work in an open and transparent manner, yet MoD refuses to publish the terms of membership, meeting minutes, and reports prepared for the panel.  With the aim of remedying this situation, Drone Wars UK is publishing the panel documents released in response to our FOI request as part of this blog article (see the PDF files at the end of the article).

The Ministry of Defence AI Ethics Advisory Panel

One of the aims of the Defence Artificial Intelligence Strategy, published in June 2022, was to set out MoD’s “clear commitment to lawful and ethical AI use in line with our core values”.  To help meet this aim MoD published a companion document, entitled ‘Ambitious, safe, responsible‘ alongside the strategy to represent “a positive blueprint for effective, innovative and responsible AI adoption”.

‘Ambitious, safe, responsible’ had two main foundations: a set of ethical principles to guide MoD’s use of AI and an Ethics Advisory Panel, described as “an informal advisory board”, to assist with policy relating to the safe and responsible development and use of AI.  The document stated that the panel had assisted in formulating the ethical principles and listed the members of the panel, who are drawn from within the Ministry of Defence and the military, as well as from industry, universities and civil society.

The terms of reference for the panel were not published in the ‘Ambitious, safe, responsible’ document, but the FOI papers provided to Drone Wars UK show that it is tasked with advising on:

  • “The development, maintenance and application of a set of ethical principles for AI in Defence, which will demonstrate the MOD’s position and guide our approach to responsible AI across the department.
  • “A framework for implementing these principles and related policies / processes across Defence.
  • “Appropriate governance and decision-making processes to assure ethical outcomes in line with the department’s principles and policies”.

The ethical principles were published alongside the Defence AI Strategy, but more than two years after the panel first met – and despite a constant refrain at panel meetings on the need to focus on implementation – it has yet to make substantial progress on the second and third of these objectives.  An implementation framework and associated policies, governance and decision-making processes have yet to appear.  This appears in no way to be due to shortcomings on the part of the panel, whose members seem to have a keen appetite for their work, but rather is the result of slow progress by MoD.  In the meantime, work on the development of AI systems is proceeding at full speed in the absence of these key ethical tools.

The work of the panel

The first meeting of the panel, held in March 2021, was chaired by Stephen Lovegrove, the then Permanent Secretary at the Ministry of Defence.  The panel discussed the MoD’s work to date on developing an AI Ethics framework and the panel’s role and objectives.  The panel was to be a “permanent and ongoing source of scrutiny” and “should provide expert advice and challenge” to MoD, working through a regular quarterly meeting cycle.  Read more

MoD AI projects list shows UK is developing technology that allows autonomous drones to kill

Omniscient graphic: ‘High Level Decision Making Module’ which integrates sensor information using deep probabilistic algorithms to detect, classify, and identify targets, threats, and their behaviours. Source: Roke

Artificial intelligence (AI) projects that could help to unleash new lethal weapons systems requiring little or no human control are being undertaken by the Ministry of Defence (MoD), according to information released to Drone Wars UK through a Freedom of Information Act request.

The development of lethal autonomous military systems – sometimes described as ‘killer robots’ – is deeply contentious and raises major ethical and human rights issues.  Last year the MoD published its Defence Artificial Intelligence Strategy setting out how it intends to adopt AI technology in its activities.

Drone Wars UK asked the MoD to provide it with the list of “over 200 AI-related R&D programmes” which the Strategy document stated the MoD was working on.  Details of these programmes were not given in the Strategy itself, and MoD evaded questions from Parliamentarians who asked for more details of its AI activities.

Although the Defence Artificial Intelligence Strategy claimed that over 200 programmes were underway, only 73 are shown on the list provided to Drone Wars.  Release of the names of some projects was refused on defence, security and/or national security grounds.

However, MoD conceded that a list of “over 200” projects was never held when the strategy document was prepared in 2022, explaining that “our assessment of AI-related projects and programmes drew on a data collection exercise that was undertaken in 2019 that identified approximately 140 activities underway across the Front-Line Commands, Defence Science and Technology Laboratory (Dstl), Defence Equipment and Support (DE&S) and other organisations”.  The assessment that there were at least 200 programmes in total “was based on our understanding of the totality of activity underway across the department at the time”.

The list released includes programmes for all three armed forces, including a number of projects related to intelligence analysis systems and to drone swarms, as well as more mundane ‘back office’ projects.  It covers major multi-billion-pound projects stretching over several decades, such as the Future Combat Air System (which includes the proposed new Tempest aircraft), new spy satellites, uncrewed submarines, and applications for using AI in everyday tasks such as predictive equipment maintenance, a repository of research reports, and a ‘virtual agent’ for administration.

However, the core of the list is a scheme to advance the development of AI-powered autonomous systems for use on the battlefield.  Many of these are based around the use of drones as a platform – usually aerial systems, but also maritime drones and autonomous ground vehicles.  A number of the projects on the list relate to the computerised identification of military targets through the analysis of data from video feeds, satellite imagery, radar, and other sources.  Using artificial intelligence / machine learning for target identification is an important step towards the development of autonomous weapon systems – ‘killer robots’ – which are able to operate without human control.  Even when they are under nominal human control, computer-directed weapons pose a high risk of civilian casualties for a number of reasons, including the rapid speed at which they operate and the difficulty of understanding the often opaque ways in which they make decisions.

The government claims it “does not possess fully autonomous weapons and has no intention of developing them”. However, the UK has consistently declined to support proposals put forward at the United Nations to ban them.

Among the initiatives on the list are the following projects.  All of them are focused on developing technologies that have potential for use in autonomous weapon systems.  Read more

MoD’s AI ethics panel expert tells Lords Committee: ‘More should be done’

L-R: Alexander Blanchard, Digital Ethics Research Fellow, Alan Turing Institute; Mariarosaria Taddeo, Associate Professor, Oxford Internet Institute; Verity Coyle, Senior Campaigner/Advisor, Amnesty UK

Almost a year ago the Ministry of Defence (MoD) launched its Defence Artificial Intelligence Strategy to explain how it would adopt and exploit artificial intelligence (AI) “at pace and scale”.  Among other things, the strategy set out the aspiration for MoD to be “trusted – by the public, our partners and our people, for the safety and reliability of our AI systems, and our clear commitment to lawful and ethical AI use in line with our core values”.

An accompanying policy document, with the title ‘Ambitious, Safe, Responsible‘ explained how MoD intended to win trust for its AI systems.  The document put forward five Ethical Principles for AI in Defence, and announced that MoD had convened an AI Ethics Advisory Panel: a group of experts from academia, industry, civil society and from within MoD itself to advise on the development of policy on the safe and responsible development and use of AI.

The AI Ethics Advisory Panel and its role was one of the topics of interest to the House of Lords Select Committee on AI in Weapon Systems when it met for the fourth time recently to take evidence on the ethical and human rights issues posed by the development of autonomous weapons and their use in warfare.  Witnesses giving evidence at the session were Verity Coyle from Amnesty International, Professor Mariarosaria Taddeo from the Oxford Internet Institute, and Dr Alexander Blanchard from the Alan Turing Institute.  As Professor Taddeo is a member of the MoD’s AI Ethics Advisory Panel, former Defence Secretary Lord Browne took the opportunity to ask her to share her experiences of the panel.

Lord Browne:

“It is the membership of the panel that really interests me. This is a hybrid panel. It has a number of people whose interests are very obvious; it has academics, where the interests are not nearly as clearly obvious, if they have them; and it has some people in industry, who may well have interests.

What are the qualifications to be a member and what is the process you went through to become a member? At any time were you asked about interests? For example, are there academics on this panel who have been funded by the Ministry of Defence or government to do research? That would be of interest to people. Where is the transparency? This panel has met three times by June 2022. I have no idea how often it has met, because I cannot find anything about what was said at it or who said it. I am less interested in who said it, but it would appear there is no transparency at all about what ethical advice was actually shared.

As an ethicist, are you comfortable about being in a panel of this nature, which is such an important element of the judgment we will have to take as to the tolerance of our society, in light of our values, for the deployment of these weapons systems? Should it be done in this hybrid, complex way, without any transparency as to who is giving the advice, what the advice is and what effect it has had on what comes out in this policy document?”

Lord Browne’s questions neatly capture some of the concerns which Drone Wars shares about the MoD’s approach to AI ethics.  Professor Taddeo set out the benefits of the panel as she saw them in her reply, but clearly shared many of Lord Browne’s concerns.  “These are very good questions, which the MoD should address”, she answered.  She agreed that “there can be improvement in terms of transparency of the processes, notes and records”, and said that “this is mentioned whenever we meet”.  She also raised questions about the effectiveness of the panel, telling the Lords that: “This discussion is one hour and a half, and there are a lot of experts in the room who are all prepared, but we did not even scratch the surface of many issues that we have to address”.  The panel is an advisory panel, and “so far, all we have done is to be provided with a draft of, for example, the principles or the document and to give feedback”.

If the only role the MoD’s AI Ethics Advisory Panel has played is to advise on principles for inclusion in the Defence Artificial Intelligence Strategy, then an obvious question is: what is needed instead to ensure that MoD develops and uses AI in a safe and responsible way?  Professor Taddeo felt that the current panel “is a good effort in the right direction”, but “would hope it is not deemed sufficient to ensure ethical behaviour of defence organisations; more should be done”.  Read more

New trials of AI-controlled drones show push towards ‘killer robots’ as Lords announces special inquiry

General Atomics Avenger controlled by AI in trial

Two recently announced trials of AI-controlled drones dramatically demonstrate the urgent need to develop international controls over the development and use of lethal autonomous weapon systems, known as ‘killer robots’.

In early January, the UK Ministry of Defence (MoD) announced that a joint UK-US AI taskforce had undertaken a trial of its ‘AI toolbox’ during an exercise on Salisbury Plain in December 2022.  The trial saw a number of Blue Bear’s Ghost drones controlled by AI that was updated during the drones’ flight.  The experiments, said the MoD, “demonstrated that UK-US developed algorithms from the AI Toolbox could be deployed onto a swarm of UK UAVs and retrained by the joint AI Taskforce at the ground station and the model updated in flight, a first for the UK.”  The trials were undertaken as part of the ongoing US-UK Autonomy and Artificial Intelligence Collaboration (AAIC) Partnership Agreement.  The MoD has refused to give MPs sight of the agreement.

Two weeks later, US drone manufacturer General Atomics announced that it had conducted flight trials on 14 December 2022 where an AI had controlled one of its large Avenger drones from the company’s own flight operations facility in El Mirage, California.

Blue Bear Ghost drones in AI trial on Salisbury Plain

General Atomics said in its press release that the AI “successfully navigated the live plane while dynamically avoiding threats to accomplish its mission.”  Subsequently, AI was used to control both the drone and a ‘virtual’ drone at the same time in order to “collaboratively chase a target while avoiding threats,” said the company.  In the final trial, the AI “used sensor information to select courses of action based on its understanding of the world state.”  According to the company, “this demonstrated the AI pilot’s ability to successfully process and act on live real-time information independently of a human operator to make mission-critical decisions at the speed of relevance.”

Drone Wars UK has long warned that, despite denials from governments on the development of killer robots, behind the scenes corporations and militaries are pressing ahead with the testing, trialling and development of the technology to create such systems.  As we forecast in our 2018 report ‘Off the Leash’, armed drones are the gateway to the development of lethal autonomous systems.  While these particular trials will not lead directly to the deployment of lethal autonomous systems, byte-by-byte the building blocks are being put in place.

House of Lords Special Committee

Due to continuing developments in this area we were pleased to learn that the House of Lords voted to accept Lord Clement-Jones’ proposal for a year-long inquiry by a special committee to investigate the use of artificial intelligence in weapon systems.  We will monitor the work of the Committee throughout the year but for now here is the accepted proposal in full:  Read more

Fine words, Few assurances: Assessing new MoD policy on the military use of Artificial Intelligence

Drone Wars UK is today publishing a short paper analysing the UK’s approach to the ethical issues raised by the use of artificial intelligence (AI) for military purposes in two recently published policy documents.  The first part of the paper reviews and critiques the Ministry of Defence’s (MoD’s) Defence Artificial Intelligence Strategy, published in June 2022, while the second part considers the UK’s commitment to ‘responsible’ military artificial intelligence capabilities, presented in the document ‘Ambitious, Safe, Responsible’, published alongside the strategy document.

Once the realm of science fiction, the technology needed to build autonomous weapon systems is now under development in a number of nations, including the United Kingdom.  Given recent advances in unmanned aircraft technology, it is likely that the first autonomous weapons will be drone-based systems.

Drone Wars UK believes that the development and deployment of AI-enabled autonomous weapons would give rise to a number of grave risks, primarily the loss of human values on the battlefield.  Giving machines the ability to take life crosses a key ethical and legal Rubicon.  Lethal autonomous drones would simply lack human judgment and other qualities that are necessary to make complex ethical choices on a dynamic battlefield, to distinguish adequately between soldiers and civilians, and to evaluate the proportionality of an attack.

In the short term it is likely that the military applications of autonomous technology will be in low-risk areas, such as logistics and the supply chain, where, proponents argue, there are cost advantages and minimal implications for combat situations.  These systems are likely to be closely supervised by human operators.  In the longer term, as technology advances and AI becomes more sophisticated, autonomous technology is increasingly likely to become weaponised and the degree of human supervision can be expected to drop.

The real issue perhaps is not the development of autonomy itself but the way in which this milestone in technological development is controlled and used by humans.  Autonomy raises a wide range of ethical, legal, moral and political issues relating to human judgement, intentions, and responsibilities.   These questions remain largely unresolved and there should therefore be deep disquiet about the rapid advance towards developing autonomous weapons systems.  Read more