New trials of AI-controlled drones show push towards ‘killer robots’ as Lords announces special inquiry

General Atomics Avenger controlled by AI in trial

Two recently announced trials of AI-controlled drones dramatically demonstrate the urgent need for international controls over the development and use of lethal autonomous weapon systems, known as ‘killer robots’.

In early January, the UK Ministry of Defence (MoD) announced that a joint UK-US AI taskforce had undertaken a trial of its ‘AI toolbox’ during an exercise on Salisbury Plain in December 2022. The trial saw a number of Blue Bear Ghost drones controlled by AI that was updated during the drones’ flight. The experiments, said the MoD, “demonstrated that UK-US developed algorithms from the AI Toolbox could be deployed onto a swarm of UK UAVs and retrained by the joint AI Taskforce at the ground station and the model updated in flight, a first for the UK.” The trials were undertaken as part of the ongoing US-UK Autonomy and Artificial Intelligence Collaboration (AAIC) Partnership Agreement. The MoD has refused to give MPs sight of the agreement.

Two weeks later, US drone manufacturer General Atomics announced that it had conducted flight trials on 14 December 2022 in which an AI controlled one of its large Avenger drones from the company’s own flight operations facility in El Mirage, California.

Blue Bear Ghost drones in AI trial on Salisbury Plain

General Atomics said in its press release that the AI “successfully navigated the live plane while dynamically avoiding threats to accomplish its mission.” Subsequently, AI was used to control both the drone and a ‘virtual’ drone at the same time in order to “collaboratively chase a target while avoiding threats,” said the company. In the final trial, the AI “used sensor information to select courses of action based on its understanding of the world state.” According to the company, “this demonstrated the AI pilot’s ability to successfully process and act on live real-time information independently of a human operator to make mission-critical decisions at the speed of relevance.”

Drone Wars UK has long warned that, despite governments’ denials that they are developing killer robots, behind the scenes corporations and militaries are pressing ahead with the testing, trialling and development of technology to create such systems. As we forecast in our 2018 report ‘Off the Leash’, armed drones are the gateway to the development of lethal autonomous systems. While these particular trials will not lead directly to the deployment of lethal autonomous systems, byte-by-byte the building blocks are being put in place.

House of Lords Special Committee

Due to continuing developments in this area, we were pleased to learn that the House of Lords has voted to accept Lord Clement-Jones’ proposal for a year-long inquiry by a special committee to investigate the use of artificial intelligence in weapon systems. We will monitor the work of the committee throughout the year, but for now here is the accepted proposal in full:

Use of Artificial Intelligence in weapon systems

Description of proposal

Artificial Intelligence for military purposes is becoming an increasingly prioritised area of Government investment (with the Integrated Review announcing at least £6.6bn for R&D), and rapid advancements in technology have put us on the brink of a new generation of warfare in which AI plays an instrumental role. Deployment of this technology raises significant cross-cutting considerations: the ethics of outsourcing decisions on human life to machines, the military and peacebuilding risks and benefits of its use, and the declared goal of the UK to be a science and technology superpower which leads the world in the development of ethical AI (as stated in the Integrated Review).

But the window for the UK to play a leading role in shaping the international approach to the issue of Lethal Autonomous Weapons Systems (LAWS) is fast closing and parliament risks being left behind as the international community makes decisions in the coming years that will impact both national and global security.

Currently, the Secretary-General of the United Nations, the International Committee of the Red Cross and the majority of countries at the Convention on Conventional Weapons are calling for work to begin on a new international treaty to regulate autonomous weapons systems. Global tech leaders from Elon Musk (SpaceX) to Mustafa Suleyman (Google DeepMind) have joined this call, highlighting the grave risk that software designed for peaceful applications will be proliferated and misused. And over 180 international, regional, and national nongovernmental organisations and academic partners across 66 countries – under the banner of the Stop Killer Robots campaign – are calling for a treaty to prohibit and regulate autonomous weapons.

There is also increasing, cross-party parliamentary interest in the issue of AI usage in weapons systems, including recent proposals of amendments relating to autonomous weapons systems in the Armed Forces and Overseas Operations Bills. However, there has been little opportunity for debate or scrutiny of government policy – including of the government’s recently released Defence AI strategy and accompanying policy statement. It is also an issue of growing public interest, and a particularly pressing one for younger generations concerned with the impacts of AI on their lives and futures; it was the subject of a 2021 BBC Reith Lecture delivered by the prominent AI Professor Stuart Russell, who will address the House of Lords on this issue in October.

If the UK’s ambition to be a leader in ethical AI is to be realised, and our position as a global tech superpower is not to be stymied, parliament and the UK Government need to show leadership on this issue.

This proposed special inquiry would address the unfinished business of the AI Select Committee, which examined the implications of advances in AI but did not look specifically at military and ethical considerations – instead noting that this area ‘merits a committee of its own’. The ability to draw on the diverse insight and expertise of members means the Lords is uniquely suited to the interdisciplinary approach an inquiry on this issue requires.

Purpose of inquiry:

• Interrogate the fundamental ethical question relating to our relationship with technology – how far should society be prepared to go with respect to outsourcing military operations to algorithms, sensors and autonomous technologies?

• Assess the present state of technological developments, the prospects of deployment of Lethal Autonomous Weapons Systems, and the inherent risks represented by LAWS.

• Explore the efficacy of existing international law in regulating the use of autonomous weapons, and assess the progress of international negotiations towards a new treaty to regulate LAWS and the UK’s role in those negotiations.

• Provide an in-depth assessment of the adequacy of the Ministry of Defence’s national strategy for the deployment of AI and the accompanying policy statement released in June 2022.

• Assess the consequences of remote and autonomous weaponry in shifting the balance of power in regional and global conflicts.

• Highlight perspectives from the tech sector with regard to the risks posed by dual-use technology and its export.

• Would the UK working toward internationally agreed limits on these weapons stymie innovation, as the Government claims, or would it in fact contribute to the UK’s national objective of being a global leader in ethical AI and a Science and Tech Superpower (as stated in the Integrated Review)?

• Raise awareness of the moral and ethical issues relating to the uses of AI in military contexts and contribute to the emerging national conversation on this issue, demonstrating the relevance of Lords committee work to an area of increasing interest among a wide demographic audience.

• Formulate recommendations to the UK government aimed at ensuring an ethical and responsible use of AI and autonomous technology, particularly when applied to weapon systems.

Relevant Member experience

Interrogation of the complex dimensions of the issue of AI in weapons systems – technological, ethical, military, legal, societal, commercial – will require the rich expertise present in the Lords. The inquiry will benefit hugely from the participation of former senior military personnel, faith leaders and ethicists, academics and legal scholars, industry professionals and former government ministers, all of whom are present in the Lords, making it uniquely suited to the interdisciplinary approach an inquiry on this issue requires.

This proposed special inquiry will take up points of unfinished business from the AI Select Committee – which examined the implications of advances in AI but did not look at military and ethical considerations specifically, instead noting that this area ‘merits a committee of its own’ (to date, no select committee, Lords or Commons, has focussed specifically on the military application of AI).

It will allow further analysis of the follow-up Liaison Committee report, which cautioned against ‘complacency’, highlighted the need for ‘greater and higher-level’ coordination on national government use of AI, and warned that the Autonomy Development Centre will be inhibited by the failure to align the UK’s definition of autonomous weapons with that of international partners.

Cross-cutting departmental boundaries

The cross-cutting and interdisciplinary nature of this issue – technology, AI, international law, ethics, foreign policy, defence – means it does not fit solely within the boundaries of any one specific department. Nor does it fall within the purview of oversight committees which traditionally focus on military issues – including the Defence and Foreign Affairs Committees in the Commons, or the International Relations and Defence Committee in the Lords. Indeed, the Science and Technology Committee, the Committee on Standards in Public Life and the Education Committee have all interrogated the role of AI in the public sector, and all point to the need for vigilance and further interrogation of this issue.

One-year time frame

Yes, the stated objectives of the inquiry, set out above, would be achievable by November 2023.

Additional information

Periodic sessions held by the All-Party Parliamentary Group on Artificial Intelligence (APPG AI), which I co-founded and co-chair, have attracted interest and attendance from a broad and diverse range of parliamentarians and wider civil society, including younger and more diverse demographics to whom these issues are of great interest and whose concerns the proposed committee would connect with. The meetings of the APPG AI have only further emphasised the need for a focussed and in-depth inquiry into these issues, conducted with transparency, a formal record and a mandated government response.

Most recently I, along with Stephen Metcalfe MP, co-chaired an Evidence Hearing Session entitled “Artificial Intelligence and National Security & Defence: Autonomous Weapons Systems”. Evidence was given by speakers with considerable expertise in the fields of ethics, weapons development, non-governmental organisation advocacy and military research, with The Rt Revd. Steven Croft, Lord Bishop of Oxford, also presenting.

The vast array of issues which were touched upon included:

• the incremental nature of development

• the enduring lack of definitions

• endemic ‘black box’ issues such as unexplainability and bias

• unsolvable issues of unpredictability in complex systems and the associated problems of compliance with international laws of distinction, proportionality, indiscriminate harm, excessive harm and adverse distinction

• dual-use technology and its risk factors

• safeguarding industry / academia and export from use by hostile actors

• human control

• the insufficiency of current law without an additional normative framework or legally binding instrument specifying positive obligations and prohibitions

• the extent to which AI can add value and where it must be regulated in weapons systems

A striking element of the APPG meeting was the high level of expertise on offer, both from the panel and from the attendees, who comprised a broad spectrum of artificial intelligence and adjacent technology developers, some with military procurement contracts, who were keenly interested in and sympathetic to the intricacies of the issues. The session also attracted a younger demographic than other sessions, as many rising STEM students and tech developers are emerging from universities and start-ups with ethical concerns. The audience included filmmakers and young writers captured by the social impact of such technology on our everyday lives, as well as experienced conflict journalists with first-hand experience of the impact of weapons – such is the broad scope of concern and the weight of this issue.

The discussion ranged from the ethical implications arising from the possible delegation of targeting to a machine, to the lack of work on antitrust laws preventing monopolies in specialist supply chains and the safeguarding of industry, academia and research and development. Brief but stringent interrogations of the technical capacities and flaws of these systems, and of the likelihood of safeguarding civilians or combatants, were posited. The audience engagement and panel responses highlighted how a deeper and more technologically rigorous inquiry is needed, with and by experts in this area, without which the government cannot be expected to take a responsible or informed view.

CLEMENT-JONES
