Proceed with caution: Lords warn over development of military AI and killer robots


The use of artificial intelligence (AI) for the purposes of warfare through the development of AI-powered autonomous weapon systems – ‘killer robots’ – “is one of the most controversial uses of AI today”, according to a new report by an influential House of Lords Committee.

The committee, which spent ten months investigating the application of AI to weapon systems and probing the UK government’s plans to develop military AI systems, concluded that the risks from autonomous weapons are such that the government “must ensure that human control is consistently embedded at all stages of a system’s lifecycle, from design to deployment”.

Echoing concerns which Drone Wars UK has repeatedly raised, the Lords found that the stated aspiration of the Ministry of Defence (MoD) to be “ambitious, safe, responsible” in its use of AI “has not lived up to reality”, and that although MoD has claimed that transparency and challenge are central to its approach, “we have not found this yet to be the case”.

The cross-party House of Lords Committee on AI in Weapon Systems was set up in January 2023 at the suggestion of Liberal Democrat peer Lord Clement-Jones, and started taking evidence in March.  The committee heard oral evidence from 35 witnesses and received nearly 70 written evidence submissions, including evidence from Drone Wars UK.

The committee’s report is entitled ‘Proceed with Caution: Artificial Intelligence in Weapon Systems’, and ‘proceed with caution’ is a fair summary of its recommendations.  The panel was drawn entirely from the core of the UK’s political and military establishment, and at times some members appeared to have difficulty grasping the technical concepts underpinning autonomous weapons.  Under the circumstances the committee was never remotely likely to recommend that the government refrain from developing new weapon systems based on advanced technology.  In many respects its report provides a road-map setting out the committee’s views on how the MoD should go about integrating AI into weapon systems and building public support for doing so.

Nevertheless, the committee has taken a sceptical view of the advantages claimed for autonomous weapons systems; has recognised the very real risks that they pose; and has proposed safeguards to mitigate the worst of the risks alongside a robust call for the government to “lead by example in international engagement on regulation of AWS [autonomous weapon systems]”.  Despite hearing from witnesses who argued that autonomous weapons “could be faster, more accurate and more resilient than existing weapon systems, could limit the casualties of war, and could protect ‘our people from harm by automating “dirty and dangerous” tasks’”, the committee was apparently unconvinced, concluding that “although a balance sheet of benefits and risks can be drawn, determining the net effect of AWS is difficult” – and that “this was acknowledged by the Ministry of Defence”.

Perhaps the most important recommendation in the committee’s report relates to human control over autonomous weapons.  The committee found that:

The Government should ensure human control at all stages of an AWS’s lifecycle. Much of the concern about AWS is focused on systems in which the autonomy is enabled by AI technologies, with an AI system undertaking analysis on information obtained from sensors. But it is essential to have human control over the deployment of the system both to ensure human moral agency and legal compliance. This must be buttressed by our absolute national commitment to the requirements of international humanitarian law.

Read more

The arms race towards autonomous weapons – industry acknowledge concerns

(L to R) Courtney Bowman, Palantir Technologies UK; Dr Kenneth Payne, Professor of Strategy, King’s College London; James Black, Assistant Director of the Defence and Security Research Group, RAND Europe; Keith Dear, Director of Artificial Intelligence Innovation, Fujitsu.

The third evidence session for the House of Lords Select Committee on Artificial Intelligence (AI) in weapon systems heard views on the development and impact of autonomous weapons from the perspective of the military technology sector.

Witnesses giving evidence at the session were former RAF officer and Ministry of Defence (MoD) advisor Dr Keith Dear, now at Fujitsu Defence and Security; James Black of RAND Europe; Kenneth Payne of King’s College London and the MoD’s Defence Academy at Shrivenham; and Courtney Bowman of US tech company Palantir Technologies.  Palantir specialises in the development of AI technologies for surveillance and military purposes and has been described as a “pro-military arm of Silicon Valley”.  The company boasts that its software is “responsible for most of the targeting in Ukraine”, supporting the Ukrainian military in identifying tanks, artillery, and other targets in the war against Russia, and its Chief Technology Officer recently told the US Senate’s Armed Services Committee that: “If we want to effectively deter those that threaten US interests, we must spend at least 5% of our budget on capabilities that will terrify our adversaries”.

Not surprisingly, the witnesses tended to take a pro-industry view towards the development of AI and autonomous weapon systems, arguing that incentives, not regulation, were required to encourage technology companies to engage with concerns over ethics and impacts, and taking the fatalistic view that there is no way of stopping the AI juggernaut.  Nevertheless, towards the end of the session an interesting discussion on the hazards of arms racing took place, with the witnesses suggesting some positive steps which could help to reduce such a risk.

Arms racing and the undermining of global peace and security becomes a risk when qualitatively new technologies promising clear military advantages seem close at hand.  China, Russia, and the United States of America are already investing heavily in robotic and artificial intelligence technologies with the aim of exploiting their military potential.  Secrecy over military technology, and uncertainty and suspicion over the capabilities that a rival may have, further accelerate arms races.

Competition between these rivals to gain an advantage over each other in autonomous technology and its military capabilities already meets the definition of an arms race –  ‘the participation of two or more nation-states in apparently competitive or interactive increases in quantity or quality of war material and/or persons under arms’ – and has the potential to escalate.  This competition has no absolute end goal: merely the relative goal of staying ahead of other competitors. Should one of these states, or another technologically advanced state, develop and deploy autonomous weapon systems in the field, it is very likely that others would follow suit. The ensuing race can be expected to be highly destabilising and dangerous. Read more

The UK, accountability for civilian harm, and autonomous weapon systems

Second evidence session. Click to watch video

The second public session of the House of Lords inquiry into artificial intelligence (AI) in weapon systems took place at the end of March.  The session examined how the development and deployment of autonomous weapons might impact upon the UK’s foreign policy and its position on the global stage and heard evidence from Yasmin Afina, Research Associate at Chatham House, Vincent Boulanin, Director of Governance of Artificial Intelligence at the Stockholm International Peace Research Institute, and Charles Ovink, Political Affairs Officer at the United Nations Office for Disarmament Affairs.

Among the wide range of issues covered in the two-hour session was the question of who could be held accountable if human rights abuses were committed by a weapon system acting autonomously.  A revealing exchange took place between Lord Houghton, a former Chief of Defence Staff (the most senior officer of the UK’s armed forces), and Charles Ovink.  Houghton asked whether it might be possible for an autonomous weapon system to comply with the laws of war under certain circumstances (at 11.11 in the video of the session):

“If that fully autonomous system has been tested and approved in such a way that it doesn’t rely on a black box technology, that constant evaluation has proved that the risk of it non-complying with the parameters of international humanitarian law are accepted, that then there is a delegation effectively from a human to a machine, why is that not then compliant, or why would you say that that should be prohibited?”

This is, of course, a highly loaded question that assumes that a variety of improbable circumstances would apply, and then presents a best-case scenario as the norm.  Ovink carefully pointed out that any decision on whether such a system should be prohibited would be for United Nations member states, but that the question posed ‘a big if’: it was not clear what kind of test environment could mimic a real-life warzone with civilians present and guarantee that the laws of war would be followed.  Even if it could, there would still need to be a human accountable for any civilian deaths that might occur.  Read more

Lords Committee on AI in Weapon Systems: AI harms, humans vs computers, and unethical Russians

First evidence session. Click to watch video

A special investigation set up by the House of Lords is now taking evidence on the development, use and regulation of artificial intelligence (AI) in weapon systems.  Chaired by crossbench peer Lord Lisvane, a former Clerk of the House of Commons, a stand-alone Select Committee is considering the utility and risks arising from military uses of AI.

The committee is seeking written evidence from members of the public and interested parties, and recently conducted the first of its oral evidence sessions.  Three specialists in international law, Noam Lubell of the University of Essex, Georgia Hinds, Legal Advisor at International Committee of the Red Cross (ICRC), and Daragh Murray of Queen Mary University of London, answered a variety of questions about whether autonomous weapon systems might be able to comply with international law and how they could be controlled at the international level.

One of the more interesting issues raised during the discussion was the point that, regardless of military uses, AI has the potential to wreak a broad range of harms across society, and there is a need to address this concern rather than racing on blindly with the development and roll-out of ever more powerful AI systems.  This is a matter which is beginning to attract wider attention.  Last month the Future of Life Institute published an open letter calling for all AI labs to immediately pause the training of AI systems more powerful than GPT-4 for at least six months.  Over 30,000 researchers and tech sector workers have signed the letter to date, including Stuart Russell, Steve Wozniak, Elon Musk, and Yuval Noah Harari.

Leaving aside whether six months could be long enough to resolve issues around AI safety, there is an important question to be answered here.  There are already numerous examples of cases where existing computerised and AI systems have caused harm, regardless of what the future might hold.  Why, then, are we racing forward in this field?  Has the combination of tech multinationals and unrestrained capitalism become such an unstoppable juggernaut that humanity is literally no longer able to control where the forces we have created are taking us?  If not, then why won’t governments intervene to put the brakes on the development and use of AI, and what interests are they actually working to protect?  This is unlikely to be a line of inquiry the Lords Committee will be pursuing.  Read more

New trials of AI-controlled drones show push towards ‘killer robots’ as Lords announces special inquiry

General Atomics Avenger controlled by AI in trial

Two recently announced trials of AI-controlled drones dramatically demonstrate the urgent need to develop international controls over the development and use of lethal autonomous weapon systems, known as ‘killer robots’.

In early January, the UK Ministry of Defence (MoD) announced that a joint UK-US AI taskforce had undertaken a trial of its ‘AI toolbox’ during an exercise on Salisbury Plain in December 2022.  The trial saw a number of Blue Bear’s Ghost drones controlled by AI that was updated during the drones’ flight.  The experiments, said the MoD, “demonstrated that UK-US developed algorithms from the AI Toolbox could be deployed onto a swarm of UK UAVs and retrained by the joint AI Taskforce at the ground station and the model updated in flight, a first for the UK.”  The trials were undertaken as part of the ongoing US-UK Autonomy and Artificial Intelligence Collaboration (AAIC) Partnership Agreement.  The MoD has refused to give MPs sight of the agreement.

Two weeks later, US drone manufacturer General Atomics announced that it had conducted flight trials on 14 December 2022 where an AI had controlled one of its large Avenger drones from the company’s own flight operations facility in El Mirage, California.

Blue Bear Ghost drones in AI trial on Salisbury Plain

General Atomics said in its press release that the AI “successfully navigated the live plane while dynamically avoiding threats to accomplish its mission.”  Subsequently, AI was used to control both the drone and a ‘virtual’ drone at the same time in order to “collaboratively chase a target while avoiding threats,” said the company.  In the final trial, the AI “used sensor information to select courses of action based on its understanding of the world state.”  According to the company, “this demonstrated the AI pilot’s ability to successfully process and act on live real-time information independently of a human operator to make mission-critical decisions at the speed of relevance.”

Drone Wars UK has long warned that, despite denials from governments on the development of killer robots, behind the scenes corporations and militaries are pressing ahead with the testing, trialling and development of technology to create such systems.  As we forecast in our 2018 report ‘Off the Leash’, armed drones are the gateway to the development of lethal autonomous systems.  While these particular trials will not lead directly to the deployment of lethal autonomous systems, byte-by-byte the building blocks are being put in place.

House of Lords Special Committee

Due to continuing developments in this area we were pleased to learn that the House of Lords voted to accept Lord Clement-Jones’ proposal for a year-long inquiry by a special committee to investigate the use of artificial intelligence in weapon systems.  We will monitor the work of the Committee throughout the year but for now here is the accepted proposal in full:  Read more