New trials of AI-controlled drones show push towards ‘killer robots’ as Lords announces special inquiry

General Atomics Avenger controlled by AI in trial

Two recently announced trials of AI-controlled drones dramatically demonstrate the urgent need for international controls over the development and use of lethal autonomous weapon systems, known as ‘killer robots’.

In early January, the UK Ministry of Defence (MoD) announced that a joint UK-US AI taskforce had undertaken a trial of its ‘AI toolbox’ during an exercise on Salisbury Plain in December 2022.  The trial saw a number of Blue Bear’s Ghost drones controlled by AI which was updated during the drones’ flight. The experiments, said the MoD, “demonstrated that UK-US developed algorithms from the AI Toolbox could be deployed onto a swarm of UK UAVs and retrained by the joint AI Taskforce at the ground station and the model updated in flight, a first for the UK.”  The trials were undertaken as part of the ongoing US-UK Autonomy and Artificial Intelligence Collaboration (AAIC) Partnership Agreement.  The MoD has refused to give MPs sight of the agreement.

Two weeks later, US drone manufacturer General Atomics announced that it had conducted flight trials on 14 December 2022 in which an AI controlled one of its large Avenger drones from the company’s own flight operations facility in El Mirage, California.

Blue Bear Ghost drones in AI trial on Salisbury Plain

General Atomics said in its press release that the AI “successfully navigated the live plane while dynamically avoiding threats to accomplish its mission.” Subsequently, AI was used to control both the drone and a ‘virtual’ drone at the same time in order to “collaboratively chase a target while avoiding threats,” said the company.  In the final trial, the AI “used sensor information to select courses of action based on its understanding of the world state.” According to the company, “this demonstrated the AI pilot’s ability to successfully process and act on live real-time information independently of a human operator to make mission-critical decisions at the speed of relevance.”

Drone Wars UK has long warned that despite denials from governments on the development of killer robots, behind the scenes corporations and militaries are pressing ahead with testing, trialling and development of technology to create such systems. As we forecast in our 2018 report ‘Off the Leash’, armed drones are the gateway to the development of lethal autonomous systems.  While these particular trials will not lead directly to the deployment of lethal autonomous systems, byte-by-byte the building blocks are being put in place.

House of Lords Special Committee

Due to continuing developments in this area we were pleased to learn that the House of Lords voted to accept Lord Clement-Jones’ proposal for a year-long inquiry by a special committee to investigate the use of artificial intelligence in weapon systems.  We will monitor the work of the Committee throughout the year but for now here is the accepted proposal in full:  Read more

Fine words, Few assurances: Assessing new MoD policy on the military use of Artificial Intelligence

Drone Wars UK is today publishing a short paper analysing the UK’s approach to the ethical issues raised by the use of artificial intelligence (AI) for military purposes in two recent policy documents.  The first part of the paper reviews and critiques the Ministry of Defence’s (MoD’s) Defence Artificial Intelligence Strategy published in June 2022, while the second part considers the UK’s commitment to ‘responsible’ military artificial intelligence capabilities, presented in the document ‘Ambitious, Safe, Responsible‘ published alongside the strategy document.

Once the realm of science fiction, the technology needed to build autonomous weapon systems is currently under development in a number of nations, including the United Kingdom.  Due to recent advances in unmanned aircraft technology, it is likely that the first autonomous weapons will be a drone-based system.

Drone Wars UK believes that the development and deployment of AI-enabled autonomous weapons would give rise to a number of grave risks, primarily the loss of human values on the battlefield.  Giving machines the ability to take life crosses a key ethical and legal Rubicon.  Lethal autonomous drones would simply lack human judgment and other qualities that are necessary to make complex ethical choices on a dynamic battlefield, to distinguish adequately between soldiers and civilians, and to evaluate the proportionality of an attack.

In the short term it is likely that the military applications of autonomous technology will be in low risk areas, such as logistics and the supply chain, where, proponents argue, there are cost advantages and minimal implications for combat situations.  These systems are likely to be closely supervised by human operators.  In the longer term, as technology advances and AI becomes more sophisticated, autonomous technology is increasingly likely to become weaponised and the degree of human supervision can be expected to drop.

The real issue perhaps is not the development of autonomy itself but the way in which this milestone in technological development is controlled and used by humans.  Autonomy raises a wide range of ethical, legal, moral and political issues relating to human judgement, intentions, and responsibilities.   These questions remain largely unresolved and there should therefore be deep disquiet about the rapid advance towards developing autonomous weapons systems.  Read more

Loitering munitions, the Ukraine war, and the drift towards ‘killer robots’.

Switchblade loitering munition flies towards target area. The operator views video feed and then designates which  target the munition should strike.

Loitering munitions are now hitting the headlines in the media as a result of their use in the Ukraine war.  Vivid descriptions of ‘kamikaze drones’ and ‘suicide drones’ outline the way in which these weapons operate: they are able to find targets and fly towards them before crashing into them and exploding.  Both Russia and Ukraine are deploying loitering munitions, which allow soldiers to fire on targets such as tanks and heavy armour without the predictability of a mortar or artillery round firing on a set trajectory.  Under some circumstances these ‘fire and forget’ weapons may be able to operate with a high degree of autonomy.  For example, they can be programmed to fly around autonomously in a defined search area and highlight possible targets such as tanks to the operator.  In these circumstances they can be independent of human control. This trend towards increasing autonomy in weapons systems raises questions about how it might shape the future of warfare and the morality of their use.

Loitering munitions such as these have previously been used to military effect in Syria and the 2020 Nagorno-Karabakh war.  Although they are often described as drones, they are in many ways more like a smart missile than an uncrewed aircraft.  Loitering munitions were first developed in the 1980s and can be thought of as a ‘halfway house’ between drones and cruise missiles.  They differ from drones in that they are expendable, and unlike cruise missiles, have the ability to loiter passively in the target area and search for a target.  Potential targets are identified using radar, thermal imaging, or visual sensor data and, to date, a human operator selects the target and executes the command to destroy the target.  They are disposable, one-time use weapons intended to hunt for a target and then destroy it, hence their tag as ‘kamikaze’ weapons.  Dominic Cummings, former chief advisor to the Prime Minister, describes a loitering munition as a “drone version of the AK-47: a cheap anonymous suicide drone that flies to the target and blows itself up – it’s so cheap you don’t care”.  Read more

Military AI Audit: Congress scrutinises how the US is developing its warfighting AI capabilities

In February, the US Government Accountability Office (GAO), which audits and evaluates government activities on behalf of the US Congress, published a study examining the Department of Defense’s approach to developing and deploying artificial intelligence (AI) capabilities in weapon systems and assessing the current status of ‘war-fighting’ AI in the US military.  The GAO report gives an important insight into how the world’s most powerful military plans to use AI in combat.  It also raises a number of important ethical issues which our own Parliament should be investigating in relation to the UK’s own military AI programmes.

The GAO study concludes that although the US Department of Defense (DoD) is “actively pursuing AI capabilities,” the majority of AI activities supporting warfighting (as opposed to undertaking business and maintenance tasks) remain at the research and development stage as DoD attempts to address the differences between ‘AI’ and traditional computer software.  Research efforts are currently focused on developing autonomy for drones and other uncrewed systems, recognizing targets, and providing recommendations to commanders on the battlefield.  Reflecting the US’ interest in military AI, the budget for the DoD’s Joint AI Center has increased dramatically from $89 million in 2019 to $278 million in 2021.  In total the Joint AI Center has spent approximately $610 million on AI programmes over the past three years, although the GAO considers that it is too soon to assess the effectiveness of this spending.  Read more

Skyborg: AI control of military drones begins to take off

In June 2021, Skyborg took control of an MQ-20 Avenger drone during a military exercise in California.

The influential State of AI Report 2021, published in October, makes the alarming observation that the adoption of artificial intelligence (AI) for military purposes is now moving from research into the production phase.  The report highlights three indicators which it argues shows this development, one of which is the progress that the US Air Force Research Laboratory is making in testing its autonomous ‘Skyborg’ system to control military drones.

Skyborg (the name is a play on the word ‘cyborg’ – a biological lifeform that has been augmented with technology such as bionic implants)  is intended to be an AI ‘brain’ capable of controlling an aircraft in flight.  Initially, the technology is planned to assist a human pilot in flying the aircraft.

As is often the case with publicity material for military equipment programmes, it is not always easy to distinguish facts from hype or to penetrate the technospeak in which statements from developers are written.  However, news reports and press statements show that over the past year the US Air Force has for the first time succeeded in demonstrating an “active autonomy capability” during test flights of the Skyborg system, as a first step towards being able to use the system in combat.

Official literature on the system states that Skyborg is an “autonomous aircraft teaming architecture”, consisting of a core autonomous control system (ACS): a ‘brain’ comprised of both hardware and software components which can be used to both assist the pilot of a crewed combat aircraft and fly a swarm of uncrewed drones. The system is being designed by the military IT contractor Leidos, with input from the US Air Force and other Skyborg contractors.  It would allow the aircraft to autonomously avoid other aircraft, terrain, obstacles, and hazardous weather, and take off and land on its own. Read more

None too clever? Military applications of artificial intelligence

Drone Wars UK’s latest briefing looks at where and how artificial intelligence is currently being applied in the military context and considers the legal and ethical, operational and strategic risks posed.

Artificial Intelligence (AI), automated decision making, and autonomous technologies have already become common in everyday life and offer immense opportunities to dramatically improve society.  Smartphones, internet search engines, AI personal assistants, and self-driving cars are among the many products and services that rely on AI to function.  However, like all technologies, AI also poses risks if it is poorly understood, unregulated, or used in inappropriate or dangerous ways.

In current AI applications, machines perform a specific task for a specific purpose.  The umbrella term ‘computational methods’ may be a better way of describing such systems, which fall far short of human intelligence but have wider problem-solving capabilities than conventional software.  Hypothetically, AI may eventually be able to perform a range of cognitive functions, respond to a wide variety of input data, and understand and solve any problem that a human brain can.  Although this is a goal of some AI research programmes, it remains a distant  prospect.

AI does not operate in isolation, but functions as a ‘backbone’ in a broader system to help the system achieve its purpose.  Users do not ‘buy’ the AI itself; they buy products and services that use AI or upgrade a legacy system with new AI technology.  Autonomous systems, which are machines able to execute a task without human input, rely on artificial intelligence computing systems to interpret information from sensors and then signal actuators, such as motors, pumps, or weapons, to cause an impact on the environment around the machine.  Read more
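The sense-interpret-act loop described above can be illustrated with a minimal sketch. All names here (Reading, interpret, Actuator) are hypothetical, for illustration only, and are not drawn from any real system:

```python
# Minimal sketch of an autonomous system's control loop:
# a sensor reading is interpreted by an AI component, which then
# signals an actuator to act on the environment.
from dataclasses import dataclass

@dataclass
class Reading:
    """Illustrative sensor output: distance to the nearest obstacle."""
    distance_m: float

def interpret(reading: Reading) -> str:
    """Stand-in for the AI component: map sensor data to a decision."""
    return "brake" if reading.distance_m < 5.0 else "cruise"

class Actuator:
    """Stand-in for a motor or pump that affects the environment."""
    def __init__(self):
        self.last_command = None

    def apply(self, command: str):
        self.last_command = command

def control_step(reading: Reading, actuator: Actuator) -> str:
    decision = interpret(reading)   # AI interprets sensor information
    actuator.apply(decision)        # signal the actuator
    return decision
```

The point of the sketch is structural: the 'AI' is only one component in a larger system, sitting between sensing and actuation, which is why users buy systems that embed AI rather than the AI itself.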