MoD’s AI ethics panel expert tells Lords Committee: ‘More should be done’

L-R: Alexander Blanchard, Digital Ethics Research Fellow, Alan Turing Institute; Mariarosaria Taddeo, Associate Professor, Oxford Internet Institute; Verity Coyle, Senior Campaigner/Advisor, Amnesty UK

Almost a year ago the Ministry of Defence (MoD) launched its Defence Artificial Intelligence Strategy to explain how it would adopt and exploit artificial intelligence (AI) “at pace and scale”.  Among other things, the strategy set out the aspiration for MoD to be “trusted – by the public, our partners and our people, for the safety and reliability of our AI systems, and our clear commitment to lawful and ethical AI use in line with our core values”.

An accompanying policy document, ‘Ambitious, Safe, Responsible‘, explained how MoD intended to win trust for its AI systems.  The document put forward five Ethical Principles for AI in Defence and announced that MoD had convened an AI Ethics Advisory Panel: a group of experts from academia, industry, civil society and MoD itself to advise on the development of policy on the safe and responsible development and use of AI.

The AI Ethics Advisory Panel and its role were among the topics of interest to the House of Lords Select Committee on AI in Weapon Systems when it met for the fourth time recently to take evidence on the ethical and human rights issues posed by the development of autonomous weapons and their use in warfare.  Witnesses giving evidence at the session were Verity Coyle from Amnesty International, Professor Mariarosaria Taddeo from the Oxford Internet Institute, and Dr Alexander Blanchard from the Alan Turing Institute.  As Professor Taddeo is a member of the MoD’s AI Ethics Advisory Panel, former Defence Secretary Lord Browne took the opportunity to ask her to share her experiences of the panel.

Lord Browne:

“It is the membership of the panel that really interests me. This is a hybrid panel. It has a number of people whose interests are very obvious; it has academics, where the interests are not nearly as clearly obvious, if they have them; and it has some people in industry, who may well have interests.

What are the qualifications to be a member and what is the process you went through to become a member? At any time were you asked about interests? For example, are there academics on this panel who have been funded by the Ministry of Defence or government to do research? That would be of interest to people. Where is the transparency? This panel has met three times by June 2022. I have no idea how often it has met, because I cannot find anything about what was said at it or who said it. I am less interested in who said it, but it would appear there is no transparency at all about what ethical advice was actually shared.

As an ethicist, are you comfortable about being in a panel of this nature, which is such an important element of the judgment we will have to take as to the tolerance of our society, in light of our values, for the deployment of these weapons systems? Should it be done in this hybrid, complex way, without any transparency as to who is giving the advice, what the advice is and what effect it has had on what comes out in this policy document?”

Lord Browne’s questions neatly capture some of the concerns which Drone Wars shares about the MoD’s approach to AI ethics.  Professor Taddeo set out the benefits of the panel as she saw them in her reply, but clearly shared many of Lord Browne’s concerns.  “These are very good questions, which the MoD should address”, she answered.  She agreed that “there can be improvement in terms of transparency of the processes, notes and records”, and said that “this is mentioned whenever we meet”.  She also raised questions about the effectiveness of the panel, telling the Lords that: “This discussion is one hour and a half, and there are a lot of experts in the room who are all prepared, but we did not even scratch the surface of many issues that we have to address”.  The panel is an advisory panel, and “so far, all we have done is to be provided with a draft of, for example, the principles or the document and to give feedback”.

If the only role the MoD’s AI Ethics Advisory Panel has played is to advise on the principles included in the Defence Artificial Intelligence Strategy, the obvious question is what more is needed to ensure that MoD develops and uses AI in a safe and responsible way.  Professor Taddeo felt that the current panel “is a good effort in the right direction”, but “would hope it is not deemed sufficient to ensure ethical behaviour of defence organisations; more should be done”.  Read more

New trials of AI-controlled drones show push towards ‘killer robots’ as Lords announces special inquiry

General Atomics Avenger controlled by AI in trial

Two recently announced trials of AI-controlled drones dramatically demonstrate the urgent need to develop international controls over the development and use of lethal autonomous weapon systems, known as ‘killer robots’.

In early January, the UK Ministry of Defence (MoD) announced that a joint UK-US AI taskforce had undertaken a trial of its ‘AI toolbox’ during an exercise on Salisbury Plain in December 2022.  The trial saw a number of Blue Bear’s Ghost drones controlled by AI which was updated during the drones’ flight.  The experiments, said the MoD, “demonstrated that UK-US developed algorithms from the AI Toolbox could be deployed onto a swarm of UK UAVs and retrained by the joint AI Taskforce at the ground station and the model updated in flight, a first for the UK.”  The trials were undertaken as part of the ongoing US-UK Autonomy and Artificial Intelligence Collaboration (AAIC) Partnership Agreement.  The MoD has refused to give MPs sight of the agreement.

Two weeks later, US drone manufacturer General Atomics announced that it had conducted flight trials on 14 December 2022 where an AI had controlled one of its large Avenger drones from the company’s own flight operations facility in El Mirage, California.

Blue Bear Ghost drones in AI trial on Salisbury Plain

General Atomics said in its press release that the AI “successfully navigated the live plane while dynamically avoiding threats to accomplish its mission.”  Subsequently, AI was used to control both the drone and a ‘virtual’ drone at the same time in order to “collaboratively chase a target while avoiding threats,” said the company.  In the final trial, the AI “used sensor information to select courses of action based on its understanding of the world state.”  According to the company, “this demonstrated the AI pilot’s ability to successfully process and act on live real-time information independently of a human operator to make mission-critical decisions at the speed of relevance.”

Drone Wars UK has long warned that, despite denials from governments on the development of killer robots, behind the scenes corporations and militaries are pressing ahead with the testing, trialling and development of technology to create such systems.  As we forecast in our 2018 report ‘Off the Leash’, armed drones are the gateway to the development of lethal autonomous systems.  While these particular trials will not lead directly to the deployment of lethal autonomous systems, byte-by-byte the building blocks are being put in place.

House of Lords Special Committee

Due to continuing developments in this area, we were pleased to learn that the House of Lords voted to accept Lord Clement-Jones’ proposal for a year-long inquiry by a special committee to investigate the use of artificial intelligence in weapon systems.  We will monitor the work of the Committee throughout the year, but for now here is the accepted proposal in full:  Read more

Fine words, few assurances: Assessing new MoD policy on the military use of Artificial Intelligence

Drone Wars UK is today publishing a short paper analysing the UK’s approach to the ethical issues raised by the use of artificial intelligence (AI) for military purposes in two recent policy documents.  The first part of the paper reviews and critiques the Ministry of Defence’s (MoD’s) Defence Artificial Intelligence Strategy published in June 2022, while the second part considers the UK’s commitment to ‘responsible’ military artificial intelligence capabilities, as presented in the document ‘Ambitious, Safe, Responsible‘, published alongside the strategy document.

Once the realm of science fiction, the technology needed to build autonomous weapon systems is currently under development in a number of nations, including the United Kingdom.  Due to recent advances in unmanned aircraft technology, it is likely that the first autonomous weapons will be drone-based systems.

Drone Wars UK believes that the development and deployment of AI-enabled autonomous weapons would give rise to a number of grave risks, primarily the loss of human values on the battlefield.  Giving machines the ability to take life crosses a key ethical and legal Rubicon.  Lethal autonomous drones would simply lack human judgment and other qualities that are necessary to make complex ethical choices on a dynamic battlefield, to distinguish adequately between soldiers and civilians, and to evaluate the proportionality of an attack.

In the short term, it is likely that the military applications of autonomous technology will be in low-risk areas, such as logistics and the supply chain, where, proponents argue, there are cost advantages and minimal implications for combat situations.  These systems are likely to be closely supervised by human operators.  In the longer term, as technology advances and AI becomes more sophisticated, autonomous technology is increasingly likely to become weaponised and the degree of human supervision can be expected to drop.

The real issue perhaps is not the development of autonomy itself but the way in which this milestone in technological development is controlled and used by humans.  Autonomy raises a wide range of ethical, legal, moral and political issues relating to human judgement, intentions, and responsibilities.   These questions remain largely unresolved and there should therefore be deep disquiet about the rapid advance towards developing autonomous weapons systems.  Read more

Loitering munitions, the Ukraine war, and the drift towards ‘killer robots’.

Switchblade loitering munition flies towards target area. The operator views the video feed and then designates which target the munition should strike.

Loitering munitions are now hitting the headlines as a result of their use in the Ukraine war.  Vivid descriptions of ‘kamikaze drones’ and ‘suicide drones’ outline the way in which these weapons operate: they are able to find targets and fly towards them before crashing into them and exploding.  Both Russia and Ukraine are deploying loitering munitions, which allow soldiers to fire on targets such as tanks and heavy armour without the predictability of a mortar or artillery round fired on a set trajectory.  Under some circumstances these ‘fire and forget’ weapons may be able to operate with a high degree of autonomy.  For example, they can be programmed to fly around autonomously in a defined search area and highlight possible targets, such as tanks, to the operator.  In these circumstances they can be independent of human control.  This trend towards increasing autonomy in weapons systems raises questions about how such weapons might shape the future of warfare and about the morality of their use.

Loitering munitions such as these have previously been used to military effect in Syria and the 2020 Nagorno-Karabakh war.  Although they are often described as drones, they are in many ways more like a smart missile than an uncrewed aircraft.  Loitering munitions were first developed in the 1980s and can be thought of as a ‘halfway house’ between drones and cruise missiles.  They differ from drones in that they are expendable, and unlike cruise missiles, they have the ability to loiter passively in the target area and search for a target.  Potential targets are identified using radar, thermal imaging or visual sensor data and, to date, a human operator selects the target and executes the command to destroy it.  They are disposable, one-time use weapons intended to hunt for a target and then destroy it, hence their tag as ‘kamikaze’ weapons.  Dominic Cummings, former chief advisor to the Prime Minister, describes a loitering munition as a “drone version of the AK-47: a cheap anonymous suicide drone that flies to the target and blows itself up – it’s so cheap you don’t care”.  Read more

Military AI Audit: Congress scrutinises how the US is developing its warfighting AI capabilities


In February, the US Government Accountability Office (GAO), which audits and evaluates government activities on behalf of the US Congress, published a study examining the Department of Defense’s approach to developing and deploying artificial intelligence (AI) capabilities in weapon systems and assessing the current status of ‘war-fighting’ AI in the US military.  The GAO report gives an important insight into how the world’s most powerful military plans to use AI in combat.  It also raises a number of important ethical issues which our own Parliament should also be investigating in relation to the UK’s own military AI programmes.

The GAO study concludes that although the US Department of Defense (DoD) is “actively pursuing AI capabilities,” the majority of AI activities supporting warfighting (as opposed to business and maintenance tasks) remain at the research and development stage as DoD attempts to address the differences between ‘AI’ and traditional computer software.  Research efforts are currently focused on developing autonomy for drones and other uncrewed systems, recognising targets, and providing recommendations to commanders on the battlefield.  Reflecting US interest in military AI, the budget for the DoD’s Joint AI Center has increased dramatically from $89 million in 2019 to $278 million in 2021.  In total the Joint AI Center has spent approximately $610 million on AI programmes over the past three years, although the GAO considers that it is too soon to assess the effectiveness of this spending.  Read more

Skyborg: AI control of military drones begins to take off

In June 2021, Skyborg took control of an MQ-20 Avenger drone during a military exercise in California.

The influential State of AI Report 2021, published in October, makes the alarming observation that the adoption of artificial intelligence (AI) for military purposes is now moving from research into the production phase.  The report highlights three indicators which it argues shows this development, one of which is the progress that the US Air Force Research Laboratory is making in testing its autonomous ‘Skyborg’ system to control military drones.

Skyborg (the name is a play on the word ‘cyborg’ – a biological lifeform that has been augmented with technology such as bionic implants)  is intended to be an AI ‘brain’ capable of controlling an aircraft in flight.  Initially, the technology is planned to assist a human pilot in flying the aircraft.

As is often the case with publicity material for military equipment programmes, it is not always easy to distinguish facts from hype or to penetrate the technospeak in which statements from developers are written.  However, news reports and press statements show that over the past year the US Air Force has for the first time succeeded in demonstrating an “active autonomy capability” during test flights of the Skyborg system, as a first step towards being able to use the system in combat.

Official literature on the system states that Skyborg is an “autonomous aircraft teaming architecture”, consisting of a core autonomous control system (ACS): a ‘brain’ comprised of both hardware and software components which can be used to both assist the pilot of a crewed combat aircraft and fly a swarm of uncrewed drones. The system is being designed by the military IT contractor Leidos, with input from the US Air Force and other Skyborg contractors.  It would allow the aircraft to autonomously avoid other aircraft, terrain, obstacles, and hazardous weather, and take off and land on its own. Read more