The Strategic Defence Review and Drone Warfare: Questioning a Dangerous Consensus

While there appears to be a consensus among mainstream political parties, officials and defence commentators that a significant increase in spending on drone and military AI systems would be a positive development, there are serious questions about the basis on which this decision is being made and about its likely impact on global security.

New military technology in general, and uncrewed systems in particular, is being presented by politicians and the media as a quick, simple and cost-effective way for the armed forces to increase ‘mass’ and ‘lethality’ without having to procure hugely expensive kit that can take years to produce. Drones are also seen as an alternative to deploying troops in significant numbers at a time when recruitment has become increasingly difficult.

However, far from aiding security, increased spending on drones, autonomous weapons and other emerging military technology will simply lead to a further degrading of UK and global security. Remote and autonomous military systems lower the threshold for the use of armed force, making it much easier for state and non-state groups alike to engage in armed attack. Such systems encourage war as the first rather than the last option.

KEY QUESTIONS

Does the war in Ukraine really demonstrate that ‘drones are the future’?
  • It seems to be taken for granted that the ongoing war in Ukraine has demonstrated the effectiveness of drone and autonomous warfare and that therefore the UK must ‘learn the lesson’ and increase funding for such technology. However, while drones are being used extensively by both Russia and Ukraine – and causing very substantial numbers of casualties – it is far from clear that they are having any strategic impact.
  • Larger drones such as the Turkish Bayraktar TB2 operated by Ukraine – hailed as the saviour of Ukraine at the beginning of the war – and Russia’s Orion MALE armed drone have virtually disappeared from the skies above the battlefield as they are easily shot down. Both sides are firing larger one-way attack (sometimes called ‘suicide’) drones at each other’s major cities, causing considerable harm. While these strikes are mainly for propaganda effect, it is again not clear whether they will change the outcome of the war.
  • Short-range surveillance/attack drones are being used very extensively on the battlefield, and the emergence of First Person View (FPV) drones to carry out attacks on troops and vehicles has been particularly significant. However, countermeasures such as electronic jamming mean that thousands of these drones are simply lost or crash. In many ways, drone warfare in Ukraine has become a long-term ‘cat and mouse’ fight between drones and counter-drone measures, and this is only likely to continue.
Is ‘cutting edge military technology’ a silver bullet for UK Defence?
  • The capabilities of future military systems are frequently overstated and regularly under-delivered. Slick industry videos showcasing new weapons are more often than not the product of graphic designers’ creative imaginings rather than real-world demonstrations of a new capability.

    The hype surrounding trials of so-called ‘swarming drones’ is a good example. There is a world of difference between a ‘drone swarm’ in its true, techno-scientific meaning and a group of drones being deployed at the same time. A true drone swarm sees individual systems flying autonomously, communicating with each other and following a set of rules without a central controller. While manufacturers and militaries regularly claim they are testing or trialling ‘a drone swarm’, in reality they are just operating a group of drones at the same time, controlled by a group of operators (the illustrative sketch at the end of this section makes the distinction concrete).

  • While there have been considerable developments in the field of AI and machine learning over the past decade, the technology is still far from mature. Anyone using a chatbot, for example, will quickly discover that there can be serious mistakes in the generated output. Trusting data generated by AI systems in a military context, without substantial human oversight and checking, is likely to result in very serious errors. The need for ongoing human oversight of AI systems is likely to negate much of the financial and human-resource savings that AI is supposed to deliver.
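
    To make the swarm distinction above concrete, here is a minimal, purely illustrative sketch in Python – a toy model assuming simple boids-style cohesion and separation rules, not a description of any real military system, and with hypothetical names and numbers throughout. In a true swarm each drone computes its own next move from local rules and its neighbours’ positions, with no central controller; in a scripted ‘group of drones’, a single controller simply assigns every position.

        # Illustrative toy model only: decentralised swarm behaviour vs a
        # centrally scripted group. All names and numbers are hypothetical.
        import random

        class SwarmDrone:
            """One member of a decentralised swarm: no central controller."""
            def __init__(self, x, y):
                self.x, self.y = x, y

            def step(self, neighbours):
                # Cohesion: drift towards the average position of the others.
                avg_x = sum(n.x for n in neighbours) / len(neighbours)
                avg_y = sum(n.y for n in neighbours) / len(neighbours)
                self.x += 0.1 * (avg_x - self.x)
                self.y += 0.1 * (avg_y - self.y)
                # Separation: nudge away from any neighbour that is too close.
                for n in neighbours:
                    if abs(n.x - self.x) + abs(n.y - self.y) < 1.0:
                        self.x += 0.5 * (self.x - n.x)
                        self.y += 0.5 * (self.y - n.y)
                # Small random drift: behaviour emerges from rules, not a plan.
                self.x += random.uniform(-0.2, 0.2)
                self.y += random.uniform(-0.2, 0.2)

        def scripted_group(waypoints):
            """A 'group of drones': a central controller assigns every position."""
            return list(waypoints)

        # Ten swarm members, each updating itself from local rules alone.
        swarm = [SwarmDrone(random.uniform(0, 10), random.uniform(0, 10)) for _ in range(10)]
        for _ in range(100):
            for d in swarm:
                d.step([n for n in swarm if n is not d])
        print("Swarm centre:", sum(d.x for d in swarm) / 10, sum(d.y for d in swarm) / 10)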
Will funding new autonomous drones actually keep us safe?
  • Perhaps the key question about plans to heavily invest in future military AI and drone warfare is whether it will actually keep the UK safe. Just over a decade ago, armed drones were the preserve of just three states: the US, the UK and Israel. Today, many states and non-state groups are using armed drones to launch remote attacks, resulting in large numbers of civilian casualties. In essence, as they enable both states and non-state groups to engage in armed attack with little or no risk to themselves, remote and autonomous drones lower the threshold for the use of armed force, making warfare much more likely.
  • Given the global proliferation of such technology, it seems inevitable that any new developments in drone warfare funded by the UK over the next few years will spread and be used by other state and non-state groups. In many ways, it seems only a matter of time before drone warfare comes to the UK.
  • Rather than funding the development of new lethal autonomous drones, the UK should be at the forefront of efforts to curb and control the use of these systems, working with other states, NGOs and international experts to put in place globally accepted rules to control their proliferation and use.
Is the development and use of autonomous weapons inevitable?
  • Although lethal autonomous weapon systems remained the realm of science fiction until relatively recently, a number of states, including the UK, are now developing plans to build and deploy them. It is highly likely that the first fully autonomous weapons will be drone-based systems.
  • The real issue here is not the development of AI itself, but the way it is used. Autonomy raises a wide range of ethical, legal, moral and political issues relating to human judgement, intentions, and responsibilities. These questions remain largely unresolved and there should therefore be deep disquiet about the rapid advance towards developing AI weapons systems.
  • While some argue that the development of these systems is inevitable, there is a range of measures which could be used to prevent it, including establishing international treaties and norms, developing confidence-building measures, introducing international legal instruments, and adopting unilateral control measures. Given how far we have seen drone warfare spread and create global insecurity over the past decade, now is the time for the UK to be fully involved in international discussions to control the development of lethal fully autonomous weapon systems.

Read more

Proceed in Harmony: The Government replies to the Lords on AI in Weapon Systems


Last December a select committee of the House of Lords published ‘Proceed with Caution’: a report setting out the findings of a year-long investigation into the use of artificial intelligence (AI) in weapon systems.

Members of the Lords committee were drawn entirely from the core of the UK’s political and security establishment, and their report was hardly radical in its conclusions.  Nevertheless, it made a number of useful recommendations and concluded that the risks from autonomous weapons are such that the government “must ensure that human control is consistently embedded at all stages of a system’s lifecycle, from design to deployment”.  The Lords found that Ministry of Defence (MoD) claims to be “ambitious, safe, responsible” in its use of AI had “not lived up to reality”.

The government subsequently pledged to reply to the Lords report, and on 21 February published its formal response.  Perhaps the best way of summarising the tone of the response is to quote from its concluding paragraph: “‘Proceed with caution’, the overall message of this [Lords] report, mirrors the MoD’s approach to AI adoption.”  There is little new in the government response, and nothing in it will come as any surprise to observers and analysts of UK government policy on AI and autonomous technologies.  The response merely outlines how the government intends to follow the course of action it had already planned to take, reiterating the substance of past policy statements such as the Defence Artificial Intelligence Strategy and puffing up recent MoD activity and achievements in the military AI field.

As might be imagined, the response takes a supportive approach to recommendations from the Lords which are aligned to its own agenda, such as developing high-quality data sets, improving MoD’s AI procurement arrangements, and undertaking research into potential future AI capabilities.  On the positive side, it is encouraging to see that in some areas concerns over the risks and limitations of AI technologies are highlighted, for example in the need for review and rigorous testing of new systems.  MoD acknowledges that rigorous testing would be required before an operator could be confident in an AI system’s use and effect, that current procedures, including the Article 36 weapons review process, will need to be adapted and updated, and that changes in operational environment may require weapon systems to be retested.

The response also reveals that the government is working on a Joint Service Publication covering all the armed forces to give more concrete directions and guidance on implementing MoD’s AI ethical principles.  The document, ‘Dependable AI in Defence’, will set out the governance, accountabilities, processes and reporting mechanisms needed to translate ethical policies into tangible actions and procedures.  Drone Wars UK and other civil society organisations have long called for MoD to formulate such guidance as a priority.

In some areas the MoD has relatively little power to meet the committee’s recommendations, such as in adjusting government pay scales to match market rates and attract qualified staff to work on MoD AI projects.  Here the rejoinder is little more than flannel, mentioning that “a range of steps” are being taken “to make Defence AI an attractive and aspirational choice.”

In other respects the Lords have challenged MoD’s approach more substantially, and in such cases these challenges are rejected in the government response.  This is so in relation to the Lords’ recommendation that the government should adopt a definition for autonomous weapons systems (AWS).  The section of the response dealing with this point lays bare the fact that the government’s priority “is to maximise our military capability in the face of growing threats”.  A rather unconvincing assertion that “the irresponsible and unethical behaviours and outcomes about which the Committee is rightly concerned are already prohibited under existing legal mechanisms” is followed by the real reason for the government’s opposition: “there is a strong tendency in the ongoing debate about autonomous weapons to assert that any official AWS definition should serve as the starting point for a new legal instrument prohibiting certain types of systems”.  Any international treaty which would outlaw autonomous weapon systems “represents a threat to UK Defence interests”, the government argues.  The argument ends with a side-swipe at Russia and an attempt to shut down further debate by claiming that the debate is taking place “at the worst possible time, given Russia’s action in Ukraine and a general increase in bellicosity from potential adversaries.”  This basically seems to be saying that in adopting a definition for autonomous weapon systems the UK would be making itself more vulnerable to Russian military action.  Really?

Developments on both sides of the Atlantic signal push to develop AI attack drones

Artist impression of crewed aircraft operating with autonomous drones. Credit: Lockheed Martin

Recent government and industry announcements signal clear intent by both the US and the UK to press ahead with the development of a new generation of AI attack drones despite serious concerns about the development of autonomous weapons. While most details are being kept secret, it is clear from official statements, industry manoeuvring and budget commitments that these new drones are expected to be operational by the end of the decade.

The current focus is the development of drones that were previously labelled ‘loyal wingman’ but are now being described either as ‘Collaborative Combat Aircraft’ (CCA) or ‘Autonomous Collaborative Platforms’ (ACP).  As always, the nomenclature around ‘drones’ is a battlefield in itself.  The concept for this type of drone is for one or more to fly alongside, or in the vicinity of, a piloted military aircraft, with the drones carrying out specifically designated tasks such as surveillance, electronic warfare, guiding weapons onto targets, or carrying out air-to-air or air-to-ground strikes.  Rather than being directly controlled by an individual on the ground, as current armed drones like the Reaper or Bayraktar are, these drones will fly autonomously. According to DARPA officials (using the beloved sports metaphor), these drones will allow pilots to direct squads of unmanned aircraft “like a football coach who chooses team members and then positions them on the field to run plays.”

Next Generation

In May, the US Air Force issued a formal request for US defence companies to bid to build a new piloted aircraft to replace the F-22.  However, equally important for the ‘Next Generation Air Dominance (NGAD)’ program is the development of new autonomous drones and a ‘combat cloud’ communication network.  While the development of the drones is a covert programme, US Air Force Secretary Frank Kendall said they will be built “in parallel” to the piloted aircraft. Kendall publicly stated that the competition to develop CCA was expected to begin in Fiscal Year 2024 (note this runs from Oct 2023 to Sept 2024).

While the USAF is planning to build around 200 of the new crewed aircraft, Kendall told reporters that it expects to build around 1,000 of the drones. “This figure was derived from an assumed two CCAs per 200 NGAD platforms and an additional two for each of 300 F-35s for a total of 1,000,” Kendall explained. Others expect even more of these drones to be built.  While the NGAD fighter aircraft itself is not expected to be operational until the 2030s, CCAs are expected to be deployed by the end of the 2020s.

It’s important to be aware that there will not be one type of drone built under this programme, but a range with different capabilities able to carry out different tasks.  Some will be ‘expendable’ – that is, designed for just one mission – something like the ‘one-way attack’ drones that we have seen increasingly used in Ukraine and elsewhere; some will be ‘attritable’ – that is, designed so that their loss in combat would not be severely damaging; while others, described as ‘exquisite’, will be more capable and specifically designed not to be lost during combat.  A number of companies have set out their proposals, with some even building prototypes and undertaking test flights.

Small drones, big problem. Two new reports examine the rise and rise of armed ‘one-way attack’ drones 

From top: Israeli Harop, Iranian Shahed-136, Polish Warmate.

Over the past few years – and particularly in the ongoing war in Ukraine – we have seen a rise in the use of what have become known as ‘kamikaze’ or ‘suicide’ drones.  Two excellent new reports have just been released which examine these systems: ‘One-Way Attack Drones: Loitering Munitions of the Past and Present’, written by Dan Gettinger, formerly of the Bard Drone Center, and ‘Loitering Munitions and Unpredictability’, by Ingvild Bode & Tom Watts.  Between them they examine the history, current use, and growing concern about the increasing autonomy of such systems.

A drone by any other name…?

Firstly, to address the elephant in the room: are these systems ‘drones’?

Naming has always been a keenly fought aspect of the debate about drones, with sometimes bitter conflict over whether such platforms should be named ‘unmanned aerial vehicles’ (UAVs), ‘remotely piloted air systems’ (RPAS) or ‘drones’. ‘Drones’ has been the term that has stuck, particularly in mainstream media, but is regularly used interchangeably with UAV (with ‘unmanned’ being replaced in recent years by ‘uncrewed’ for obvious reasons).  While many in the military now accept the term ‘drone’ and are happy to use it depending on the audience, some continue to insist that it belittles both the capabilities of the systems and those who operate them.

Whichever term is used, a further aspect of the naming debate is that an increasing number and type of military aerial systems are being labelled as ‘drones’.  While all these systems have significant characteristics in common (aerial systems, unoccupied, used for surveillance/intelligence gathering and/or attack), they can also be very different in terms of size and range; can carry out very different missions; have different effects; and raise different legal and ethical issues.

One such type of system is the so-called ‘suicide’ or ‘kamikaze’ drone – perhaps better labelled the ‘one-way attack’ drone.  There are several different categories of this type of drone, and while they are used to carry out remote lethal attacks and therefore have significant aspects in common with the much larger Reaper or Bayraktar drones, they are significantly different in that they are not designed to be re-used, but rather are expendable: the warhead is part of the structure of the system and is destroyed in use. Importantly, while loitering munitions are a sub-set of ‘one-way attack’ drones, not all one-way attack drones are loitering munitions.

Dan Gettinger’s report ‘One-Way Attack Drones: Loitering Munitions of the Past and Present’ helpfully sets out a history of the development of these systems and identifies three sub-categories: anti-radar systems, portable or ‘backpackable’ systems, and Iranian systems.  He has compiled a useful dataset of over 200 such systems (although not all are currently in operation). All of these, he argues, can be traced back to “the transition from the era of jet-powered target drones to that of remotely piloted vehicles.”

The UK, accountability for civilian harm, and autonomous weapon systems

Second evidence session.

The second public session of the House of Lords inquiry into artificial intelligence (AI) in weapon systems took place at the end of March.  The session examined how the development and deployment of autonomous weapons might impact upon the UK’s foreign policy and its position on the global stage, and heard evidence from Yasmin Afina, Research Associate at Chatham House, Vincent Boulanin, Director of Governance of Artificial Intelligence at the Stockholm International Peace Research Institute, and Charles Ovink, Political Affairs Officer at the United Nations Office for Disarmament Affairs.

Among the wide range of issues covered in the two-hour session was the question of who could be held accountable if human rights abuses were committed by a weapon system acting autonomously.  A revealing exchange took place between Lord Houghton, a former Chief of Defence Staff (the most senior officer of the UK’s armed forces), and Charles Ovink.  Houghton asked whether it might be possible for an autonomous weapon system to comply with the laws of war under certain circumstances (at 11.11 in the video of the session):

“If that fully autonomous system has been tested and approved in such a way that it doesn’t rely on a black box technology, that constant evaluation has proved that the risk of it non-complying with the parameters of international humanitarian law are accepted, that then there is a delegation effectively from a human to a machine, why is that not then compliant, or why would you say that that should be prohibited?”

This is, of course, a highly loaded question that assumes that a variety of improbable circumstances would apply, and then presents a best-case scenario as the norm.  Ovink carefully pointed out that any decision on whether such a system should be prohibited would be for United Nations member states to decide, but that the question posed ‘a big if’, and it was not clear what kind of test environment could mimic a real-life warzone with civilians present and guarantee that the laws of war would be followed.  Even if this were the case, there would still need to be a human accountable for any civilian deaths that might occur.

Fine words, Few assurances: Assessing new MoD policy on the military use of Artificial Intelligence

Drone Wars UK is today publishing a short paper analysing the UK’s approach to the ethical issues raised by the use of artificial intelligence (AI) for military purposes in two recently published policy documents.  The first part of the paper reviews and critiques the Ministry of Defence’s (MoD’s) Defence Artificial Intelligence Strategy, published in June 2022, while the second part considers the UK’s commitment to ‘responsible’ military artificial intelligence capabilities, presented in the document ‘Ambitious, Safe, Responsible’, published alongside the strategy.

Once the realm of science fiction, the technology needed to build autonomous weapon systems is currently under development in a number of nations, including the United Kingdom.  Due to recent advances in unmanned aircraft technology, it is likely that the first autonomous weapons will be drone-based systems.

Drone Wars UK believes that the development and deployment of AI-enabled autonomous weapons would give rise to a number of grave risks, primarily the loss of human values on the battlefield.  Giving machines the ability to take life crosses a key ethical and legal Rubicon.  Lethal autonomous drones would simply lack human judgment and other qualities that are necessary to make complex ethical choices on a dynamic battlefield, to distinguish adequately between soldiers and civilians, and to evaluate the proportionality of an attack.

In the short term it is likely that the military applications of autonomous technology will be in low-risk areas, such as logistics and the supply chain, where, proponents argue, there are cost advantages and minimal implications for combat situations.  These systems are likely to be closely supervised by human operators.  In the longer term, as technology advances and AI becomes more sophisticated, autonomous technology is increasingly likely to become weaponised and the degree of human supervision can be expected to drop.

The real issue perhaps is not the development of autonomy itself but the way in which this milestone in technological development is controlled and used by humans.  Autonomy raises a wide range of ethical, legal, moral and political issues relating to human judgement, intentions, and responsibilities.  These questions remain largely unresolved and there should therefore be deep disquiet about the rapid advance towards developing autonomous weapons systems.