While there appears to be a consensus between mainstream political parties, officials and defence commentators that a significant increase in spending on drone and military AI systems would be a positive development, there are serious questions about the basis on which this decision is being made and the likely impact on global security.
New military technology in general, and uncrewed systems in particular, is being presented by politicians and the media as a quick, simple and cost-effective way for the armed forces to increase ‘mass’ and ‘lethality’ without having to procure hugely expensive kit that can take years to produce. Drones are also seen as an alternative to deploying troops in significant numbers at a time when recruitment has become increasingly difficult.
However, far from aiding security, increased spending on drones, autonomous weapons and other emerging military technology will simply lead to a further degrading of UK and global security. Remote and autonomous military systems lower the threshold for the use of armed force, making it much easier for state and non-state groups alike to engage in armed attack. Such systems encourage war as the first rather than the last option.
KEY QUESTIONS
Does the war in Ukraine really demonstrate that ‘drones are the future’?
- It seems to be taken for granted that the ongoing war in Ukraine has demonstrated the effectiveness of drone and autonomous warfare and that therefore the UK must ‘learn the lesson’ and increase funding for such technology. However, while drones are being used extensively by both Russia and Ukraine – and causing very substantial numbers of casualties – it is far from clear that they are having any strategic impact.
- Larger drones such as the Turkish Bayraktar TB2 operated by Ukraine – hailed as the saviour of Ukraine at the beginning of the war – and Russia’s Orion MALE armed drone have virtually disappeared from the battlefield as they are easily shot down. Both sides are firing larger one-way attack (sometimes called ‘suicide’) drones at each other’s major cities, causing considerable harm. While these strikes are mainly for propaganda effect, again it is not clear that they will change the outcome of the war.
- Short-range surveillance/attack drones are being used very extensively on the battlefield, and the development in particular of First Person View (FPV) drones to carry out attacks on troops and vehicles has been significant. However, countermeasures such as electronic jamming mean that thousands of these drones are simply lost or crash. In many ways, drone warfare in Ukraine has become a long-term ‘cat and mouse’ fight between drones and counter-drone measures, and this is only likely to continue.
Is ‘cutting edge military technology’ a silver bullet for UK Defence?
- The capabilities of future military systems are frequently overstated and regularly underdelivered. Slick industry videos showcasing new weapons are more often than not the product of graphic designers’ creative imaginings rather than real-world demonstrations of a new capability.
- The hype surrounding trials of so-called ‘swarming drones’ is a good example. There is a world of difference between a ‘drone swarm’ in its true, techno-scientific meaning and a group of drones being deployed at the same time. A true drone swarm sees individual systems flying autonomously, communicating with each other and following a set of rules without a central controller. While manufacturers and militaries regularly claim they are testing or trialling ‘a drone swarm’, in reality they are just operating a group of drones at the same time, controlled by a group of operators.
- While there have been considerable developments in the field of AI and machine learning over the past decade, the technology is still far from mature. Anyone using a chatbot, for example, will quickly discover that there can be serious mistakes in the generated output. Trusting data generated by AI systems in a military context, without substantial human oversight and checking, is likely to result in very serious errors. The need for ongoing human oversight of AI systems is likely to negate much of any financial or human-resource saving from using AI.
Will funding new autonomous drones actually keep us safe?
- Perhaps the key question about plans to heavily invest in future military AI and drone warfare is whether it will actually keep the UK safe. Just over a decade ago, armed drones were the preserve of just three states: the US, the UK and Israel. Today, many states and non-state groups are using armed drones to launch remote attacks, resulting in large numbers of civilian casualties. In essence, as they enable both states and non-state groups to engage in armed attack with little or no risk to themselves, remote and autonomous drones lower the threshold for the use of armed force, making warfare much more likely.
- Given the global proliferation of such technology, it seems inevitable that any new developments in drone warfare funded by the UK over the next few years will proliferate and be used by other state and non-state groups. In many ways, it seems only a matter of time before drone warfare comes to the UK.
- Rather than funding the development of new lethal autonomous drones, the UK should be at the forefront of efforts to curb and control the use of these systems, working with other states, NGOs and international experts to put in place globally accepted rules to control their proliferation and use.
Is the development and use of autonomous weapons inevitable?
- Although confined to the realm of science fiction until relatively recently, plans are now being developed by a number of states, including the UK, to develop and deploy lethal autonomous weapon systems. It is highly likely that the first fully autonomous weapons will be drone-based systems.
- The real issue here is not the development of AI itself, but the way it is used. Autonomy raises a wide range of ethical, legal, moral and political issues relating to human judgement, intentions, and responsibilities. These questions remain largely unresolved and there should therefore be deep disquiet about the rapid advance towards developing AI weapons systems.
- While some argue that the development of these systems is inevitable, there are a range of measures which could be used to prevent it, including establishing international treaties and norms, developing confidence-building measures, introducing international legal instruments, and adopting unilateral control measures. Given how much we have seen drone warfare spread and create global insecurity over the past decade, now is the time for the UK to be fully involved in international discussions to control the development of lethal fully autonomous weapon systems.

In other respects the Lords have challenged the MoD’s approach more substantially, and in such cases these challenges are rejected in the government response. This is so in relation to the Lords’ recommendation that the government should adopt a definition for autonomous weapon systems (AWS). The section of the response dealing with this point lays bare the fact that the government’s priority “is to maximise our military capability in the face of growing threats”. A rather unconvincing assertion that “the irresponsible and unethical behaviours and outcomes about which the Committee is rightly concerned are already prohibited under existing legal mechanisms” is followed by the real reason for the government’s opposition: “there is a strong tendency in the ongoing debate about autonomous weapons to assert that any official AWS definition should serve as the starting point for a new legal instrument prohibiting certain types of systems”. Any international treaty which would outlaw autonomous weapon systems “represents a threat to UK Defence interests”, the government argues. The argument ends with a side-swipe at Russia and an attempt to shut down further debate by claiming that the debate is taking place “at the worst possible time, given Russia’s action in Ukraine and a general increase in bellicosity from potential adversaries.” This basically seems to be saying that by adopting a definition for autonomous weapon systems the UK would be making itself more vulnerable to Russian military action. Really?


