Drone Wars UK’s latest briefing looks at where and how artificial intelligence is currently being applied in the military context and considers the ethical and legal, operational, and strategic risks it poses.

Artificial Intelligence (AI), automated decision making, and autonomous technologies have already become common in everyday life and offer immense opportunities to dramatically improve society. Smartphones, internet search engines, AI personal assistants, and self-driving cars are among the many products and services that rely on AI to function. However, like all technologies, AI also poses risks if it is poorly understood, unregulated, or used in inappropriate or dangerous ways.
In current AI applications, machines perform a specific task for a specific purpose. The umbrella term ‘computational methods’ may be a better way of describing such systems, which fall far short of human intelligence but have wider problem-solving capabilities than conventional software. Hypothetically, AI may eventually be able to perform a range of cognitive functions, respond to a wide variety of input data, and understand and solve any problem that a human brain can. Although this is a goal of some AI research programmes, it remains a distant prospect.
AI does not operate in isolation, but functions as a ‘backbone’ within a broader system to help that system achieve its purpose. Users do not ‘buy’ the AI itself; they buy products and services that use AI, or upgrade a legacy system with new AI technology. Autonomous systems, which are machines able to execute a task without human input, rely on AI computing systems to interpret information from sensors and then signal actuators, such as motors, pumps, or weapons, to act on the environment around the machine.
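To make that sense-interpret-act structure concrete, here is a minimal illustrative sketch in Python. Every name in it (Reading, interpret, actuate, the distance threshold) is hypothetical and stands in for whatever sensors, trained models, and actuators a real system would use:

```python
# Illustrative sketch of a generic sense-interpret-act loop.
# All names and values here are hypothetical stand-ins, not
# drawn from any real military or commercial system.

from dataclasses import dataclass

@dataclass
class Reading:
    obstacle_distance_m: float  # hypothetical sensor output

def interpret(reading: Reading) -> str:
    """The 'AI' step: turn raw sensor data into a decision."""
    # A real system would run a trained model here; this simple
    # threshold rule only marks where inference sits in the loop.
    return "steer_away" if reading.obstacle_distance_m < 5.0 else "continue"

def actuate(command: str) -> None:
    """Signal an actuator (motor, pump, etc.) based on the decision."""
    print(f"actuator command: {command}")

def control_loop(readings: list[Reading]) -> None:
    # Note that no human input appears anywhere in this loop:
    # that absence is what makes the system 'autonomous'.
    for reading in readings:
        actuate(interpret(reading))

if __name__ == "__main__":
    control_loop([Reading(12.0), Reading(3.2)])
```

The structural point of the sketch is that no human appears inside the loop, which is what distinguishes an autonomous system from a remotely operated one.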
AI is seen by the world’s military powers as a way to revolutionise warfare and gain an advantage over enemies. Military applications of AI have begun to enter operational use and new systems with worrying characteristics are rapidly being rolled out. Business and academia have led, and continue to lead, the development of AI since they are better placed to invest capital and access resources needed for research than the military and public sectors. As a result, it is likely that future military applications of AI will be adaptations of technologies developed in the commercial sector. At the current time, AI is being adopted in the following military applications:
- Intelligence, Surveillance, and Reconnaissance.
- Cyber operations.
- Electronic warfare.
- Command and control and decision support.
- Drone swarms.
- Autonomous weapon systems.
AI and the UK military
The Integrated Review and other government statements leave no doubt that the government attaches immense importance to the military applications of AI and intends to race ahead with its development. However, although it has published doctrine on the use of automated systems, to date the Ministry of Defence (MoD) has remained silent on the ethical framework governing the use of its AI and autonomous systems, despite having already taken significant decisions on the future use of military AI.
The MoD has repeatedly promised to publish its Defence AI Strategy, which is expected to lay out a set of high-level ethical principles governing military AI systems across their life cycle. The strategy has been prepared following discussion with selected experts from academia and industry, although the government has yet to undertake any open consultation on the ethical and other issues associated with military uses of AI. One of the principal purposes of the strategy will be to reassure industry and the public that the MoD is a responsible partner for collaboration on AI projects.
In the meantime, programmes and policies are rapidly moving forward in the absence of any ethical compass, and major questions remain unanswered. Under what circumstances will British armed forces employ AI technology? What degree of human control does the government think appropriate? How will risks be addressed? And how will the UK demonstrate to allies and adversaries that it intends to take a principled approach to the use of military AI technology?
Risks posed by military AI systems
Each of the military applications of AI set out above poses a different profile of risk. An algorithm sorting through data as part of a back-office operation at MoD headquarters would raise different issues and concerns, and require a different level of scrutiny, from an autonomous weapon system.
Nevertheless, AI systems currently in development undoubtedly pose threats to lives, human rights and well-being. The risks posed by military AI systems can be grouped into three categories: ethical and legal, operational, and strategic.
Ethical and legal risks
- Compliance with the laws of war: It is not clear how robotic systems, and in particular autonomous weapons, would be able to meet the standards set by the laws of war on making lethal decisions and protecting non-combatants.
- Accountability: It is not clear who would be held responsible if things went wrong: it makes no sense to punish a computer if it operates unpredictably and, as a result, war crimes are committed.
- Human rights and privacy: AI systems pose potential threats to human rights and individual privacy.
- Inappropriate use: Forces under pressure in battle environments may be tempted to modify technologies to overcome safety features and controls.
Operational risks
- Technical sources of bias: AI systems are only as good as their training data, and a small amount of corrupted training data can have a large impact on a system’s performance (see the sketch after this list).
- Human sources of bias: Bias may result when humans misuse systems or misunderstand their output. It can also happen when operators under-trust a system or when systems are so complex that their outputs are unexplainable.
- Manipulation with malicious intent: Military AI systems, like all networked systems, are vulnerable to attacks from malicious actors who may attempt to jam, hack, or spoof the system.
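To make the first of these risks concrete, the following minimal sketch (assuming scikit-learn and NumPy are available; the synthetic dataset, model choice, and corruption rates are invented purely for illustration) flips the labels of a fraction of training examples and measures the effect on a simple classifier. Exact numbers will vary with the model and data:

```python
# Illustrative sketch: corrupted training data degrades a model.
# The dataset is synthetic and the poisoning rates are invented.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

def accuracy_with_corrupted_labels(flip_fraction: float) -> float:
    """Train on data whose labels are partly flipped; score on clean data."""
    rng = np.random.default_rng(0)
    y_corrupted = y_train.copy()
    n_flip = int(flip_fraction * len(y_corrupted))
    idx = rng.choice(len(y_corrupted), size=n_flip, replace=False)
    y_corrupted[idx] = 1 - y_corrupted[idx]  # flip 0 <-> 1 labels
    model = LogisticRegression(max_iter=1000).fit(X_train, y_corrupted)
    return model.score(X_test, y_test)

for frac in (0.0, 0.05, 0.20):
    acc = accuracy_with_corrupted_labels(frac)
    print(f"{frac:.0%} of labels flipped -> test accuracy {acc:.3f}")
```

Running the loop shows accuracy falling as the flipped fraction rises, which is the point of the bullet above: a modest amount of corrupted data can materially degrade a system’s performance.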
Strategic risks
- Lowering thresholds: AI systems introduce the risk that political leaders will resort to using autonomous military systems in conflict rather than pursuing non-military options.
- Escalation management: The speed at which AI-enabled military action can be executed shrinks the space for deliberation and negotiation, possibly leading to rapid accidental escalation with severe consequences.
- Arms racing and proliferation: The pursuit of military AI already appears to be fuelling an arms race, with major and regional powers vying to develop their capabilities in order to stay ahead of their rivals.
- Strategic stability: Should advanced AI systems develop to the point that they are able to predict enemy tactics or the deployment of forces, this could have highly destabilising consequences.
This briefing sets out the various military applications that have been envisioned for AI and highlights their potential for causing harm. It argues that proposals for mitigating the risks posed by military AI systems must be based on the principle of ensuring that such systems remain at all times under human supervision.
As yet there appears to be little public appreciation of the changes and risks that society is facing as a result of advances in AI and robotics. This briefing is intended, in part, as a wake-up call. AI can and should be used to improve conditions in the workplace and services to the public, and not to increase the lethality of war-fighting.