Despite ethical and human rights concerns, the military use of AI is advancing rapidly, leading to a fear of autonomous ‘killer robots’
While drones have become familiar over recent years, the next technological leap – utilising AI and autonomous technology – is likely to see the development not only of drones able to fly themselves and stay aloft for extended periods, but also of drones that may be able to select, identify, and destroy targets without human intervention.
Artificial Intelligence (AI), automated decision making, and autonomous technologies have already become common in everyday life and offer immense opportunities.

However, like all technologies, AI also poses risks if it is poorly understood, unregulated, or used in inappropriate or dangerous ways.
AI is seen by the world’s military powers as a way to revolutionise warfare and gain an advantage over enemies. Military applications of AI have begun to enter operational use, and new systems with worrying characteristics are rapidly being rolled out. Business and academia have led, and continue to lead, the development of AI, since they are better placed than the military and public sectors to invest capital and access the resources needed for research. As a result, future military applications of AI are likely to be adaptations of technologies developed in the commercial sector.
Human Control
The extent to which autonomy within a weapons system is a concern depends upon the level of human control over the targeting and launch of weapons and the use of force. Although existing armed drones have a degree of autonomy in some of their functions – for instance in relation to flight control – at present human control is maintained over the use of force, and so today’s armed drones do not qualify as fully autonomous weapons. Many question whether weapons with the capability to make autonomous targeting decisions would ever be able to comply with the laws of war, and make the complex and subjective judgements needed to ensure that the use of force was necessary, proportional, and undertaken so as to avoid civilian casualties.

Elements of concern: autonomy and the critical functions of an armed drone
At the current time, AI is being adopted in a variety of military applications:
- Intelligence, Surveillance, and Reconnaissance.
- Cyber operations.
- Electronic warfare.
- Command and control, and decision support.
- Drone swarms.
- Autonomous weapon systems.
Risks posed by military AI systems
Each of the military applications of AI set out above poses different elements of risk. An algorithm sorting through data as part of a back-office operation at MoD headquarters would raise different issues, and require a different level of scrutiny, than an autonomous weapon system.
Nevertheless, AI systems currently in development undoubtedly pose threats to lives, human rights and well-being. The risks posed by military AI systems can be grouped into three categories: ethical and legal, operational, and strategic.
Ethical and legal risks
- Compliance with the laws of war: It is not clear how robotic systems, and in particular autonomous weapons, would be able to meet the standards set by the laws of war on making lethal decisions and protecting non-combatants.
- Accountability: It is not clear who would be held responsible if things went wrong: it makes no sense to punish a computer if it operates unpredictably and, as a result, war crimes are committed.
- Human rights and privacy: AI systems pose potential threats to human rights and individual privacy.
- Inappropriate use: Forces under pressure in battle environments may be tempted to modify technologies to overcome safety features and controls.
Operational risks
- Technical sources of bias: AI systems are only as good as their training data, and a small amount of corrupted training data can have a large impact on the performance of the system.
- Human sources of bias: Bias may result when humans misuse systems or misunderstand their output. It can also happen when operators under-trust a system or when systems are so complex that their outputs are unexplainable.
- Manipulation with malicious intent: Military AI systems, like all networked systems, are vulnerable to attacks from malicious actors who may attempt to jam, hack, or spoof the system.
Strategic risks
- Lowering thresholds: AI systems introduce the risk that political leaders will resort to using autonomous military systems in conflict rather than pursuing non-military options.
- Escalation management: The speed at which AI-enabled military action can be executed reduces the space for deliberation and negotiation, potentially leading to rapid accidental escalation with severe consequences.
- Arms racing and proliferation: The pursuit of military AI already appears to be causing arms racing, with major and regional powers vying to develop their capabilities in order to stay ahead of their rivals.
- Strategic stability: Should advanced AI systems develop to the point that they are able to predict enemy tactics or the deployment of forces, this could have highly destabilising consequences.
The development and deployment of lethal autonomous drones would give rise to a number of grave risks, primarily the loss of humanity and compassion on the battlefield. Letting machines ‘off the leash’ and giving them the ability to take life crosses a key ethical and legal Rubicon.
Autonomous lethal drones would simply lack human judgement and other qualities that are necessary to make complex ethical choices on a dynamic battlefield, to distinguish adequately between soldiers and civilians, and to evaluate the proportionality of an attack. Other risks from the deployment of autonomous weapons include unpredictable behaviour, loss of control, ‘normal’ accidents, and misuse.
For more information, view our reports and briefings below.


