The Ministry of Defence (MoD) is undertaking artificial intelligence (AI) projects that could pave the way for new lethal weapons systems requiring little or no human control, according to information released to Drone Wars UK through a Freedom of Information Act request.
The development of lethal autonomous military systems – sometimes described as ‘killer robots’ – is deeply contentious and raises major ethical and human rights issues. Last year the MoD published its Defence Artificial Intelligence Strategy setting out how it intends to adopt AI technology in its activities.
Drone Wars UK asked the MoD to provide the list of “over 200 AI-related R&D programmes” which the Strategy document stated the MoD was working on. Details of these programmes were not given in the Strategy itself, and the MoD has evaded questions from Parliamentarians asking for more detail on its AI activities.
Although the Defence Artificial Intelligence Strategy claimed that over 200 programmes were underway, only 73 are shown on the list provided to Drone Wars UK. Release of the names of some projects was refused on defence, security and/or national security grounds.
However, the MoD conceded that a list of “over 200” projects was never held when the strategy document was prepared in 2022, explaining that “our assessment of AI-related projects and programmes drew on a data collection exercise that was undertaken in 2019 that identified approximately 140 activities underway across the Front-Line Commands, Defence Science and Technology Laboratory (Dstl), Defence Equipment and Support (DE&S) and other organisations”. The assessment that there were at least 200 programmes in total “was based on our understanding of the totality of activity underway across the department at the time”.
The list released includes programmes for all three armed forces, including a number of projects related to intelligence analysis systems and to drone swarms, as well as more mundane ‘back office’ projects. It covers major multi-billion pound projects stretching over several decades, such as the Future Combat Air System (which includes the proposed new Tempest aircraft), new spy satellites, uncrewed submarines, and applications for using AI in everyday tasks such as predictive equipment maintenance, a repository of research reports, and a ‘virtual agent’ for administration.
At the core of the list, however, are schemes to advance the development of AI-powered autonomous systems for use on the battlefield. Many of these are based around the use of drones as a platform – usually aerial systems, but also maritime drones and autonomous ground vehicles. A number of the projects on the list relate to the computerised identification of military targets through analysis of data from video feeds, satellite imagery, radar, and other sources. Using artificial intelligence and machine learning for target identification is an important step towards the development of autonomous weapon systems – ‘killer robots’ – which are able to operate without human control. Even when under nominal human control, computer-directed weapons pose a high risk of civilian casualties for a number of reasons, including the rapid speed at which they operate and the difficulty of understanding the often opaque ways in which they make decisions.
The government claims it “does not possess fully autonomous weapons and has no intention of developing them”. However, the UK has consistently declined to support proposals put forward at the United Nations to ban such weapons.
Among the initiatives on the list are the following projects, all of which are focused on developing technologies with potential for use in autonomous weapon systems.