Proceed with caution: Lords warn over development of military AI and killer robots


The use of artificial intelligence (AI) for the purposes of warfare through the development of AI-powered autonomous weapon systems – ‘killer robots’ –  “is one of the most controversial uses of AI today”, according to a new report by an influential House of Lords Committee.

The committee, which spent ten months investigating the application of AI to weapon systems and probing the UK government’s plans to develop military AI systems, concluded that the risks from autonomous weapons are such that the government “must ensure that human control is consistently embedded at all stages of a system’s lifecycle, from design to deployment”.

Echoing concerns which Drone Wars UK has repeatedly raised, the Lords found that the stated aspiration of the Ministry of Defence (MoD) to be “ambitious, safe, responsible” in its use of AI “has not lived up to reality”, and that although MoD has claimed that transparency and challenge are central to its approach, “we have not found this yet to be the case”.

The cross-party House of Lords Committee on AI in Weapon Systems was set up in January 2023 at the suggestion of Liberal Democrat peer Lord Clement-Jones, and started taking evidence in March.  The committee heard oral evidence from 35 witnesses and received nearly 70 written evidence submissions, including evidence from Drone Wars UK.

The committee’s report is entitled ‘Proceed with Caution: Artificial Intelligence in Weapon Systems’ and ‘proceed with caution’ gives a fair summary of its recommendations.  The panel was drawn entirely from the core of the UK’s political and military establishment, and at times some members appeared to have difficulty in grasping the technical concepts underpinning the technologies behind autonomous weapons.  Under the circumstances the committee was never remotely likely to recommend that the government should not commit to the development of new weapons systems based on advanced technology, and in many respects its report provides a road-map setting out the committee’s views on how the MoD should go ahead in integrating AI into weapons systems and build public support for doing this.

Nevertheless, the committee has taken a sceptical view of the advantages claimed for autonomous weapons systems; has recognised the very real risks that they pose; and has proposed safeguards to mitigate the worst of the risks, alongside a robust call for the government to “lead by example in international engagement on regulation of AWS [autonomous weapon systems]”.  Despite hearing from witnesses who argued that autonomous weapons “could be faster, more accurate and more resilient than existing weapon systems, could limit the casualties of war, and could protect ‘our people from harm by automating “dirty and dangerous” tasks’”, the committee was apparently unconvinced, concluding that “although a balance sheet of benefits and risks can be drawn, determining the net effect of AWS is difficult” – and that “this was acknowledged by the Ministry of Defence”.

Perhaps the most important recommendation in the committee’s report relates to human control over autonomous weapons.  The committee found that:

The Government should ensure human control at all stages of an AWS’s lifecycle. Much of the concern about AWS is focused on systems in which the autonomy is enabled by AI technologies, with an AI system undertaking analysis on information obtained from sensors. But it is essential to have human control over the deployment of the system both to ensure human moral agency and legal compliance. This must be buttressed by our absolute national commitment to the requirements of international humanitarian law.


MoD AI projects list shows UK is developing technology that allows autonomous drones to kill

Omniscient graphic: ‘High Level Decision Making Module’ which integrates sensor information using deep probabilistic algorithms to detect, classify, and identify targets, threats, and their behaviours. Source: Roke

Artificial intelligence (AI) projects that could help to unleash new lethal weapons systems requiring little or no human control are being undertaken by the Ministry of Defence (MoD), according to information released to Drone Wars UK through a Freedom of Information Act request.

The development of lethal autonomous military systems – sometimes described as ‘killer robots’ – is deeply contentious and raises major ethical and human rights issues.  Last year the MoD published its Defence Artificial Intelligence Strategy setting out how it intends to adopt AI technology in its activities.

Drone Wars UK asked the MoD to provide it with the list of “over 200 AI-related R&D programmes” which the Strategy document stated the MoD was working on.  Details of these programmes were not given in the Strategy itself, and the MoD evaded questions from Parliamentarians who asked for more details of its AI activities.

Although the Defence Artificial Intelligence Strategy claimed that over 200 programmes are underway, only 73 are shown on the list provided to Drone Wars UK.  Release of the names of some projects was refused on defence, security and/or national security grounds.

However, MoD conceded that a list of “over 200” projects was never held when the strategy document was prepared in 2022, explaining that “our assessment of AI-related projects and programmes drew on a data collection exercise that was undertaken in 2019 that identified approximately 140 activities underway across the Front-Line Commands, Defence Science and Technology Laboratory (Dstl), Defence Equipment and Support (DE&S) and other organisations”.  The assessment that there were at least 200 programmes in total “was based on our understanding of the totality of activity underway across the department at the time”.

The list released includes programmes for all three armed forces, including a number of projects related to intelligence analysis systems and to drone swarms, as well as more mundane ‘back office’ projects.  It covers major multi-billion pound projects stretching over several decades, such as the Future Combat Air System (which includes the proposed new Tempest aircraft), new spy satellites, uncrewed submarines, and applications for using AI in everyday tasks such as predictive equipment maintenance, a repository of research reports, and a ‘virtual agent’ for administration.

However, the core of the list is a scheme to advance the development of AI-powered autonomous systems for use on the battlefield.  Many of these are based around the use of drones as a platform – usually aerial systems, but also maritime drones and autonomous ground vehicles.  A number of the projects on the list relate to the computerised identification of military targets by analysis of data from video feeds, satellite imagery, radar, and other sources.  Using artificial intelligence / machine learning for target identification is an important step towards the development of autonomous weapon systems – ‘killer robots’ – which are able to operate without human control.  Even when they are under nominal human control, computer-directed weapons pose a high risk of civilian casualties for a number of reasons, including the speed at which they operate and the difficulty of understanding the opaque ways in which they make decisions.

The government claims it “does not possess fully autonomous weapons and has no intention of developing them”. However, the UK has consistently declined to support proposals put forward at the United Nations to ban them.

Among the initiatives on the list are the following projects.  All of them are focused on developing technologies that have potential for use in autonomous weapon systems.

The UK and the Ukraine War: Drones vs Diplomacy

Custom-built British ‘suicide-drone’ reportedly bound for Ukraine.     Pic: QinetiQ

The UK is to supply Ukraine with “hundreds of new long-range attack drones”, a government spokesperson told the media on Monday, as Prime Minister Rishi Sunak welcomed President Volodymyr Zelenskiy to Britain for a brief visit.

“Today the prime minister will confirm the further UK provision of hundreds of air defence missiles and further unmanned aerial systems including hundreds of new long-range attack drones with a range of over 200km. These will all be delivered over the coming months as Ukraine prepares to intensify its resistance to the ongoing Russian invasion.”

It is not at all clear what these ‘long-range attack drones’ are, although there have been some reports of the UK funding the development of a ‘suicide-drone’ to supply to Ukraine.

This latest news comes on top of the announcement in the last few weeks that the UK is supplying Storm Shadow cruise missiles to Ukraine following the export of UK Challenger 2 tanks.

Some will no doubt welcome the supply of attack drones and cruise missiles to Ukraine as a counter to Russia’s military aggression. It goes without saying that Russia’s invasion of Ukraine and continuing use of lethal force is unlawful and must be resisted.  However, there are real questions to be asked about how such a strategy of supplying ever more lethal military hardware risks expanding rather than ending this war. It is becoming increasingly easy to see the UK and other NATO countries being drawn more directly into an armed conflict with Russia.  Any such escalation would be disastrous for the people of Ukraine and the wider region, as well as seriously risking a catastrophic nuclear event.

Rather than escalating the conflict by supplying ever more lethal arms, the UK should be urging negotiations to end the war, as it is inevitable that these will have to happen at some point.  While some western military analysts urge that the war should be prolonged in order to weaken Russia in the long term, Ukraine and its people suffer.

Negotiations are of course a matter for the Ukrainian people, but it should be remembered that a settlement was seemingly very close last March, with a Turkish-backed plan for Russian forces to withdraw to their pre-24 February positions without Ukraine giving up its claim to any of its territory.  Unfortunately the moment passed, with suggestions that the then British PM Boris Johnson personally lobbied Zelenskiy to reject the plan (for more on this see Ukraine One Year On: Time to Negotiate Peace).

While it is easy for the current PM to grab a few headlines and play to the crowd by supplying lethal attack drones to Ukraine, the harder but more rewarding long-term work of diplomacy in order to end this awful war is being neglected.

Fine words, Few assurances: Assessing new MoD policy on the military use of Artificial Intelligence

Drone Wars UK is today publishing a short paper analysing the UK’s approach to the ethical issues raised by the use of artificial intelligence (AI) for military purposes in two recent policy documents.  The first part of the paper reviews and critiques the Ministry of Defence’s (MoD’s) Defence Artificial Intelligence Strategy, published in June 2022, while the second part considers the UK’s commitment to ‘responsible’ military artificial intelligence capabilities, presented in the document ‘Ambitious, Safe, Responsible’, published alongside the strategy document.

Once the realm of science fiction, the technology needed to build autonomous weapon systems is currently under development in a number of nations, including the United Kingdom.  Due to recent advances in unmanned aircraft technology, it is likely that the first autonomous weapons will be drone-based systems.

Drone Wars UK believes that the development and deployment of AI-enabled autonomous weapons would give rise to a number of grave risks, primarily the loss of human values on the battlefield.  Giving machines the ability to take life crosses a key ethical and legal Rubicon.  Lethal autonomous drones would simply lack human judgment and other qualities that are necessary to make complex ethical choices on a dynamic battlefield, to distinguish adequately between soldiers and civilians, and to evaluate the proportionality of an attack.

In the short term it is likely that the military applications of autonomous technology will be in low-risk areas, such as logistics and the supply chain, where, proponents argue, there are cost advantages and minimal implications for combat situations.  These systems are likely to be closely supervised by human operators.  In the longer term, as technology advances and AI becomes more sophisticated, autonomous technology is increasingly likely to become weaponised and the degree of human supervision can be expected to drop.

The real issue perhaps is not the development of autonomy itself but the way in which this milestone in technological development is controlled and used by humans.  Autonomy raises a wide range of ethical, legal, moral and political issues relating to human judgement, intentions, and responsibilities.  These questions remain largely unresolved and there should therefore be deep disquiet about the rapid advance towards developing autonomous weapons systems.

Technology and the future of UK Foreign Policy – Our submission to the Foreign Affairs Committee Inquiry


In a timely and welcome move, the House of Commons Foreign Affairs Select Committee has recently launched an investigation into ‘Tech and the future of UK foreign policy‘.  Recognising that new and emerging technologies are fundamentally altering the nature of international relations, and noting the rapidly growing influence of private technology companies, the Committee’s inquiry intends to focus on how the government, and particularly the Foreign, Commonwealth, and Development Office (FCDO), should respond to the opportunities and challenges presented by new technologies.

A broad selection of stakeholders have already provided written evidence to the Committee, ranging from big technology companies such as Microsoft, Oracle, and BAE Systems, to academics and industry groups with specialist interests in the field.  Non-government organisations, including ourselves, as well as the International Committee of the Red Cross, Amnesty International UK, and the UK Campaign to Stop Killer Robots have also provided evidence.

Not surprisingly, submissions from industry urge the government to support and push ahead with the development of new technologies, with Microsoft insisting that the UK “must move more quickly to advance broad-based technology innovation”, which will require “an even closer partnership between the government and the tech sector”.  BAE Systems calls for “a united front [which] can be presented in promoting the UK’s overseas interests across both the public and private sectors”.  Both BAE and Microsoft see roles for new technology in the military: BAE points out that “technology is also reshaping national security”, while Microsoft calls for “cooperation with the private sector in the context of NATO”.

No Space for Peace in the Integrated Security Review

The UK government sees space technology as being of fundamental importance to global power projection.

In March, the UK government published its ‘Integrated Review of Security, Defence, Development and Foreign Policy’.  Titled ‘Global Britain in a Competitive Age’, it describes the government’s vision for the UK’s role in the world over the next decade.

There has been a lot of discussion of various parts of the Review – especially the increases in the UK’s nuclear arsenal and military spending – but not so much about the parts that deal with UK military space policy.  This is also an important part of the Review, and one that needs closer examination.

Boris Johnson makes an interesting comment in the Foreword:

“…we will continue to defend the integrity of our nation against state threats, whether in the form of illicit finance or coercive economic measures, disinformation, cyber-attacks, electoral interference or even … the use of chemical or other weapons of mass destruction.

Several phrases in this passage relate to what has become known as ‘hybrid warfare’: operations carried out in a ‘grey zone’ between war and peace which use political warfare, conventional warfare, cyberwarfare and other subversive influencing methods. This form of covert warfare is now a common component of security strategies.

An Integrated Strategy Serving Military and Commercial Interests

The Review stresses the perceived need to develop “a dynamic space programme”, to be underwritten by “the credibility of our deterrent and our ability to project power”. This is to be partially achieved by the development of “an integrated space strategy which brings together military and civil space policy”.

UKSpace, the trade association of the British space industry, and the RAF have established a Commercial Integration Cell (CIC) at the MoD’s Space Operations Centre (SpOC) in High Wycombe to work on programmes that jointly serve commercial and military interests.

This civil/military collaboration has already begun – in 2008 the government awarded Surrey Satellite Technology Ltd (SSTL) over £4 million to develop Carbonite 2, a small, low-orbit satellite launched in 2018 to provide high-resolution reconnaissance for intelligence gathering for the MoD. This evolved into Artemis, a project led by the RAF with Airbus, its subsidiary SSTL, Raytheon, the US government and Virgin Orbit as partners. In addition, in 2019 the MoD announced a £30 million military space programme for the development of small satellites, and US aerospace giant Lockheed-Martin received £23.5 million to help develop spaceports in the UK. Other defence contractors, such as Raytheon and BAE Systems, are also keen to become more involved.