
The use of artificial intelligence (AI) in warfare through the development of AI-powered autonomous weapon systems – ‘killer robots’ – “is one of the most controversial uses of AI today”, according to a new report by an influential House of Lords committee.
The committee, which spent ten months investigating the application of AI to weapon systems and probing the UK government’s plans to develop military AI systems, concluded that the risks from autonomous weapons are such that the government “must ensure that human control is consistently embedded at all stages of a system’s lifecycle, from design to deployment”.
Echoing concerns that Drone Wars UK has repeatedly raised, the Lords found that the stated aspiration of the Ministry of Defence (MoD) to be “ambitious, safe, responsible” in its use of AI “has not lived up to reality”, and that although the MoD has claimed that transparency and challenge are central to its approach, “we have not found this yet to be the case”.
The cross-party House of Lords Committee on AI in Weapon Systems was set up in January 2023 at the suggestion of Liberal Democrat peer Lord Clement-Jones, and started taking evidence in March. The committee heard oral evidence from 35 witnesses and received nearly 70 written evidence submissions, including evidence from Drone Wars UK.
The committee’s report is entitled ‘Proceed with Caution: Artificial Intelligence in Weapon Systems’, and ‘proceed with caution’ is a fair summary of its recommendations. The panel was drawn entirely from the core of the UK’s political and military establishment, and at times some members appeared to have difficulty grasping the technical concepts underpinning autonomous weapons. Under the circumstances the committee was never remotely likely to recommend against committing to the development of new weapon systems based on advanced technology, and in many respects its report provides a road map setting out the committee’s views on how the MoD should proceed in integrating AI into weapon systems and build public support for doing so.
Nevertheless, the committee has taken a sceptical view of the advantages claimed for autonomous weapon systems; has recognised the very real risks that they pose; and has proposed safeguards to mitigate the worst of the risks, alongside a robust call for the government to “lead by example in international engagement on regulation of AWS [autonomous weapon systems]”. Despite hearing from witnesses who argued that autonomous weapons “could be faster, more accurate and more resilient than existing weapon systems, could limit the casualties of war, and could protect ‘our people from harm by automating “dirty and dangerous” tasks’”, the committee was apparently unconvinced, concluding that “although a balance sheet of benefits and risks can be drawn, determining the net effect of AWS is difficult” – and that “this was acknowledged by the Ministry of Defence”.
Perhaps the most important recommendation in the committee’s report relates to human control over autonomous weapons. The committee found that:
The Government should ensure human control at all stages of an AWS’s lifecycle. Much of the concern about AWS is focused on systems in which the autonomy is enabled by AI technologies, with an AI system undertaking analysis on information obtained from sensors. But it is essential to have human control over the deployment of the system both to ensure human moral agency and legal compliance. This must be buttressed by our absolute national commitment to the requirements of international humanitarian law.