Proceed in Harmony: The Government replies to the Lords on AI in Weapon Systems


Last December a select committee of the House of Lords published ‘Proceed with Caution’: a report setting out the findings of a year-long investigation into the use of artificial intelligence (AI) in weapon systems.

Members of the Lords committee were drawn entirely from the core of the UK’s political and security establishment, and their report was hardly radical in its conclusions.  Nevertheless, it made a number of useful recommendations and concluded that the risks from autonomous weapons are such that the government “must ensure that human control is consistently embedded at all stages of a system’s lifecycle, from design to deployment”.  The Lords found that Ministry of Defence (MoD) claims to be “ambitious, safe, responsible” in its use of AI had “not lived up to reality”.

The government subsequently pledged to reply to the Lords report, and on 21 February published its formal response.  Perhaps the best way of summarising the tone of the response is to quote from its concluding paragraph: “‘Proceed with caution’, the overall message of this [Lords] report, mirrors the MoD’s approach to AI adoption.”  There is little new in the government response and nothing in it will come as a surprise to observers and analysts of UK government policy on AI and autonomous technologies.  The response merely outlines how the government intends to follow the course of action it had already planned to take, reiterating the substance of past policy statements such as the Defence Artificial Intelligence Strategy and puffing up recent MoD activity and achievements in the military AI field.

As might be imagined, the response takes a supportive approach to recommendations from the Lords which are aligned with its own agenda, such as developing high-quality data sets, improving MoD’s AI procurement arrangements, and undertaking research into potential future AI capabilities.  On the positive side, it is encouraging to see that in some areas concerns over the risks and limitations of AI technologies are highlighted, for example in the need for review and rigorous testing of new systems.  MoD acknowledges that rigorous testing would be required before an operator could be confident in an AI system’s use and effect, that current procedures, including the Article 36 weapons review process, will need to be adapted and updated, and that changes in the operational environment may require weapon systems to be retested.

The response also reveals that the government is working on a Joint Service Publication covering all the armed forces to give more concrete directions and guidance on implementing MoD’s AI ethical principles.  The document, ‘Dependable AI in Defence’, will set out the governance, accountabilities, processes and reporting mechanisms needed to translate ethical policies into tangible actions and procedures.  Drone Wars UK and other civil society organisations have long called for MoD to formulate such guidance as a priority.

In some areas the MoD has relatively little power to meet the committee’s recommendations, such as in adjusting government pay scales to match market rates and attract qualified staff to work on MoD AI projects.  Here the rejoinder is little more than flannel, mentioning that “a range of steps” are being taken “to make Defence AI an attractive and aspirational choice.”

In other respects the Lords have challenged MoD’s approach more substantially, and in such cases these challenges are rejected in the government response.  This is the case with the Lords’ recommendation that the government should adopt a definition for autonomous weapon systems (AWS).  The section of the response dealing with this point lays bare the fact that the government’s priority “is to maximise our military capability in the face of growing threats”.  A rather unconvincing assertion that “the irresponsible and unethical behaviours and outcomes about which the Committee is rightly concerned are already prohibited under existing legal mechanisms” is followed by the real reason for the government’s opposition: “there is a strong tendency in the ongoing debate about autonomous weapons to assert that any official AWS definition should serve as the starting point for a new legal instrument prohibiting certain types of systems”.  Any international treaty which would outlaw autonomous weapon systems “represents a threat to UK Defence interests”, the government argues.  The argument ends with a side-swipe at Russia and an attempt to shut down further debate by claiming that the debate is taking place “at the worst possible time, given Russia’s action in Ukraine and a general increase in bellicosity from potential adversaries.”  This basically seems to be saying that by adopting a definition for autonomous weapon systems the UK would be making itself more vulnerable to Russian military action.  Really?

The Lords were keen for the government to become a leader in the responsible development and governance of AI, and one of their recommendations asked the government to set out its plans in this respect.  The response from the MoD outlines a number of international initiatives in which the UK has participated, largely linked to programmes led by partners from NATO, the Five Eyes alliance, and other US allies – and indeed, acknowledges that these dialogues have taken place with “likeminded nations”.  Convincing those who already share one’s views takes little effort and is likely to pay limited dividends.  What is much harder, but essential if genuine change is to result, is engaging with those who hold differing views.  The UK has clearly made no effort to do this on the international stage.  The response is peppered with judgements such as “we know some adversaries may seek to misuse advanced AI technologies, deploying them in a manner which is malign, unsafe and unethical”, indicating that the UK intends to take an antagonistic approach to these adversaries and has no interest in engaging in dialogue with them.

Some of the other statements in the response are also open to challenge.  The introduction to the paper claims that: “The MoD is actively engaging with a very wide range of experts (including technologists, ethicists, legal advisers and civil society stakeholders) to understand the issues and concerns associated with the use of AI in weapons, and to develop appropriate policies and control frameworks.”  Later on, there is an acknowledgement that “It is not widespread practice within Defence to engage with the public at large (i.e. by way of public consultation or polling)”.  MoD is willing to engage with experts and the technical community who largely support its position, but opinions from anyone else – especially if they are critical of current policy – will not be sought and will be politely ignored if they are offered.

The introduction also states that: “We are working with allies and partners through international forums to develop norms and standards for military AI and to ensure that any illegal, unsafe or unethical use of these technologies is identified, attributed and held to account”.  This will come as a surprise to those concerned about the risks to civilians during Israel’s ongoing invasion of Gaza, where the International Court of Justice has called for action to be taken to prevent genocidal acts.  It has been widely reported that Israel – explicitly identified in the government response as one of the UK’s ‘partners’ – is using machine-learning based systems to identify targets.  Rather than hold Israel to account for its use of targeting and warfighting methods which have led to inflated levels of civilian casualties, the UK is wholeheartedly supporting its war effort.

Gaps like this between the government policies set out in the response to the Lords committee and the reality of what is being experienced during the attack on Gaza expose the response for what it really is – a public relations document.  The ultimate purpose of the response is not to convince members of the House of Lords that the government is listening to their concerns, but to ‘sell’ existing MoD policy and investment in military AI to the media and the public.  There is no intention whatsoever to change course to address concerns over where AI technology may be taking us.