The third evidence session for the House of Lords Select Committee on Artificial Intelligence (AI) in weapon systems heard views on the development and impact of autonomous weapons from the perspective of the military technology sector.
Witnesses giving evidence at the session were former RAF officer and Ministry of Defence (MoD) advisor Dr Keith Dear, now at Fujitsu Defence and Security; James Black of RAND Europe; Kenneth Payne of King’s College London and the MoD’s Defence Academy at Shrivenham; and Courtney Bowman of US tech company Palantir Technologies. Palantir specialises in the development of AI technologies for surveillance and military purposes and has been described as a “pro-military arm of Silicon Valley”. The company boasts that its software is “responsible for most of the targeting in Ukraine”, supporting the Ukrainian military in identifying tanks, artillery, and other targets in the war against Russia, and its Chief Technology Officer recently told the US Senate’s Armed Services Committee: “If we want to effectively deter those that threaten US interests, we must spend at least 5% of our budget on capabilities that will terrify our adversaries”.
Not surprisingly, the witnesses tended to take a pro-industry view towards the development of AI and autonomous weapon systems, arguing that incentives, not regulation, were required to encourage technology companies to engage with concerns over ethics and impacts, and taking the fatalistic view that there is no way of stopping the AI juggernaut. Nevertheless, towards the end of the session an interesting discussion on the hazards of arms racing took place, with the witnesses suggesting some positive steps which could help to reduce such a risk.
Arms racing, and the undermining of global peace and security that comes with it, becomes a risk when qualitatively new technologies promising clear military advantages seem close at hand. China, Russia, and the United States of America are already investing heavily in robotic and artificial intelligence technologies with the aim of exploiting their military potential. Secrecy over military technology, and uncertainty and suspicion over the capabilities that a rival may have, further accelerate arms races.
Competition between these rivals to gain an advantage over each other in autonomous technology and its military capabilities already meets the definition of an arms race – ‘the participation of two or more nation-states in apparently competitive or interactive increases in quantity or quality of war material and/or persons under arms’ – and has the potential to escalate. This competition has no absolute end goal: merely the relative goal of staying ahead of other competitors. Should one of these states, or another technologically advanced state, develop and deploy autonomous weapon systems in the field, it is very likely that others would follow suit. The ensuing race can be expected to be highly destabilising and dangerous.
An arms race over autonomous military technology by the world’s major powers is likely to have serious consequences. These include ‘copycatting’ by other nations, resulting in the proliferation of autonomous technology, and its use by criminal and terrorist groups for malicious purposes. We have already seen this happening in the development of conventional drones, where nations such as Iran, India, Pakistan, Turkey, and others – as well as non-state actors such as Hezbollah and ISIS – have followed the world’s leading military nations in developing surveillance and armed drones. The ultimate result of such proliferation is the creation of an international norm where such conduct is eventually seen as normal and acceptable. Given that much of artificial intelligence technology is dual use, and in some cases available from open sources, it will be extremely difficult to control proliferation in this field.
These concerns were raised at the committee hearing by Lord Hamilton, who asked the witnesses whether an AI arms race was pressuring the government to develop autonomous weapon systems, and whether they agreed that, although the Ministry of Defence’s public position is that it will have nothing to do with such weapons, in practice the UK is going ahead and developing them anyway.
Kenneth Payne accepted that “there is an arms race afoot”, acknowledging concerns that framing matters that way will accelerate and exacerbate tensions between states when it comes to developing AI, but pointed out that: “We kind of project our fears on to China a little bit. The view from the other side of the fence, evidently, is quite similar. China feels that it has shortcomings in AI, and in science and technology more broadly, relative to the West, so we should be cognisant of that”. He also noted that Russia’s invasion of Ukraine has shown that “the Russian military was not as mighty as we perhaps feared”, and that “we would do well to keep that in mind and guard against the temptation to demonise people on the other side of the arms race.”
James Black drew parallels with the space sector, where there is similar competition for superiority, and pointed to positive steps the UK is taking to cool matters down by placing a moratorium on its own development of certain military capabilities which are neither responsible nor ethical, and by trying to position itself as a convening power between the US and China. The UK is not as big a player as the US and China, and is therefore less open to accusations that it is trying to defend its own interests, but at the same time is “well enough informed and has enough capability to be credible. It can also draw on its broader soft power capabilities and lineage, legal expertise, financial expertise and these sorts of things to act as a broker for those middle nations that are not the US or China but are still going to vote in the UN”. He argued that the UK could use this leverage to act as a convening power and play an important role in shaping international norms on autonomous weapons. Courtney Bowman also felt that “the arms race should not be about winning just in terms of rhetoric and hype”, recognising that “there is also a race towards ethical, responsible and trustworthy deployment of these technologies”, and pointed out that the UK is well positioned to help drive these considerations.
Although not the most radical of suggestions, these proposals carry particular weight because they have been put forward by representatives of the IT and military technology sector. Civil society would certainly also support such measures, indicating an across-the-board consensus that the government should engage constructively at the international level to shape norms to prevent an arms race in military AI. The Lords Select Committee would do well to remember this when making its recommendations to government.