The arms race towards autonomous weapons – industry acknowledges concerns

(L to R) Courtney Bowman, Palantir Technologies UK; Dr Kenneth Payne, Professor of Strategy, King’s College London; James Black, Assistant Director of the Defence and Security Research Group, RAND Europe; Keith Dear, Director of Artificial Intelligence Innovation, Fujitsu

The third evidence session for the House of Lords Select Committee on Artificial Intelligence (AI) in weapon systems heard views on the development and impact of autonomous weapons from the perspective of the military technology sector.

Witnesses giving evidence at the session were former RAF officer and Ministry of Defence (MoD) advisor Dr Keith Dear, now at Fujitsu Defence and Security; James Black of RAND Europe; Kenneth Payne of King’s College London and the MoD’s Defence Academy at Shrivenham; and Courtney Bowman of US tech company Palantir Technologies.  Palantir specialises in the development of AI technologies for surveillance and military purposes and has been described as a “pro-military arm of Silicon Valley”.  The company boasts that its software is “responsible for most of the targeting in Ukraine”, supporting the Ukrainian military in identifying tanks, artillery, and other targets in the war against Russia, and its Chief Technology Officer recently told the US Senate’s Armed Services Committee that: “If we want to effectively deter those that threaten US interests, we must spend at least 5% of our budget on capabilities that will terrify our adversaries”.

Not surprisingly, the witnesses tended to take a pro-industry view towards the development of AI and autonomous weapon systems, arguing that incentives, not regulation, were required to encourage technology companies to engage with concerns over ethics and impacts, and taking the fatalistic view that there is no way of stopping the AI juggernaut.  Nevertheless, towards the end of the session an interesting discussion on the hazards of arms racing took place, with the witnesses suggesting some positive steps which could help to reduce such a risk.

Arms racing, with its undermining of global peace and security, becomes a risk when qualitatively new technologies promising clear military advantages seem close at hand.  China, Russia, and the United States of America are already investing heavily in robotic and artificial intelligence technologies with the aim of exploiting their military potential.  Secrecy over military technology, together with uncertainty and suspicion over the capabilities that a rival may have, further accelerates arms races.

Competition between these rivals to gain an advantage over each other in autonomous technology and its military capabilities already meets the definition of an arms race – ‘the participation of two or more nation-states in apparently competitive or interactive increases in quantity or quality of war material and/or persons under arms’ – and has the potential to escalate.  This competition has no absolute end goal: merely the relative goal of staying ahead of other competitors. Should one of these states, or another technologically advanced state, develop and deploy autonomous weapon systems in the field, it is very likely that others would follow suit. The ensuing race can be expected to be highly destabilising and dangerous. Read more

The UK, accountability for civilian harm, and autonomous weapon systems

Second evidence session

The second public session of the House of Lords inquiry into artificial intelligence (AI) in weapon systems took place at the end of March.  The session examined how the development and deployment of autonomous weapons might impact upon the UK’s foreign policy and its position on the global stage and heard evidence from Yasmin Afina, Research Associate at Chatham House, Vincent Boulanin, Director of Governance of Artificial Intelligence at the Stockholm International Peace Research Institute, and Charles Ovink, Political Affairs Officer at the United Nations Office for Disarmament Affairs.

Among the wide range of issues covered in the two-hour session was the question of who could be held accountable if human rights abuses were committed by a weapon system acting autonomously.  A revealing exchange took place between Lord Houghton, a former Chief of Defence Staff (the most senior officer of the UK’s armed forces), and Charles Ovink.  Houghton asked whether it might be possible for an autonomous weapon system to comply with the laws of war under certain circumstances (at 11.11 in the video of the session):

“If that fully autonomous system has been tested and approved in such a way that it doesn’t rely on a black box technology, that constant evaluation has proved that the risk of it non-complying with the parameters of international humanitarian law are accepted, that then there is a delegation effectively from a human to a machine, why is that not then compliant, or why would you say that that should be prohibited?”

This is, of course, a highly loaded question that assumes that a variety of improbable circumstances would apply, and then presents a best-case scenario as the norm.  Ovink carefully pointed out that any decision on whether such a system should be prohibited would be for United Nations member states to decide, but that the question posed ‘a big if’, and it was not clear what kind of test environment could mimic a real-life warzone with civilians present and guarantee that the laws of war would be followed.  Even if this were the case, there would still need to be a human accountable for any civilian deaths that might occur.  Read more

Lords Committee on AI in Weapon Systems: AI harms, humans vs computers, and unethical Russians

First evidence session

A special investigation set up by the House of Lords is now taking evidence on the development, use and regulation of artificial intelligence (AI) in weapon systems.  Chaired by crossbench peer Lord Lisvane, a former Clerk of the House of Commons, a stand-alone Select Committee is considering the utility and risks arising from military uses of AI.

The committee is seeking written evidence from members of the public and interested parties, and recently conducted the first of its oral evidence sessions.  Three specialists in international law, Noam Lubell of the University of Essex, Georgia Hinds, Legal Advisor at the International Committee of the Red Cross (ICRC), and Daragh Murray of Queen Mary University of London, answered a variety of questions about whether autonomous weapon systems might be able to comply with international law and how they could be controlled at the international level.

One of the more interesting issues raised during the discussion was the point that, regardless of military uses, AI has the potential to wreak a broad range of harms across society, and there is a need to address this concern rather than racing on blindly with the development and roll-out of ever more powerful AI systems.  This is a matter which is beginning to attract wider attention.  Last month the Future of Life Institute published an open letter calling for all AI labs to immediately pause the training of AI systems more powerful than GPT-4 for at least six months.  Over 30,000 researchers and tech sector workers have signed the letter to date, including Stuart Russell, Steve Wozniak, Elon Musk, and Yuval Noah Harari.

Leaving aside whether six months could be long enough to resolve issues around AI safety, there is an important question to be answered here.  There are already numerous examples of existing computerised and AI systems causing harm, regardless of what the future might hold.  Why, then, are we racing forward in this field?  Has the combination of tech multinationals and unrestrained capitalism become such an unstoppable juggernaut that humanity is literally no longer able to control where the forces we have created are taking us?  If not, then why won’t governments intervene to put the brakes on the development and use of AI, and what interests are they actually working to protect?  This is unlikely to be a line of inquiry the Lords Committee will be pursuing.  Read more

None too clever? Military applications of artificial intelligence

Drone Wars UK’s latest briefing looks at where and how artificial intelligence is currently being applied in the military context and considers the legal and ethical, operational and strategic risks posed.


Artificial Intelligence (AI), automated decision making, and autonomous technologies have already become common in everyday life and offer immense opportunities to dramatically improve society.  Smartphones, internet search engines, AI personal assistants, and self-driving cars are among the many products and services that rely on AI to function.  However, like all technologies, AI also poses risks if it is poorly understood, unregulated, or used in inappropriate or dangerous ways.

In current AI applications, machines perform a specific task for a specific purpose.  The umbrella term ‘computational methods’ may be a better way of describing such systems, which fall far short of human intelligence but have wider problem-solving capabilities than conventional software.  Hypothetically, AI may eventually be able to perform a range of cognitive functions, respond to a wide variety of input data, and understand and solve any problem that a human brain can.  Although this is a goal of some AI research programmes, it remains a distant prospect.

AI does not operate in isolation, but functions as a ‘backbone’ in a broader system to help the system achieve its purpose.  Users do not ‘buy’ the AI itself; they buy products and services that use AI or upgrade a legacy system with new AI technology.  Autonomous systems, which are machines able to execute a task without human input, rely on artificial intelligence computing systems to interpret information from sensors and then signal actuators, such as motors, pumps, or weapons, to cause an impact on the environment around the machine.  Read more
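The sense-interpret-act pipeline described in the briefing can be sketched in a few lines of Python. This is purely illustrative: the function names and the simple threshold rule are hypothetical stand-ins for the trained models, sensors, and actuator interfaces a real autonomous system would use.

```python
# Minimal sketch of an autonomous system's control loop:
# sensors -> AI interpretation -> actuator command.
# All names and the threshold rule are illustrative, not a real API.

def interpret(readings):
    """Stand-in for the AI component: classify raw sensor readings.

    A trivial threshold rule plays the role that a trained model
    would play in a real system.
    """
    return "obstacle" if max(readings) > 0.8 else "clear"

def decide(state):
    """Map the interpreted state to an actuator command."""
    return {"obstacle": "stop", "clear": "advance"}[state]

def control_step(readings):
    """One pass of the loop: sense -> interpret -> decide -> act."""
    return decide(interpret(readings))

# Two simulated cycles: a strong sensor return is read as an obstacle.
print(control_step([0.2, 0.9, 0.1]))  # -> stop
print(control_step([0.1, 0.3, 0.2]))  # -> advance
```

The point the sketch makes is the one the briefing makes: the AI sits in the middle of a larger system, and the quality of its interpretation step determines what the actuators are told to do.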

Technology and the future of UK Foreign Policy – Our submission to the Foreign Affairs Committee Inquiry


In a timely and welcome move, the House of Commons Foreign Affairs Select Committee has recently launched an investigation into ‘Tech and the future of UK foreign policy’.  Recognising that new and emerging technologies are fundamentally altering the nature of international relations, and that private technology companies have a rapidly growing influence, the Committee’s inquiry intends to focus on how the government, and particularly the Foreign, Commonwealth and Development Office (FCDO), should respond to the opportunities and challenges presented by new technologies.

A broad selection of stakeholders have already provided written evidence to the Committee, ranging from big technology companies such as Microsoft, Oracle, and BAE Systems, to academics and industry groups with specialist interests in the field.  Non-government organisations, including ourselves, as well as the International Committee of the Red Cross, Amnesty International UK, and the UK Campaign to Stop Killer Robots have also provided evidence.

Not surprisingly, submissions from industry urge the government to support and push ahead with the development of new technologies, with Microsoft insisting that the UK “must move more quickly to advance broad-based technology innovation”, which will require “an even closer partnership between the government and the tech sector”.  BAE Systems calls for “a united front [which] can be presented in promoting the UK’s overseas interests across both the public and private sectors”.  Both BAE and Microsoft see roles for new technology in the military: BAE points out that “technology is also reshaping national security”, while Microsoft calls for “cooperation with the private sector in the context of NATO”. Read more

The iWars Survey: Mapping the IT sector’s involvement in developing autonomous weapons

A new survey by Drone Wars has begun the process of mapping the involvement of information technology corporations in military artificial intelligence (AI) and robotics programmes, an area of rapidly increasing focus for the military.  ‘Global Britain in a Competitive Age’, the recently published integrated review of security, defence, development, and foreign policy, highlighted the key roles that new military technologies will play in the government’s vision for the future of the armed forces and aspirations for the UK to become a “science superpower”.

Although the integrated review promised large amounts of public funding and support for research in these areas, co-operation from the technology sector will be essential in delivering ‘ready to use’ equipment and systems to the military.  Senior military figures are aware that ‘Silicon Valley’ is taking the lead in the development of autonomous systems for both civil and military use. Speaking at a NATO-organised conference aimed at fostering links between the armed forces and the private sector, General Sir Chris Deverell, the former Commander of Joint Forces Command, explained:

“The days of the military leading scientific and technological research and development have gone. The private sector is innovating at a blistering pace and it is important that we can look at developing trends and determine how they can be applied to defence and security.”

The Ministry of Defence is actively cultivating technology sector partners to work on its behalf through schemes like the Defence and Security Accelerator (DASA). However, views on co-operation with the military by those within the commercial technology sector are mixed. Over the past couple of years there have been regular reports of opposition by tech workers to their employers’ military contracts, including those at Microsoft and Google. Read more