Military AI: MoD’s timid approach to challenging ethical issues will not be enough to prevent harm

Papers released to Drone Wars UK by the Ministry of Defence (MoD) under the Freedom of Information Act reveal that progress in preparing ethical guidance for MoD staff working on military artificial intelligence (AI) projects is proceeding at a snail’s pace.  As a result, MoD’s much-vaunted AI strategy and ethical principles are at risk of failing as the department races ahead to develop AI as a key military technology.

Minutes of meetings of MoD’s AI Ethics Advisory Panel show that although officials have repeatedly stressed the need to focus on implementation of AI programmes, the ethical framework and guidelines needed to ensure that AI systems are safe and responsible are still only in draft form, and there is “not yet a distinct sense of a clear direction” as to how they will be developed.

The FOI papers also highlight concerns about the transparency of the panel’s work.  Independent members of the panel have repeatedly stressed the need for the panel to work in an open and transparent manner, yet MoD refuses to publish the terms of membership, meeting minutes, and reports prepared for the panel.  With the aim of remedying this situation, Drone Wars UK is publishing the panel documents released in response to our FOI request as part of this blog article (see pdf files at the end of the article).

The Ministry of Defence AI Ethics Advisory Panel

One of the aims of the Defence Artificial Intelligence Strategy, published in June 2022, was to set out MoD’s “clear commitment to lawful and ethical AI use in line with our core values”.  To help meet this aim MoD published a companion document, entitled ‘Ambitious, safe, responsible’, alongside the strategy to represent “a positive blueprint for effective, innovative and responsible AI adoption”.

‘Ambitious, safe, responsible’ had two main foundations: a set of ethical principles to guide MoD’s use of AI, and an Ethics Advisory Panel, described as “an informal advisory board”, to assist with policy relating to the safe and responsible development and use of AI.  The document stated that the panel had assisted in formulating the ethical principles and listed the members of the panel, who are drawn from within the Ministry of Defence and the military, and from industry, universities, and civil society.

The terms of reference for the panel were not published in the ‘Ambitious, safe, responsible’ document, but the FOI papers provided to Drone Wars UK show that it is tasked with advising on:

  • “The development, maintenance and application of a set of ethical principles for AI in Defence, which will demonstrate the MOD’s position and guide our approach to responsible AI across the department.
  • “A framework for implementing these principles and related policies / processes across Defence.
  • “Appropriate governance and decision-making processes to assure ethical outcomes in line with the department’s principles and policies”.

The ethical principles were published alongside the Defence AI Strategy, but more than two years after the panel first met – and despite a constant refrain at panel meetings on the need to focus on implementation – it has yet to make substantial progress on the second and third of these objectives.  An implementation framework and the associated policies, governance, and decision-making processes have yet to appear.  This appears in no way to be due to shortcomings on the part of the panel, whose members seem to have a keen appetite for their work, but rather is the result of slow progress by MoD.  In the meantime, work on the development of AI systems is proceeding at full speed in the absence of these key ethical tools.

The work of the panel

The first meeting of the panel, held in March 2021, was chaired by Stephen Lovegrove, the then Permanent Secretary at the Ministry of Defence.  The panel discussed MoD’s work to date on developing an AI ethics framework and the panel’s role and objectives.  The panel was to be a “permanent and ongoing source of scrutiny” and “should provide expert advice and challenge” to MoD, working through a regular quarterly meeting cycle.  Read more

Lords Committee on AI in Weapons Systems: AI harms, humans vs computers, and unethical Russians

A special inquiry set up by the House of Lords is now taking evidence on the development, use and regulation of artificial intelligence (AI) in weapon systems.  Chaired by crossbench peer Lord Lisvane, a former Clerk of the House of Commons, the stand-alone Select Committee is considering the utility and risks arising from military uses of AI.

The committee is seeking written evidence from members of the public and interested parties, and recently held the first of its oral evidence sessions.  Three specialists in international law – Noam Lubell of the University of Essex, Georgia Hinds, Legal Advisor at the International Committee of the Red Cross (ICRC), and Daragh Murray of Queen Mary University of London – answered a variety of questions about whether autonomous weapon systems could comply with international law and how they might be controlled at the international level.

One of the more interesting issues raised during the discussion was the point that, quite apart from its military uses, AI has the potential to cause a broad range of harms across society, and that there is a need to address this concern rather than racing on blindly with the development and roll-out of ever more powerful AI systems.  This is a matter which is beginning to attract wider attention.  Last month the Future of Life Institute published an open letter calling on all AI labs to immediately pause the training of AI systems more powerful than GPT-4 for at least six months.  Over 30,000 researchers and tech sector workers have signed the letter to date, including Stuart Russell, Steve Wozniak, Elon Musk, and Yuval Noah Harari.

Leaving aside whether six months would be long enough to resolve issues around AI safety, there is an important question to be answered here.  Regardless of what the future might hold, there are already numerous cases where existing computerised and AI systems have caused harm.  Why, then, are we racing forward in this field?  Has the combination of tech multinationals and unrestrained capitalism become such an unstoppable juggernaut that humanity is no longer able to control where the forces we have created are taking us?  If not, then why won’t governments intervene to put the brakes on the development and use of AI, and what interests are they actually working to protect?  This is unlikely to be a line of inquiry the Lords Committee will be pursuing.  Read more

Book Review: Navigating a way through the ethical maze of new technologies

  • Technology Is Not Neutral: A Short Guide To Technology Ethics, Stephanie Hare, London Publishing Partnership, Feb 2022
  • The Political Philosophy of AI, Mark Coeckelbergh, Polity Press, Feb 2022

New technologies such as artificial intelligence (AI) raise formidable political and ethical challenges, and these two books each provide a different kind of practical toolkit for examining and analysing these challenges.  Through investigating a range of viewpoints and examples they thoroughly disprove the claim that ‘technology is neutral’, often used as a cop-out by those who refuse to take responsibility for the technologies they have developed or promoted.

Mark Coeckelbergh is Professor of Philosophy of Media and Technology at the University of Vienna, and his book ‘The Political Philosophy of AI’ encourages us to reflect on what advanced technologies such as AI are already doing to us, in order to prevent us from becoming their helpless victims.  In many ways the book is more about political philosophy than about AI, and is none the worse for that.  Coeckelbergh points out that although a great deal has been written about the technology and ethics of AI, little thought has been given to the impacts of AI from the perspective of political philosophy, and he sets out to correct this omission.

Political theorist Langdon Winner has argued that technology is political, observing that instead of bringing greater democratisation and equality, new technologies may well give even more power to those who already have a great deal of it.  Coeckelbergh’s book exposes the political power that AI wields alongside its technical power, and shows how new technologies such as AI are fundamentally entangled with changes in society.  He explains how the political issues we care about take on new meanings and urgency in the light of technological developments such as advances in robotics, AI, and biotechnology.  To understand the rights and wrongs of new technologies, he argues, we need to consider them from the perspective of political philosophy as well as ethics, as this will help us to clarify the questions and issues they raise.

‘The Political Philosophy of AI’ sets out the theories of political philosophy chapter by chapter as they relate to the major elements of politics today – freedom, justice, equality, democracy, power, and the environment – and for each element explores the consequences we can expect as AI becomes established in society.  This serves to frame the challenges the technology will bring and acts as an evaluative framework for assessing its impacts.  Coeckelbergh also uses the analysis to develop a political philosophy for AI itself, which helps us not only to understand and question our political values but also to gain a deeper insight into the nature of politics and humanity.

Coeckelbergh’s book asks questions rather than giving answers, and this may disappoint some readers.  But the approach is in keeping with his philosophical position that politics should be publicly discussed in a participative and inclusive way, rather than subject to autocratic decisions made by a powerful minority.  That there is virtually no public debate about the wishes of the UK government and others to use AI to transform society says as much about our political system as it does about AI.

“That there is virtually no public debate about the wishes of the UK government and others to use AI to transform society says as much about our political system as it does about AI.”

Read more

Military applications at centre of Britain’s plans to be AI superpower

The UK government published its National AI Strategy in mid-September, billed as a “ten-year plan to make Britain a global AI superpower”.  Despite the hype, the strategy has so far attracted curiously little comment and interest from the mainstream media.  This is a cause for concern, because if the government’s proposals bear fruit they will dramatically change UK society and the lives of UK citizens.  They will also place military applications of AI at the centre of the UK’s AI sector.

The Strategy sets out the government’s ambitions to bring about a transition to an “AI-enabled economy” and develop the UK’s AI industry, building on a number of previously published documents – the 2017 Industrial Strategy, the 2018 AI Sector Deal, and the ‘AI Roadmap’ published by the AI Council earlier this year.  It sets out a ten-year plan based around three ‘pillars’: investing in the UK’s AI sector, placing AI in the mainstream of the UK’s economy by introducing it across all economic sectors and regions of the UK, and governing the use of AI effectively.

Unsurprisingly, in promoting the Strategy the government makes much of the potential of AI technologies to improve people’s lives and solve global challenges such as climate change and public health crises – although it makes no concrete commitments in this respect.  Equally unsurprisingly, it has far less to say up front about the military uses of AI.  However, the small print of the document states that “defence should be a natural partner for the UK AI sector” and reveals that the Ministry of Defence is planning to establish a new Defence AI Centre, which will be “a keystone piece of the modernisation of Defence”, to champion military AI development and use and enable the rapid development of AI projects.  A Defence AI Strategy, expected to be published imminently, will outline how to “galvanise a stronger relationship between industry and defence”.  Read more

Meaning-less human control: Lessons from air defence systems for lethal autonomous weapons

A new report co-published today by Drone Wars UK and the Centre for War Studies at the University of Southern Denmark examines the lessons to be learned from the diminishing human control of air defence systems for the debate about lethal autonomous weapons systems (LAWS) – ‘Killer Robots’, as they are colloquially known.

In an autonomous weapons system, autonomous capabilities are integrated into critical functions relating to the selection and engagement of targets without direct human intervention. Subject expert Professor Noel Sharkey suggests that lethal autonomous weapon systems can be defined as “systems that, once activated, can track, identify and attack targets with violent force without further human intervention”. Examples of such systems include BAE Systems’ Taranis drone, stationary sentries such as the Samsung Techwin SGR-A1, and ground vehicles such as the Kalashnikov Concern Uran-9.
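To make the distinction in Sharkey’s definition concrete, the following is a purely illustrative Python sketch – all names are hypothetical and it is not drawn from any real system.  The structural difference between a human-in-the-loop weapon and a LAWS in this sense is simply whether a human authorisation step sits between identifying a target and attacking it:

```python
# Illustrative sketch only (hypothetical names, no real system).
# The two loops differ in one place: whether a human must authorise
# each engagement after the system has tracked and identified a target.

from dataclasses import dataclass

@dataclass
class Track:
    track_id: int
    classification: str  # e.g. "hostile", "friendly", "unknown"

def engage(track: Track) -> None:
    print(f"Engaging track {track.track_id}")

def operator_confirms(track: Track) -> bool:
    """Human-in-the-loop: a person must positively authorise each engagement."""
    answer = input(f"Engage track {track.track_id} ({track.classification})? [y/N] ")
    return answer.strip().lower() == "y"

def human_in_the_loop(tracks: list[Track]) -> None:
    for track in tracks:
        if track.classification == "hostile" and operator_confirms(track):
            engage(track)

def fully_autonomous(tracks: list[Track]) -> None:
    # Sharkey's definition: once activated, the system attacks
    # "without further human intervention" -- no confirmation step.
    for track in tracks:
        if track.classification == "hostile":
            engage(track)
```

The debate over “meaningful human control” is, in large part, about how much weight that single confirmation step can bear once the rest of the loop is automated.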

Air defence systems are an important area of study in relation to the development of LAWS because they are already in operation and, while not completely autonomous since a human operator remains in control, they have automated and increasingly autonomous features. Vincent Boulanin and Maaike Verbruggen’s study for the Stockholm International Peace Research Institute (SIPRI) estimates that 89 states operate air defence systems. These include global military powers such as the US, the UK, France, Russia, and China, but also regional powers such as Brazil, India, and Japan.  Read more

Humans First: A Manifesto for the Age of Robotics. A review of Frank Pasquale’s ‘New Laws of Robotics’

In 2018, the hashtag #ThankGodIGraduatedAlready began trending on China’s Weibo social media platform.  The tag reflected concern among Chinese students that schools had begun to install the ‘Class Care System’, developed by the Chinese technology company Hanwang.  Cameras monitor pupils’ facial expressions, with deep learning algorithms identifying each student and classifying their behaviour into categories – “focused”, “listening”, “writing”, “answering questions”, “distracted”, or “sleeping”.  Even in a country where mass surveillance is common, students reacted with outrage.
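Hanwang has not published its implementation, but the pipeline described in the reporting – detect each face in a classroom frame, then classify the crop into one of six behaviour labels – can be sketched in a few lines of Python.  The model below is a hypothetical stand-in, not the real system:

```python
# Minimal sketch of the kind of pipeline described above (hypothetical:
# Hanwang has not published its implementation). A generic CNN classifier
# maps each detected face crop to one of the reported behaviour labels.

import torch
import torch.nn as nn

BEHAVIOURS = ["focused", "listening", "writing",
              "answering questions", "distracted", "sleeping"]

class BehaviourClassifier(nn.Module):
    def __init__(self, num_classes: int = len(BEHAVIOURS)):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),  # pool to one feature vector per face
        )
        self.head = nn.Linear(32, num_classes)

    def forward(self, face_crops: torch.Tensor) -> torch.Tensor:
        return self.head(self.features(face_crops).flatten(1))

model = BehaviourClassifier().eval()
face_crops = torch.rand(4, 3, 64, 64)  # stand-in for detected face crops
with torch.no_grad():
    labels = model(face_crops).argmax(dim=1)
print([BEHAVIOURS[i] for i in labels])  # one behaviour label per pupil
```

The technical simplicity of such a pipeline is part of the point: the barriers to deploying it are legal and ethical, not computational.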

There are many technological, legal, and ethical barriers to overcome before machine learning can be widely deployed in such ways, but China, in its push to overtake the US as the world’s leader in artificial intelligence (AI), is racing ahead to introduce the technology before addressing these concerns.  And China is not the only culprit.

Frank Pasquale’s book ‘The New Laws of Robotics: Defending Human Expertise in the Age of AI’ investigates the rapidly advancing use of AI and intelligent machines in an era of automation, and uses a wide range of examples – among which the ‘Class Care System’ is far from the most sinister – to highlight the threats that the rush to robotics poses for human societies.  In a world dominated by corporations and governments with a disposition for centralising control, the adoption of AI is being driven by the dictates of neoliberal capitalism, with the twin aims of increasing profit for the private sector and cutting costs in the public sector.  Read more