Military AI: MoD’s timid approach to challenging ethical issues will not be enough to prevent harm

Papers released to Drone Wars UK by the Ministry of Defence (MoD) under the Freedom of Information Act reveal that work on preparing ethical guidance for MoD staff working on military artificial intelligence (AI) projects is proceeding at a snail’s pace.  As a result, MoD’s much-vaunted AI strategy and ethical principles are at risk of failing as the department races ahead to develop AI as a key military technology.

Minutes of meetings of MoD’s Ethical Advisory Panel show that although officials have repeatedly stressed the need to focus on implementation of AI programmes, the ethical framework and guidelines needed to ensure that AI systems are safe and responsible are still only in draft form and there is “not yet a distinct sense of a clear direction” as to how they will be developed.

The FOI papers also highlight concerns about the transparency of the panel’s work.  Independent members of the panel have repeatedly stressed the need for the panel to work in an open and transparent manner, yet MoD refuses to publish the terms of membership, meeting minutes, and reports prepared for the panel.  With the aim of remedying this situation, Drone Wars UK is publishing the panel documents released in response to our FOI request as part of this blog article (see pdf files at the end of the article).

The Ministry of Defence AI Ethics Advisory Panel

One of the aims of the Defence Artificial Intelligence Strategy, published in June 2022, was to set out MoD’s “clear commitment to lawful and ethical AI use in line with our core values”.  To help meet this aim MoD published a companion document, entitled ‘Ambitious, safe, responsible‘ alongside the strategy to represent “a positive blueprint for effective, innovative and responsible AI adoption”.

‘Ambitious, safe, responsible’ had two main foundations: a set of ethical principles to guide MoD’s use of AI and an Ethics Advisory Panel, described as “an informal advisory board” to assist with policy relating to the safe and responsible development and use of AI.  The document stated that the panel had assisted in formulating the ethical principles and listed the members of the panel, who are drawn from within the Ministry of Defence and the military, from industry, and from universities and civil society.

The terms of reference for the panel were not published in the ‘Ambitious, safe, responsible’ document, but the FOI papers provided to Drone Wars UK show that it is tasked with advising on:

  • “The development, maintenance and application of a set of ethical principles for AI in Defence, which will demonstrate the MOD’s position and guide our approach to responsible AI across the department.
  • “A framework for implementing these principles and related policies / processes across Defence.
  • “Appropriate governance and decision-making processes to assure ethical outcomes in line with the department’s principles and policies”.

The ethical principles were published alongside the Defence AI Strategy, but more than two years after the panel first met – and despite a constant refrain at panel meetings on the need to focus on implementation – it has yet to make substantial progress on the second and third of these objectives.  An implementation framework and associated policies, governance, and decision-making processes have yet to appear.  This appears to be due not to any shortcomings on the part of the panel, whose members seem to have a keen appetite for their work, but rather to slow progress by MoD.  In the meantime, work is proceeding at full speed on the development of AI systems in the absence of these key ethical tools.

The work of the panel

The first meeting of the panel, held in March 2021, was chaired by Stephen Lovegrove, the then Permanent Secretary at the Ministry of Defence.  The panel discussed the MoD’s work to date on developing an AI Ethics framework and the panel’s role and objectives.  The panel was to be a “permanent and ongoing source of scrutiny” and “should provide expert advice and challenge” to MoD, working through a regular quarterly meeting cycle.

This was followed by a longer discussion over a set of draft ethical principles prepared by MoD staff to govern the department’s use of AI (as subsequently published in the MoD’s ‘Ambitious, safe, responsible’ document).  Panellists took the view that this was an area where the UK could potentially show real global leadership, and the notes of the discussion indicate that their input was constructive, and in general very much along the lines that Drone Wars UK would wish to see.

At the panel’s second meeting, which took place later in the same month, there was further discussion of the ethical principles, centred on an updated draft document prepared by MoD, and a presentation from the government’s Centre for Data Ethics and Innovation (CDEI) on implementation of the principles, with examples of how they might be applied to various use cases.

This was followed by an open discussion on “potential constraints for Defence” which might be introduced by adopting the principles.

The third meeting, six months later in October 2021, was chaired by Laurence Lee, the Second Permanent Secretary at the MoD.  In his introduction the chair stressed that MoD’s work in the area “must now turn from high level policy and principles to implementation across the organisation”, and the main business of the meeting was a discussion of key issues for implementing the ethical principles across MoD.  MoD informed the meeting that procurement and purchasing, education, research and development, and operations were its primary focus for implementing AI projects, and the panel considered whether an independent board should be set up to undertake AI assurance within MoD.  There was also an update on recent MoD activity relating to AI ethics, in which members showed a refreshing commitment to transparency about the panel’s work.

The fourth meeting of the panel took place in July 2022 following the publication of the Defence AI Strategy the previous month (postponed from the originally planned publication date in October 2021).  During reflections on the launch of the strategy (with another pledge that the task ahead was to “focus on implementation”), the view that the UK needs to lead in shaping the global landscape on accepted norms was again expressed, with one panel member pointing out that the UK’s position was, in fact, cautious and not world leading.

A discussion on implementing AI ethics principles took up most of the rest of the meeting.  This has been partly redacted in the papers released to Drone Wars UK.  The notes of the discussion state that “The Armed Forces need to set the right policy, permissions and constraints frameworks that comply with our legal and ethical obligations but also do not impede our ability to fight effectively”, raising the question as to whether there is a tension between these two objectives and, if it came to the crunch, which of the two MoD would prioritise.  One panel member argued that “principles couldn’t be seen as obstacles”.  Drone Wars UK would argue that ethical principles which do not act as obstacles to prevent unethical conduct will be meaningless and pointless.

Following the discussion MoD’s AI Advisory Unit was tasked with preparing a paper for the next panel meeting outlining its plans and progress on implementing an ethical framework.  The last few minutes of the meeting were spent considering the future role of the panel following publication of the Defence AI Strategy and the accompanying ‘Ambitious, safe, responsible’ paper.  It was agreed that the AI Advisory Unit would set out options for the panel’s future role for discussion at the next meeting.

The fifth, and most recent, meeting of the panel did not take place until ten months later in April 2023.  This meeting was chaired by Damian Parminter, MoD’s Director General Strategy and International, who yet again exhorted a focus on implementation, informing the panel that “strategic implementation of AI was one of the Secretary of State’s biggest priorities, especially considering how digital technologies have been used effectively in Ukraine” and that “the UK wants to be a thought leader in this space”.

The session began with reflections on the ‘Responsible Use of AI in the Military Domain’ (REAIM) summit hosted by the Netherlands and South Korea in The Hague in February, at which MoD had hosted a workshop on the UK’s approach to adopting AI.  The discussion then turned to implementing MoD’s AI ethics principles.  A draft document on the topic was presented to the panel, but details of the presentation and feedback received from the panel have been redacted from the documents provided to Drone Wars UK.  Following this came a presentation on methodology for interpreting the AI ethics principles from panel member Mariarosaria Taddeo of the Oxford Internet Institute and Alan Turing Institute, based on research commissioned by the Defence Science and Technology Laboratory (DSTL).  Again, details of the presentation have been redacted, as has much of the feedback from panellists.  One member noted that “there was not yet a distinct sense of a clear direction going forward”, perhaps hinting at frustrations over slow progress.

MoD’s Chris Moore-Bick pledged to report back to the panel with a further iteration of the paper at the next meeting, scheduled to take place in the summer, and accepted that “this needed to be done with a clear sense of where we needed to get to by the end of this year”.

Reflections on the role of the panel

What can be concluded from the notes of the Ethics Advisory Panel’s first five meetings?  Firstly, it seems that MoD’s commitment to an ethical approach for its AI programmes is slowly waning.  The high level commitment to the panel’s work appears to be gradually fading away, with top officials losing interest, chairing responsibilities being delegated to subordinates, and meeting timetables slipping.  One might argue that this is normal for new initiatives in large organisations, but at the same time we need to ask whether this would be allowed to happen for, say, a critical high profile equipment delivery project.

Fourteen months after the Defence AI Strategy was published and thirty months after the panel first started discussing ethical principles, there is still no sign of any guidance or framework for implementing these principles.  This is an acid test for MoD’s commitment to ethical AI: if AI projects are proceeding apace in the absence of an ethical implementation framework it is difficult to see how MoD can claim to be “setting an example for the safe and ethical deployment of AI through how it governs its own use of the technology”, to quote from the Defence AI Strategy.

The lack of transparency about the panel’s activities is also a matter for concern, especially given that MoD’s own ethical principles pledge that “what our systems do, how we intend to use them, and our processes for ensuring beneficial outcomes result from their use should be as transparent as possible” and that panel members have themselves called for more transparency.  With a long-standing and deeply ingrained culture of secrecy and unaccountability, MoD officials evidently have no understanding of how to ‘do transparency’: they may pay lip service to the concept but they don’t really ‘get it’ or understand the implications.

This timidity also seems to extend to caution about giving outsiders too much of a say in deliberations over AI policy.  As the ‘Ambitious, safe, responsible’ document states, the panel has an advisory role only and has no formal decision-making powers, meaning that ultimately decisions on ethics will be made by the same people who are responsible for winning battles.

Mariarosaria Taddeo, Associate Professor, Oxford Internet Institute (second left) gives evidence to Lords Select Committee

An insider view of the work of the ethics panel was recently given by panel member Mariarosaria Taddeo to the House of Lords Select Committee which is currently investigating artificial intelligence in weapon systems.  While strongly supporting the need for such a panel and generally positive about the panel’s work, Professor Taddeo felt that improvements could be made, telling the Committee: “It is a step in the right direction, but it is the first one” and that “more should be done”.

Professor Taddeo said that there could be “improvement in terms of transparency of the processes, notes and records”, and that “this is mentioned whenever we meet”.  In terms of the committee’s work to date, “so far, all we have done is to be provided with a draft of, for example, the principles or the document and to give feedback”.  She stressed the need for MoD and the panel to get to grips with translating the published ethical principles into something more concrete, flagging up the risks resulting from the lack of an implementation framework:

“If left only to practitioners, the temptation could be to translate those principles – which resemble constitutional principles in their nature, being very high level and in plain language – into simple operational measures, missing the need to balance those principles against each other in some circumstances, or missing the point that those principles require ethical and critical reflections on how to better implement and interpret the spirit in specific contexts”.

As MoD races ahead on delivering its AI programmes this appears to be exactly what is happening.

Professor Taddeo felt that, although “a panel such as this is very much needed in any defence organisation”, she “would hope it is not deemed sufficient to ensure ethical behaviour of defence organisations”.  The work of the panel should be taken forward through “other panels and boards where ethics has stronger leverage” and with a different remit: “a stick, so to speak, that can veto or review operations, which is not in the remit of this panel”.

We will watch with great interest to see how MoD responds to these suggestions.

Documents released to Drone Wars UK

