Almost a year ago the Ministry of Defence (MoD) launched its Defence Artificial Intelligence Strategy to explain how it would adopt and exploit artificial intelligence (AI) “at pace and scale”. Among other things, the strategy set out the aspiration for MoD to be “trusted – by the public, our partners and our people, for the safety and reliability of our AI systems, and our clear commitment to lawful and ethical AI use in line with our core values”.
An accompanying policy document, ‘Ambitious, Safe, Responsible’, explained how MoD intended to win trust for its AI systems. The document put forward five Ethical Principles for AI in Defence, and announced that MoD had convened an AI Ethics Advisory Panel: a group of experts from academia, industry and civil society, as well as from within MoD itself, to advise on the development of policy on the safe and responsible development and use of AI.
The AI Ethics Advisory Panel and its role were among the topics of interest to the House of Lords Select Committee on AI in Weapon Systems when it met for the fourth time recently to take evidence on the ethical and human rights issues posed by the development of autonomous weapons and their use in warfare. Witnesses giving evidence at the session were Verity Coyle from Amnesty International, Professor Mariarosaria Taddeo from the Oxford Internet Institute, and Dr Alexander Blanchard from the Alan Turing Institute. As Professor Taddeo is a member of the MoD’s AI Ethics Advisory Panel, former Defence Secretary Lord Browne took the opportunity to ask her to share her experiences of the panel.
“It is the membership of the panel that really interests me. This is a hybrid panel. It has a number of people whose interests are very obvious; it has academics, where the interests are not nearly as clearly obvious, if they have them; and it has some people in industry, who may well have interests.
“What are the qualifications to be a member and what is the process you went through to become a member? At any time were you asked about interests? For example, are there academics on this panel who have been funded by the Ministry of Defence or government to do research? That would be of interest to people. Where is the transparency? This panel has met three times by June 2022. I have no idea how often it has met, because I cannot find anything about what was said at it or who said it. I am less interested in who said it, but it would appear there is no transparency at all about what ethical advice was actually shared.
“As an ethicist, are you comfortable about being in a panel of this nature, which is such an important element of the judgment we will have to take as to the tolerance of our society, in light of our values, for the deployment of these weapons systems? Should it be done in this hybrid, complex way, without any transparency as to who is giving the advice, what the advice is and what effect it has had on what comes out in this policy document?”
Lord Browne’s questions neatly capture some of the concerns which Drone Wars shares about the MoD’s approach to AI ethics. Professor Taddeo set out the benefits of the panel as she saw them in her reply, but clearly shared many of Lord Browne’s concerns. “These are very good questions, which the MoD should address”, she answered. She agreed that “there can be improvement in terms of transparency of the processes, notes and records”, and said that “this is mentioned whenever we meet”. She also raised questions about the effectiveness of the panel, telling the Lords that: “This discussion is one hour and a half, and there are a lot of experts in the room who are all prepared, but we did not even scratch the surface of many issues that we have to address”. The panel is an advisory panel, and “so far, all we have done is to be provided with a draft of, for example, the principles or the document and to give feedback”.
If the only role the MoD’s AI Ethics Advisory Panel has played is advising on principles for inclusion in the Defence Artificial Intelligence Strategy, an obvious question follows: what is needed instead to ensure that MoD develops and uses AI in a safe and responsible way? Professor Taddeo felt that the current panel “is a good effort in the right direction”, but “would hope it is not deemed sufficient to ensure ethical behaviour of defence organisations; more should be done”.
She suggested that there should be “some other board put together to oversee or lead efforts on translating the principles into practice”, in which “ethics has a stronger leverage”. This would be “a panel with a different remit, a panel with a stick, so to speak, that can veto or review operations, which is not in the remit of this panel”, and which “should be held to high standards of transparency and accountability”.
Professor Taddeo was also critical of the government’s approach to the regulation of AI, as set out in the recent White Paper ‘A pro-innovation approach to AI regulation’. She expressed her concerns that the White Paper “seems to embrace the logic that regulation hinders innovation, and that is not the case, especially in a high-risk context”. A lack of regulation “will make risk more concrete” and “will lead to breaches of human rights, accountability and responsibility, which will in turn prompt scandals and lower even further the trust the public has in AI”. This, she felt, is “very problematic”, and represents a missed opportunity to have discussions about regulation of key elements of the use of AI-enabled weapons. Professor Taddeo also disparaged the idea, “which percolates in the paper, of using non-statutory measures”, which she considered “are important when we talk about actions over and above legal compliance – but there are some cases, such as weapons, where we do not have legal compliance set yet”. Following the White Paper’s approach in the defence sector “might lead us to too little regulation in this area”.
It’s important to note that, once again, these concerns are not being expressed by campaigners but by those close to the Ministry of Defence. Professor Taddeo is not only a member of the MoD’s AI Ethics Advisory Panel, but is also a Fellow at the Alan Turing Institute, which is heavily involved in the development of AI for defence and security purposes, and she receives research funding from MoD’s Defence Science and Technology Laboratory. The Lords Committee must recognise that the government’s approach to AI ethics and regulation is flawed, and speak out accordingly.