Soldiers who see in the dark, communicate telepathically, or fly a drone by thought alone all sound like characters from a science fiction film. Yet research projects investigating all these possibilities are under way in laboratories and research centres around the globe, part of an upsurge of interest in human enhancement enabled largely by expanding knowledge in the field of neuroscience: the study of the human brain and nervous system.
To help in understanding the possibilities and hazards posed by human enhancement technology, Drone Wars UK is publishing ‘Cyborg Dawn?’, a new study investigating the military use of human augmentation.
Human enhancement – a medical or biological intervention to the body designed to improve performance, appearance, or capability beyond what is necessary to achieve, sustain or restore health – may lead to fundamentally new concepts of warfare and can be expected to play a role in enabling the increased use of remotely operated and uncrewed systems in war.
Although military planners are eager to create ‘super soldiers’, the idea of artificially modifying humans to give them capabilities beyond their natural abilities presents significant moral, legal, and health risks. The field of human augmentation is fraught with danger, and without stringent regulation, neurotechnologies and genetic modification will lead us to an increasingly dangerous future where technology encourages and accelerates warfare. The difficulties are compounded by the dual use nature of human augmentation, where applications with legitimate medical uses could equally be used to further the use of remote lethal military force. There is currently considerable discussion about the dangers of ‘killer robot’ autonomous weapon systems, but it is also time to start discussing how to control human enhancement and cyborg technologies which military planners intend to develop. Read more →
L-R: Alexander Blanchard, Digital Ethics Research Fellow, Alan Turing Institute; Mariarosaria Taddeo, Associate Professor, Oxford Internet Institute; Verity Coyle, Senior Campaigner/Advisor, Amnesty UK
Almost a year ago the Ministry of Defence (MoD) launched its Defence Artificial Intelligence Strategy to explain how it would adopt and exploit artificial intelligence (AI) “at pace and scale”. Among other things, the strategy set out the aspiration for MoD to be “trusted – by the public, our partners and our people, for the safety and reliability of our AI systems, and our clear commitment to lawful and ethical AI use in line with our core values”.
An accompanying policy document, ‘Ambitious, Safe, Responsible’, explained how MoD intended to win trust for its AI systems. The document put forward five Ethical Principles for AI in Defence, and announced that MoD had convened an AI Ethics Advisory Panel: a group of experts from academia, industry, civil society and from within MoD itself to advise on the development of policy on the safe and responsible development and use of AI.
The AI Ethics Advisory Panel and its role were among the topics of interest to the House of Lords Select Committee on AI in Weapon Systems when it met for the fourth time recently to take evidence on the ethical and human rights issues posed by the development of autonomous weapons and their use in warfare. Witnesses giving evidence at the session were Verity Coyle from Amnesty International, Professor Mariarosaria Taddeo from the Oxford Internet Institute, and Dr Alexander Blanchard from the Alan Turing Institute. As Professor Taddeo is a member of the MoD’s AI Ethics Advisory Panel, former Defence Secretary Lord Browne took the opportunity to ask her to share her experiences of the panel.
Lord Browne:
“It is the membership of the panel that really interests me. This is a hybrid panel. It has a number of people whose interests are very obvious; it has academics, where the interests are not nearly as clearly obvious, if they have them; and it has some people in industry, who may well have interests.
What are the qualifications to be a member and what is the process you went through to become a member? At any time were you asked about interests? For example, are there academics on this panel who have been funded by the Ministry of Defence or government to do research? That would be of interest to people. Where is the transparency? This panel has met three times by June 2022. I have no idea how often it has met, because I cannot find anything about what was said at it or who said it. I am less interested in who said it, but it would appear there is no transparency at all about what ethical advice was actually shared.
As an ethicist, are you comfortable about being in a panel of this nature, which is such an important element of the judgment we will have to take as to the tolerance of our society, in light of our values, for the deployment of these weapons systems? Should it be done in this hybrid, complex way, without any transparency as to who is giving the advice, what the advice is and what effect it has had on what comes out in this policy document?”
Lord Browne’s questions neatly capture some of the concerns which Drone Wars shares about the MoD’s approach to AI ethics. Professor Taddeo set out the benefits of the panel as she saw them in her reply, but clearly shared many of Lord Browne’s concerns. “These are very good questions, which the MoD should address”, she answered. She agreed that “there can be improvement in terms of transparency of the processes, notes and records”, and said that “this is mentioned whenever we meet”. She also raised questions about the effectiveness of the panel, telling the Lords that: “This discussion is one hour and a half, and there are a lot of experts in the room who are all prepared, but we did not even scratch the surface of many issues that we have to address”. The panel is an advisory panel, and “so far, all we have done is to be provided with a draft of, for example, the principles or the document and to give feedback”.
If the only role the MoD’s AI Ethics Advisory Panel has played is to advise on principles for inclusion in the Defence Artificial Intelligence Strategy, then an obvious question is: what is needed instead to ensure that MoD develops and uses AI in a safe and responsible way? Professor Taddeo felt that the current panel “is a good effort in the right direction”, but “would hope it is not deemed sufficient to ensure ethical behaviour of defence organisations; more should be done”. Read more →
Drone Wars UK is today publishing a short paper analysing the UK’s approach to the ethical issues raised by the use of artificial intelligence (AI) for military purposes in two recent policy documents. The first part of the paper reviews and critiques the Ministry of Defence’s (MoD’s) Defence Artificial Intelligence Strategy published in June 2022, while the second part considers the UK’s commitment to ‘responsible’ military artificial intelligence capabilities, presented in the document ‘Ambitious, Safe, Responsible’, published alongside the strategy document.
Once the realm of science fiction, the technology needed to build autonomous weapon systems is currently under development in a number of nations, including the United Kingdom. Given recent advances in unmanned aircraft technology, it is likely that the first autonomous weapons will be drone-based systems.
Drone Wars UK believes that the development and deployment of AI-enabled autonomous weapons would give rise to a number of grave risks, primarily the loss of human values on the battlefield. Giving machines the ability to take life crosses a key ethical and legal Rubicon. Lethal autonomous drones would simply lack human judgment and other qualities that are necessary to make complex ethical choices on a dynamic battlefield, to distinguish adequately between soldiers and civilians, and to evaluate the proportionality of an attack.
In the short term it is likely that the military applications of autonomous technology will be in low-risk areas, such as logistics and the supply chain, where, proponents argue, there are cost advantages and minimal implications for combat situations. These systems are likely to be closely supervised by human operators. In the longer term, as technology advances and AI becomes more sophisticated, autonomous technology is increasingly likely to become weaponised and the degree of human supervision can be expected to drop.
The real issue perhaps is not the development of autonomy itself but the way in which this milestone in technological development is controlled and used by humans. Autonomy raises a wide range of ethical, legal, moral and political issues relating to human judgement, intentions, and responsibilities. These questions remain largely unresolved and there should therefore be deep disquiet about the rapid advance towards developing autonomous weapons systems. Read more →
Technology Is Not Neutral: A Short Guide To Technology Ethics, Stephanie Hare, London Publishing Partnership, Feb 2022
The Political Philosophy of AI, Mark Coeckelbergh, Polity Press, Feb 2022
New technologies such as artificial intelligence (AI) raise formidable political and ethical challenges, and these two books each provide a different kind of practical toolkit for examining and analysing these challenges. Through investigating a range of viewpoints and examples they thoroughly disprove the claim that ‘technology is neutral’, often used as a cop-out by those who refuse to take responsibility for the technologies they have developed or promoted.
Mark Coeckelbergh is Professor of Philosophy of Media and Technology at the University of Vienna, and his book ‘The Political Philosophy of AI’ encourages us to reflect on what advanced technologies such as AI are already doing to us, in order to prevent us from becoming their helpless victims. In many ways the book is more about political philosophy than about AI, and is none the worse for that. Coeckelbergh points out that although a great deal has been written about the technology and the ethics of AI, there has been little thought on the impacts of AI from the perspective of political philosophy, and he sets out to correct this omission.
Political theorist Langdon Winner has argued that technology is political, observing that instead of bringing greater democratisation and equality, new technologies may well give even more power to those who already have a great deal of it. Coeckelbergh’s book exposes the political power that AI wields alongside its technical power and shows how new technologies such as AI are fundamentally entangled with changes in society. He explains how the political issues we care about in society are changed and take on new meanings and urgency in the light of technological developments such as advances in robotics, AI, and biotechnology. To understand the rights and wrongs of new technologies, he argues, we need to consider them from the perspective of political philosophy as well as ethics, and doing so will help us to clarify the questions and issues which these technologies raise.
‘The Political Philosophy of AI’ sets out the theories of political philosophy chapter-by-chapter as they relate to the major elements of politics today – freedom, justice, equality, democracy, power, and the environment – and for each element explores the consequences that we can expect as AI becomes established in society. This serves to frame the challenges that the technology will bring and act as an evaluative framework to assess its impacts. Coeckelbergh also uses the analysis to develop a political philosophy for AI itself, which helps us to not only understand and question our political values but also gain a deeper insight into the nature of politics and humanity.
Coeckelbergh’s book asks questions rather than gives answers, and this may disappoint some readers. But this approach is in line with the philosophical approach that politics should be publicly discussed in a participative and inclusive way, rather than subject to autocratic decisions made by a powerful minority. That there is virtually no public debate about the wishes of the UK government and others to use AI to transform society says as much about our political system as it does about AI.
Images of Bayraktar TB2 strike in Ukraine – undated.
Putin’s invasion of Ukraine has rightly been condemned across the globe. The on-going war is horrific, with verified reports of indiscriminate bombing of civilian areas and a number of reports of killings which amount to war crimes. At the time of writing, the UN reports that around 2,000 civilians have been killed since the invasion began, although the actual figure may be much higher. It is good to see such widespread condemnation of the war, although it is hard not to ask why there is little condemnation of other wars and not come to the obvious conclusion.
After seven weeks, there is a great deal that can be said about this awful war and the initial reaction to it. But our primary focus, as always, is on the use of armed drones and the ethical debate that surrounds their growing use.
Bayraktar drone use in Ukraine
While a variety of small unarmed drones have been used in Ukraine by both sides for surveillance and intelligence gathering, it is the use of the Turkish Bayraktar TB2 drone by Ukrainian forces that has gained most attention. Multiple news articles have reported that the Bayraktar drone has been used to deadly effect against Russian heavy weapons with headlines such as ‘Ukraine’s Drones Are Wreaking Havoc On The Russian Army’ and ‘Ukraine’s Secret Weapon Against Russia: Turkish Drones’. Read more →
In February, the US Government Accountability Office (GAO), which audits and evaluates government activities on behalf of the US Congress, published a study examining the Department of Defense’s approach to developing and deploying artificial intelligence (AI) capabilities in weapon systems and assessing the current status of ‘war-fighting’ AI in the US military. The GAO report gives an important insight into how the world’s most powerful military plans to use AI in combat. It also raises a number of important ethical issues which our own Parliament should be investigating in relation to the UK’s military AI programmes.
The GAO study concludes that although the US Department of Defense (DoD) is “actively pursuing AI capabilities,” the majority of AI activities supporting warfighting (as opposed to undertaking business and maintenance tasks) remain at the research and development stage as DoD attempts to address the differences between ‘AI’ and traditional computer software. Research efforts are currently focused on developing autonomy for drones and other uncrewed systems, recognizing targets, and providing recommendations to commanders on the battlefield. Reflecting the US’ interest in military AI, the budget for the DoD’s Joint AI Center has increased dramatically from $89 million in 2019 to $278 million in 2021. In total the Joint AI Center has spent approximately $610 million on AI programmes over the past three years, although the GAO considers that it is too soon to assess the effectiveness of this spending. Read more →