Book Review: Navigating a way through the ethical maze of new technologies

  • Technology Is Not Neutral: A Short Guide to Technology Ethics, Stephanie Hare, London Publishing Partnership, Feb 2022
  • The Political Philosophy of AI, Mark Coeckelbergh, Polity Press, Feb 2022

New technologies such as artificial intelligence (AI) raise formidable political and ethical challenges, and these two books each provide a different kind of practical toolkit for examining and analysing them. By investigating a range of viewpoints and examples, they thoroughly disprove the claim that ‘technology is neutral’, a claim often used as a cop-out by those who refuse to take responsibility for the technologies they have developed or promoted.

Mark Coeckelbergh is Professor of Philosophy of Media and Technology at the University of Vienna, and his book ‘The Political Philosophy of AI’ encourages us to reflect on what advanced technologies such as AI are already doing to us, in order to prevent us from becoming their helpless victims. In many ways the book is more about political philosophy than about AI, and is none the worse for that. Coeckelbergh points out that although a great deal has been written about the technology and ethics of AI, little thought has been given to the impacts of AI from the perspective of political philosophy, and he sets out to correct this omission.

Political theorist Langdon Winner has argued that technology is political, observing that instead of bringing greater democratisation and equality, new technologies may well give even more power to those who already have a great deal of it. Coeckelbergh’s book exposes the political power that AI wields alongside its technical power and shows how new technologies such as AI are fundamentally entangled with changes in society. He explains how the political issues we care about in society change and take on new meanings and urgency in the light of technological developments such as advances in robotics, AI, and biotechnology. To understand the rights and wrongs of new technologies, he argues, we need to consider them from the perspective of political philosophy as well as ethics, and doing so will help us to clarify the questions and issues that the technologies raise.

‘The Political Philosophy of AI’ sets out the theories of political philosophy chapter by chapter as they relate to the major elements of politics today – freedom, justice, equality, democracy, power, and the environment – and for each element explores the consequences that we can expect as AI becomes established in society. This serves both to frame the challenges that the technology will bring and to provide an evaluative framework for assessing its impacts. Coeckelbergh also uses the analysis to develop a political philosophy for AI itself, which helps us not only to understand and question our political values but also to gain a deeper insight into the nature of politics and humanity.

Coeckelbergh’s book asks questions rather than giving answers, and this may disappoint some readers. But it is in keeping with his view that politics should be publicly discussed in a participative and inclusive way, rather than subject to autocratic decisions made by a powerful minority. That there is virtually no public debate about the wishes of the UK government and others to use AI to transform society says as much about our political system as it does about AI.

Read more

Military applications at centre of Britain’s plans to be AI superpower

The UK government published its National AI Strategy in mid-September, billed as a “ten-year plan to make Britain a global AI superpower”. Despite the hype, the strategy has so far attracted curiously little comment and interest from the mainstream media. This is a cause for concern, because if the government’s proposals bear fruit, they will dramatically change UK society and the lives of UK citizens. They will also place military applications of AI at the centre of the UK’s AI sector.

The Strategy sets out the government’s ambitions to bring about a transition to an “AI-enabled economy” and develop the UK’s AI industry, building on a number of previously published documents – the 2017 Industrial Strategy, the 2018 AI Sector Deal, and the ‘AI Roadmap’ published by the AI Council earlier this year. It sets out a ten-year plan based around three ‘pillars’: investing in the UK’s AI sector, moving AI into the mainstream of the UK’s economy by introducing it across all economic sectors and regions of the UK, and governing the use of AI effectively.

Unsurprisingly, in promoting the Strategy the government makes much of the potential of AI technologies to improve people’s lives and solve global challenges such as climate change and public health crises – although it makes no concrete commitments in this respect. Equally unsurprisingly, it has far less to say up front about the military uses of AI. However, the small print of the document states that “defence should be a natural partner for the UK AI sector” and reveals that the Ministry of Defence is planning to establish a new Defence AI Centre, which will be “a keystone piece of the modernisation of Defence”, to champion military AI development and use and enable the rapid development of AI projects. A Defence AI Strategy, expected to be published imminently, will outline how to “galvanise a stronger relationship between industry and defence”. Read more

Meaning-less human control: Lessons from air defence systems for lethal autonomous weapons

A new report co-published today by Drone Wars UK and the Centre for War Studies at the University of Southern Denmark examines the lessons that the diminishing human control of air defence systems holds for the debate about lethal autonomous weapons systems (LAWS) – ‘killer robots’, as they are colloquially called.

In an autonomous weapons system, autonomous capabilities are integrated into critical functions so that the system can select and engage targets without direct human intervention. Subject expert Professor Noel Sharkey suggests that lethal autonomous weapon systems can be defined as “systems that, once activated, can track, identify and attack targets with violent force without further human intervention”. Examples of such systems include BAE Systems’ Taranis drone, stationary sentries such as the Samsung Techwin SGR-A1, and ground vehicles such as the Kalashnikov Concern Uran-9.

Air defence systems are an important area of study in relation to the development of LAWS: they are already in operation and, while not completely autonomous because a human operator remains in control, they have automated and increasingly autonomous features. Vincent Boulanin and Maaike Verbruggen’s study for the Stockholm International Peace Research Institute (SIPRI) estimates that 89 states operate air defence systems. These include global military powers such as the US, the UK, France, Russia, and China, but also regional powers such as Brazil, India, and Japan. Read more

Humans First: A Manifesto for the Age of Robotics. A review of Frank Pasquale’s ‘New Laws of Robotics’

In 2018, the hashtag #ThankGodIGraduatedAlready began trending on China’s Weibo social media platform. The tag reflected concerns among Chinese students that schools had begun to install the ‘Class Care System’, developed by the Chinese technology company Hanwang. Cameras monitor pupils’ facial expressions, with deep learning algorithms identifying each student and then classifying their behaviour into various categories: “focused”, “listening”, “writing”, “answering questions”, “distracted”, or “sleeping”. Even in a country where mass surveillance is common, students reacted with outrage.
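The pipeline such a system implies is simple to sketch. Below is a minimal, hypothetical illustration in Python: detect each face in a classroom frame, match it to an enrolled student, and sort it into one of the reported behaviour categories. Only the OpenCV face detector is real; Hanwang’s actual system is proprietary, so identify_student, classify_behaviour, and process_frame are placeholder names standing in for trained models we have no access to.

    import cv2

    # The six behaviour labels reported for the 'Class Care System'
    BEHAVIOURS = ["focused", "listening", "writing",
                  "answering questions", "distracted", "sleeping"]

    # Real: the Haar-cascade face detector bundled with OpenCV
    face_detector = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

    def identify_student(face_crop):
        # Placeholder: a deployed system would match the crop against
        # a database of enrolled students using a recognition model.
        return "unknown_student"

    def classify_behaviour(face_crop):
        # Placeholder: a deployed system would run a trained
        # expression/posture classifier over the crop.
        return BEHAVIOURS[0]

    def process_frame(frame):
        # Detect every face in one classroom frame and label each one.
        grey = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        faces = face_detector.detectMultiScale(grey, scaleFactor=1.1,
                                               minNeighbors=5)
        return [(identify_student(frame[y:y + h, x:x + w]),
                 classify_behaviour(frame[y:y + h, x:x + w]))
                for (x, y, w, h) in faces]

    # Example: label a single frame from a camera feed
    cap = cv2.VideoCapture(0)
    ok, frame = cap.read()
    if ok:
        print(process_frame(frame))
    cap.release()

Even this toy version makes the source of the outrage plain: every face in the room is continuously detected, identified, and scored.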

There are many technological, legal, and ethical barriers to overcome before machine learning can be widely deployed in such ways, but China, in its push to overtake the US as the world’s leader in artificial intelligence (AI), is racing ahead to introduce such technology before addressing these concerns. And China is not the only culprit.

Frank Pasquale’s book ‘The New Laws of Robotics: Defending Human Expertise in the Age of AI’ investigates the rapidly advancing use of AI and intelligent machines in an era of automation, and uses a wide range of examples – among which the ‘Class Care System’ is far from the most sinister – to highlight the threats that the rush to robotics poses for human societies.  In a world dominated by corporations and governments with a disposition for centralising control, the adoption of AI is being driven by the dictates of neoliberal capitalism, with the twin aims of increasing profit for the private sector and cutting costs in the public sector.  Read more

Watch: ‘Drone Warfare: Today, Tomorrow, Forever?’

Here’s a recording of the webinar marking our 10th anniversary, ‘Drone Warfare: Today, Tomorrow, Forever?’

The event featured:

  • Aditi Gupta, Coordinator for the All-Party Parliamentary Group
  • Chris Cole, Director of Drone Wars UK
  • Ella Knight, campaigner at Amnesty International
  • Rachel Stohl, Vice President at the Stimson Center
  • Elke Schwarz, Lecturer in Political Theory at Queen Mary University of London

Drone Wars at Ten #3: What’s next? A peek at the future

In this final post to mark our 10th birthday, I want to peer a little into the future, looking at what we are facing in relation to drone warfare in the coming years. Of course, predicting the future is always a little foolish – perhaps especially so in the middle of a global pandemic – but four areas of work are already fairly clear: public accountability over the deployment of armed drones; the push to open UK skies to military drones; monitoring the horizontal and vertical proliferation of military drones; and opposing the development of lethal autonomous weapons, aka ‘killer robots’. Read more