Book Review: Navigating a way through the ethical maze of new technologies

  • Technology Is Not Neutral: A Short Guide To Technology Ethics, Stephanie Hare, London Publishing Partnership, Feb 2022
  • The Political Philosophy of AI, Mark Coeckelbergh, Polity Press, Feb 2022

New technologies such as artificial intelligence (AI) raise formidable political and ethical challenges, and these two books each provide a different kind of practical toolkit for examining and analysing these challenges.  Through investigating a range of viewpoints and examples they thoroughly disprove the claim that ‘technology is neutral’, often used as a cop-out by those who refuse to take responsibility for the technologies they have developed or promoted.

Mark Coeckelbergh is Professor of Philosophy of Media and Technology at the University of Vienna, and his book ‘The Political Philosophy of AI’ encourages us to reflect on what advanced technologies such as AI are already doing to us, in order to prevent us from becoming their helpless victims.  In many ways the book is more about political philosophy than about AI, and is none the worse for that.  Coeckelbergh points out that although a great deal has been written about the technology and the ethics of AI, little thought has been given to the impacts of AI from the perspective of political philosophy, and he sets out to correct this omission.

Political theorist Langdon Winner has argued that technology is political, observing that instead of bringing greater democratisation and equality, new technologies may well give even more power to those who already have a great deal of it.  Coeckelbergh’s book exposes the political power that AI wields alongside its technical power and shows how new technologies such as AI are fundamentally entangled with changes in society.  He explains how the political issues we care about in society are changed and take on new meanings and urgency in the light of technological developments such as advances in robotics, AI, and biotechnology.  To understand the rights and wrongs of new technologies, he argues, we need to consider them from the perspective of political philosophy as well as ethics, and doing so will help us to clarify the questions and issues which the technologies raise.

‘The Political Philosophy of AI’ sets out the theories of political philosophy chapter-by-chapter as they relate to the major elements of politics today – freedom, justice, equality, democracy, power, and the environment – and for each element explores the consequences that we can expect as AI becomes established in society.  This serves to frame the challenges that the technology will bring and acts as an evaluative framework to assess its impacts.  Coeckelbergh also uses the analysis to develop a political philosophy for AI itself, which helps us not only to understand and question our political values but also to gain a deeper insight into the nature of politics and humanity.

Coeckelbergh’s book asks questions rather than giving answers, and this may disappoint some readers.  But this approach is in line with the philosophical position that politics should be publicly discussed in a participative and inclusive way, rather than subject to autocratic decisions made by a powerful minority.  That there is virtually no public debate about the wishes of the UK government and others to use AI to transform society says as much about our political system as it does about AI.



Ukraine and the ethical debate on armed drones: some early reflections

Images of Bayraktar TB2 strike in Ukraine – undated.

Putin’s invasion of Ukraine has rightly been condemned across the globe.  The on-going war is horrific, with verified reports of indiscriminate bombing of civilian areas and a number of reports of killings which amount to war crimes.  At the time of writing, the UN reports that around 2,000 civilians have been killed since the invasion began, although the actual figure may be much higher.  It is good to see such widespread condemnation of the war, although it is hard not to ask why there is little condemnation of other wars and not come to the obvious conclusion.

After seven weeks, there is a great deal that can be said about this awful war and the initial reaction to it. But our primary focus, as always, is on the use of armed drones and the ethical debate that surrounds their growing use.

Bayraktar drone use in Ukraine

While a variety of small unarmed drones have been used in Ukraine by both sides for surveillance and intelligence gathering, it is the use of the Turkish Bayraktar TB2 drone by Ukrainian forces that has gained most attention.  Multiple news articles have reported that the Bayraktar drone has been used to deadly effect against Russian heavy weapons, with headlines such as ‘Ukraine’s Drones Are Wreaking Havoc On The Russian Army’ and ‘Ukraine’s Secret Weapon Against Russia: Turkish Drones’.

Military AI Audit: Congress scrutinises how the US is developing its warfighting AI capabilities


In February, the US Government Accountability Office (GAO), which audits and evaluates government activities on behalf of the US Congress, published a study examining the Department of Defense’s approach to developing and deploying artificial intelligence (AI) capabilities in weapon systems and assessing the current status of ‘war-fighting’ AI in the US military.  The GAO report gives an important insight into how the world’s most powerful military plans to use AI in combat.  It also raises a number of important ethical issues which our own Parliament should also be investigating in relation to the UK’s own military AI programmes.

The GAO study concludes that although the US Department of Defense (DoD) is “actively pursuing AI capabilities,” the majority of AI activities supporting warfighting (as opposed to undertaking business and maintenance tasks) remain at the research and development stage as DoD attempts to address the differences between ‘AI’ and traditional computer software.  Research efforts are currently focused on developing autonomy for drones and other uncrewed systems, recognizing targets, and providing recommendations to commanders on the battlefield.  Reflecting the US’ interest in military AI, the budget for the DoD’s Joint AI Center has increased dramatically from $89 million in 2019 to $278 million in 2021.  In total the Joint AI Center has spent approximately $610 million on AI programmes over the past three years, although the GAO considers that it is too soon to assess the effectiveness of this spending.

MoD report urges embrace of human augmentation to fully exploit drones and AI for warfighting


The MoD’s internal think-tank, the Development, Concepts and Doctrine Centre (DCDC), along with the German Bundeswehr Office for Defence Planning (BODP), has published a disturbing new report urging greater investigation of – and investment in – human augmentation for military purposes.  The following is a brief summary of the 100+ page document, with a short comment at the end.

‘Human Augmentation – The Dawn of a New Paradigm’ argues that humans are the ‘weakest link’ in modern warfare, and that there is a need to exploit scientific advances to improve human capabilities.

“Increasing use of autonomous and unmanned systems – from the tactical to the strategic level – could significantly increase the combat effect that an individual can bring to bear, but to realise this potential, the interfaces between people and machines will need to be significantly enhanced. Human augmentation will play an important part in enabling this interface.”

Human augmentation approaches suggested for military exploration include the use of brain interfaces, pharmaceuticals and gene therapy.  Humans, argues the report, should be seen as a ‘platform’ in the same way as vehicles, aircraft and ships, with three elements of ‘the human platform’ to be developed: the physical, the psychological and the social.

The iWars Survey: Mapping the IT sector’s involvement in developing autonomous weapons

A new survey by Drone Wars has begun the process of mapping the involvement of information technology corporations in military artificial intelligence (AI) and robotics programmes, an area of rapidly increasing focus for the military.  ‘Global Britain in a Competitive Age’, the recently published integrated review of security, defence, development, and foreign policy, highlighted the key roles that new military technologies will play in the government’s vision for the future of the armed forces and aspirations for the UK to become a “science superpower”.

Although the integrated review promised large amounts of public funding and support for research in these areas, co-operation from the technology sector will be essential in delivering ‘ready to use’ equipment and systems to the military.  Senior military figures are aware that ‘Silicon Valley’ is taking the lead in the development of autonomous systems for both civil and military use.  Speaking at a NATO-organised conference aimed at fostering links between the armed forces and the private sector, General Sir Chris Deverell, the former Commander of Joint Forces Command, explained:

“The days of the military leading scientific and technological research and development have gone. The private sector is innovating at a blistering pace and it is important that we can look at developing trends and determine how they can be applied to defence and security.”

The Ministry of Defence is actively cultivating technology sector partners to work on its behalf through schemes like the Defence and Security Accelerator (DASA).  However, views on co-operation with the military by those within the commercial technology sector are mixed.  Over the past couple of years there have been regular reports of opposition by tech workers to their employers’ military contracts, including those at Microsoft and Google.

Humans First: A Manifesto for the Age of Robotics. A review of Frank Pasquale’s ‘New Laws of Robotics’

In 2018, the hashtag #ThankGodIGraduatedAlready began trending on China’s Weibo social media platform.  The tag reflected concerns among Chinese students that schools had begun to install the ‘Class Care System’, developed by the Chinese technology company Hanwang.  Cameras monitor pupils’ facial expressions with deep learning algorithms identifying each student, and then classifying their behaviour into various categories – “focused”, “listening”, “writing”, “answering questions”, “distracted”, or “sleeping”. Even in a country where mass surveillance is common, students reacted with outrage.

There are many technological, legal, and ethical barriers to overcome before machine learning can be widely deployed in such ways, but China, in its push to overtake the US as the world’s leader in artificial intelligence (AI), is racing ahead to introduce such technology before addressing these concerns.  And China is not the only culprit.

Frank Pasquale’s book ‘The New Laws of Robotics: Defending Human Expertise in the Age of AI’ investigates the rapidly advancing use of AI and intelligent machines in an era of automation, and uses a wide range of examples – among which the ‘Class Care System’ is far from the most sinister – to highlight the threats that the rush to robotics poses for human societies.  In a world dominated by corporations and governments with a disposition for centralising control, the adoption of AI is being driven by the dictates of neoliberal capitalism, with the twin aims of increasing profit for the private sector and cutting costs in the public sector.