The European Ombudsman has ruled that the European Border and Coast Guard Agency, Frontex, should reform its access-to-information arrangements, following complaints from Drone Wars UK and the German open government platform FragDenStaat about the difficulty of obtaining information from the agency.
The Ombudsman’s ruling follows a two-year investigation which examined how Frontex deals with requests for public access to documents, particularly requests submitted by email and through civil society access-to-information websites such as FragDenStaat and AskTheEU.org. At present Frontex only accepts communications through its own difficult-to-use communication portal and refuses to communicate by email or through third-party information access websites – a complicated and unnecessary hurdle for anyone seeking information about the organisation.
As well as investigating the portal requirement and the ability to submit requests and receive documents by email, the Ombudsman, Emily O’Reilly, also inquired into concerns about copyright restrictions placed on Frontex documents, the long-term accessibility of documents through the portal, and Frontex’s requirement that those requesting information provide personal identification despite the lack of any route for doing so.
Drone Wars UK submitted an information request to Frontex in July 2020 as part of our ‘Crossing A Line’ investigation, in which we highlighted the growing use of drones for border control operations and the threats to human rights which this poses. Read more →
Technology Is Not Neutral: A Short Guide To Technology Ethics, Stephanie Hare, London Publishing Partnership, Feb 2022
The Political Philosophy of AI, Mark Coeckelbergh, Polity Press, Feb 2022
New technologies such as artificial intelligence (AI) raise formidable political and ethical challenges, and these two books each provide a different kind of practical toolkit for examining and analysing them. By investigating a range of viewpoints and examples, they thoroughly disprove the claim that ‘technology is neutral’, often used as a cop-out by those who refuse to take responsibility for the technologies they have developed or promoted.
Mark Coeckelbergh is Professor of Philosophy of Media and Technology at the University of Vienna, and his book ‘The Political Philosophy of AI’ encourages us to reflect on what advanced technologies such as AI are already doing to us, in order to prevent us from becoming their helpless victims. In many ways the book is more about political philosophy than about AI, and is none the worse for that. Coeckelbergh points out that although a great deal has been written about the technology and the ethics of AI, little thought has been given to the impacts of AI from the perspective of political philosophy, and he sets out to correct this omission.
Political theorist Langdon Winner has argued that technology is political, observing that instead of bringing greater democratisation and equality, new technologies may well give even more power to those who already have a great deal of it. Coeckelbergh’s book exposes the political power that AI wields alongside its technical power and shows how new technologies such as AI are fundamentally entangled with changes in society. He explains how the political issues we care about in society are changed, and take on new meanings and urgency, in the light of technological developments such as advances in robotics, AI, and biotechnology. To understand the rights and wrongs of new technologies, he argues, we need to consider them from the perspective of political philosophy as well as ethics, and doing so will help us to clarify the questions and issues they raise.
‘The Political Philosophy of AI’ sets out the theories of political philosophy chapter by chapter as they relate to the major elements of politics today – freedom, justice, equality, democracy, power, and the environment – and for each element explores the consequences we can expect as AI becomes established in society. This serves to frame the challenges the technology will bring and acts as an evaluative framework for assessing its impacts. Coeckelbergh also uses the analysis to develop a political philosophy for AI itself, which helps us not only to understand and question our political values but also to gain a deeper insight into the nature of politics and humanity.
Coeckelbergh’s book asks questions rather than giving answers, and this may disappoint some readers. But the approach is in line with his philosophical position that politics should be publicly discussed in a participative and inclusive way, rather than subject to autocratic decisions made by a powerful minority. That there is virtually no public debate about the wishes of the UK government and others to use AI to transform society says as much about our political system as it does about AI.
Loitering munitions are now hitting the headlines as a result of their use in the Ukraine war. Vivid descriptions of ‘kamikaze drones’ and ‘suicide drones’ outline the way in which these weapons operate: they are able to find targets and fly towards them before crashing into them and exploding. Both Russia and Ukraine are deploying loitering munitions, which allow soldiers to fire on targets such as tanks and heavy armour without the predictability of a mortar or artillery round fired on a set trajectory. Under some circumstances these ‘fire and forget’ weapons may be able to operate with a high degree of autonomy: for example, they can be programmed to fly around a defined search area autonomously and highlight possible targets such as tanks to the operator. In these circumstances they can be largely independent of human control. This trend towards increasing autonomy in weapons systems raises questions about how such weapons might shape the future of warfare and the morality of their use.
Loitering munitions such as these have previously been used to military effect in Syria and the 2020 Nagorno-Karabakh war. Although they are often described as drones, they are in many ways more like a smart missile than an uncrewed aircraft. First developed in the 1980s, they can be thought of as a ‘halfway house’ between drones and cruise missiles: unlike drones they are expendable, and unlike cruise missiles they have the ability to loiter passively in the target area and search for a target. Potential targets are identified using radar, thermal imaging, or visual sensor data and, to date, a human operator selects the target and executes the command to destroy it. They are disposable, one-time-use weapons intended to hunt for a target and then destroy it, hence their tag as ‘kamikaze’ weapons. Dominic Cummings, former chief advisor to the Prime Minister, describes a loitering munition as a “drone version of the AK-47: a cheap anonymous suicide drone that flies to the target and blows itself up – it’s so cheap you don’t care”. Read more →
In February, the US Government Accountability Office (GAO), which audits and evaluates government activities on behalf of the US Congress, published a study examining the Department of Defense’s approach to developing and deploying artificial intelligence (AI) capabilities in weapon systems and assessing the current status of ‘war-fighting’ AI in the US military. The GAO report gives an important insight into how the world’s most powerful military plans to use AI in combat. It also raises a number of important ethical issues that our own Parliament should likewise be investigating in relation to the UK’s military AI programmes.
The GAO study concludes that although the US Department of Defense (DoD) is “actively pursuing AI capabilities,” the majority of AI activities supporting warfighting (as opposed to business and maintenance tasks) remain at the research and development stage as DoD attempts to address the differences between ‘AI’ and traditional computer software. Research efforts are currently focused on developing autonomy for drones and other uncrewed systems, recognising targets, and providing recommendations to commanders on the battlefield. Reflecting US interest in military AI, the budget for the DoD’s Joint AI Center has increased dramatically from $89 million in 2019 to $278 million in 2021. In total the Joint AI Center has spent approximately $610 million on AI programmes over the past three years, although the GAO considers that it is too soon to assess the effectiveness of this spending. Read more →
Soon after, the Defence Select Committee announced that it was to scrutinize the decision and sought submissions from interested parties:
“The Government’s decision that the Royal Navy should take over operations in the Channel has taken Parliament (and it seems the MOD) by surprise. There are significant strategic and operational implications surrounding this commitment which need to be explored.”
Shockingly, both the Ministry of Defence and the Home Office refused to submit evidence or send ministers to answer questions from the Committee.
Our full submission to the Committee on this issue – looking in particular at how drones are often seen as a ‘solution’ – is available on their website, while here we offer a short summary.
Drone Wars argues that the military should not be involved in day-to-day border control operations in the absence of any threat of military invasion. This is primarily a policing and enforcement role, centred on dealing with civilians, which should be conducted by civilian agencies. Military forces are not principally trained or equipped to deal with humanitarian or policing situations. The UK’s borders are not a war zone, and civilians attempting to enter and leave the country are not armed combatants.
The influential State of AI Report 2021, published in October, makes the alarming observation that the adoption of artificial intelligence (AI) for military purposes is now moving from research into the production phase. The report highlights three indicators which it argues show this development, one of which is the progress that the US Air Force Research Laboratory is making in testing its autonomous ‘Skyborg’ system to control military drones.
Skyborg (the name is a play on the word ‘cyborg’ – a biological lifeform that has been augmented with technology such as bionic implants) is intended to be an AI ‘brain’ capable of controlling an aircraft in flight. Initially, the technology is planned to assist a human pilot in flying the aircraft.
As is often the case with publicity material for military equipment programmes, it is not always easy to distinguish facts from hype or to penetrate the technospeak in which statements from developers are written. However, news reports and press statements show that over the past year the US Air Force has for the first time succeeded in demonstrating an “active autonomy capability” during test flights of the Skyborg system, as a first step towards being able to use the system in combat.
Official literature on the system states that Skyborg is an “autonomous aircraft teaming architecture”, consisting of a core autonomous control system (ACS): a ‘brain’ comprised of both hardware and software components which can be used to both assist the pilot of a crewed combat aircraft and fly a swarm of uncrewed drones. The system is being designed by the military IT contractor Leidos, with input from the US Air Force and other Skyborg contractors. It would allow the aircraft to autonomously avoid other aircraft, terrain, obstacles, and hazardous weather, and take off and land on its own. Read more →