MoD to hold ‘duel of drones’ to choose new armed unmanned system

Artist’s conception of Loyal Wingman drones

The Ministry of Defence (MoD) will launch a series of competitions this autumn to progress the selection of an armed loyal wingman drone, culminating in a duel between the two finalists – “an operational fly-off”, as Sir Mike Wigston, Chief of the Air Staff, described it.  The initiative comes after the abrupt cancellation earlier this summer of Project Mosquito, which was to develop a loyal wingman drone technology demonstrator for the RAF but was judged unable to deliver an operational drone within the desired timeframe.  The RAF’s Rapid Capabilities Office (RCO) will run the new process, which is open to both UK and international industry and is aimed at acquiring a “Mosquito type autonomous combat vehicle”.

Loyal Wingman

The concept of loyal wingman drones is for one or more to fly alongside, or in the vicinity of, a piloted military aircraft – currently, for the UK, that would be Typhoon and F-35, but in the future, Tempest – with the drones carrying out specific tasks such as surveillance, electronic warfare (i.e. radar jamming), laser-guiding weapons onto targets, or air-to-air or air-to-ground strikes.  Rather than being directly controlled by an individual pilot on the ground, as the UK’s current fleet of Reaper drones are, these drones fly autonomously, sharing data and information with commanders on the ground via the main aircraft.

In addition, loyal wingman drones are supposed to be cheap enough that they can be either entirely expendable or ‘attritable’ (that is, not quite expendable, but cheap enough that it is not a significant event if one is shot down or crashes).  However, Aviation International News, which spoke to an RCO insider, reported that the focus would now centre on exploring a drone that fits somewhere between Category 1 (expendable airframes) and Category 2 (attritable airframes).  According to the source, there is also a Category 3, which is survivable, indicating a larger airframe with stealth and other advanced technology – and, no doubt, a much higher price tag.

Which drones will take part in the ‘fly-off’, and which will come out on top as the UK’s loyal wingman drone, is hard to predict, not least because the MoD’s criteria do not yet appear to be fixed.  However, a few of the likely competitors are already emerging:  Read more

Book Review: Navigating a way through the ethical maze of new technologies

  • Technology Is Not Neutral: A Short Guide To Technology Ethics, Stephanie Hare, London Publishing Partnership, Feb 2022
  • The Political Philosophy of AI, Mark Coeckelbergh, Polity Press, Feb 2022

New technologies such as artificial intelligence (AI) raise formidable political and ethical challenges, and these two books each provide a different kind of practical toolkit for examining and analysing these challenges.  Through investigating a range of viewpoints and examples, they thoroughly disprove the claim that ‘technology is neutral’, often used as a cop-out by those who refuse to take responsibility for the technologies they have developed or promoted.

Mark Coeckelbergh is Professor of Philosophy of Media and Technology at the University of Vienna, and his book ‘The Political Philosophy of AI’ encourages us to reflect on what advanced technologies such as AI are already doing to us, in order to prevent us from becoming their helpless victims.  In many ways the book is more about political philosophy than about AI, and is none the worse for that.  Coeckelbergh points out that although a great deal has been written about the technology and the ethics of AI, there has been little thought on the impacts of AI from the perspective of political philosophy, and he sets out to correct this omission.

Political theorist Langdon Winner has argued that technology is political, observing that instead of bringing greater democratisation and equality, new technologies may well give even more power to those who already have a great deal of it.  Coeckelbergh’s book exposes the political power that AI wields alongside its technical power and shows how new technologies such as AI are fundamentally entangled with changes in society.  He explains how the political issues we care about in society are changed, and take on new meanings and urgency, in the light of technological developments such as advances in robotics, AI, and biotechnology.  To understand the rights and wrongs of new technologies, he argues, we need to consider them from the perspective of political philosophy as well as ethics; doing so will help us to clarify the questions and issues which these technologies raise.

‘The Political Philosophy of AI’ sets out the theories of political philosophy chapter-by-chapter as they relate to the major elements of politics today – freedom, justice, equality, democracy, power, and the environment – and for each element explores the consequences that we can expect as AI becomes established in society.  This serves to frame the challenges that the technology will bring and act as an evaluative framework to assess its impacts.  Coeckelbergh also uses the analysis to develop a political philosophy for AI itself, which helps us to not only understand and question our political values but also gain a deeper insight into the nature of politics and humanity.

Coeckelbergh’s book asks questions rather than giving answers, and this may disappoint some readers.  But it is in line with his philosophical position that politics should be publicly discussed in a participative and inclusive way, rather than being subject to autocratic decisions made by a powerful minority.  That there is virtually no public debate about the wishes of the UK government and others to use AI to transform society says as much about our political system as it does about AI.

Read more

Loitering munitions, the Ukraine war, and the drift towards ‘killer robots’

A Switchblade loitering munition flies towards the target area. The operator views the video feed and then designates which target the munition should strike.

Loitering munitions are now hitting the headlines in the media as a result of their use in the Ukraine war.  Vivid descriptions of ‘kamikaze drones’ and ‘suicide drones’ outline the way in which these weapons operate: they are able to find targets and fly towards them before crashing into them and exploding.  Both Russia and Ukraine are deploying loitering munitions, which allow soldiers to fire on targets such as tanks and heavy armour without the predictability of a mortar or artillery round fired on a set trajectory.  Under some circumstances these ‘fire and forget’ weapons may be able to operate with a high degree of autonomy.  For example, they can be programmed to fly around autonomously in a defined search area and highlight possible targets, such as tanks, to the operator.  In these circumstances they can be independent of human control.  This trend towards increasing autonomy in weapons systems raises questions about how they might shape the future of warfare and the morality of their use.

Loitering munitions such as these have previously been used to military effect in Syria and the 2020 Nagorno-Karabakh war.  Although they are often described as drones, they are in many ways more like a smart missile than an uncrewed aircraft.  Loitering munitions were first developed in the 1980s and can be thought of as a ‘halfway house’ between drones and cruise missiles.  They differ from drones in that they are expendable and, unlike cruise missiles, they have the ability to loiter passively in the target area and search for a target.  Potential targets are identified using radar, thermal imaging, or visual sensor data and, to date, a human operator selects the target and executes the command to destroy it.  They are disposable, one-time use weapons intended to hunt for a target and then destroy it, hence their tag as ‘kamikaze’ weapons.  Dominic Cummings, former chief advisor to the Prime Minister, describes a loitering munition as a “drone version of the AK-47: a cheap anonymous suicide drone that flies to the target and blows itself up – it’s so cheap you don’t care”.  Read more

Military AI Audit: Congress scrutinises how the US is developing its warfighting AI capabilities

In February, the US Government Accountability Office (GAO), which audits and evaluates government activities on behalf of the US Congress, published a study examining the Department of Defense’s approach to developing and deploying artificial intelligence (AI) capabilities in weapon systems and assessing the current status of ‘war-fighting’ AI in the US military.  The GAO report gives an important insight into how the world’s most powerful military plans to use AI in combat.  It also raises a number of important ethical issues which our own Parliament should be investigating in relation to the UK’s military AI programmes.

The GAO study concludes that although the US Department of Defense (DoD) is “actively pursuing AI capabilities,” the majority of AI activities supporting warfighting (as opposed to undertaking business and maintenance tasks) remain at the research and development stage as the DoD attempts to address the differences between ‘AI’ and traditional computer software.  Research efforts are currently focused on developing autonomy for drones and other uncrewed systems, recognising targets, and providing recommendations to commanders on the battlefield.  Reflecting US interest in military AI, the budget for the DoD’s Joint AI Center has increased dramatically, from $89 million in 2019 to $278 million in 2021.  In total, the Joint AI Center has spent approximately $610 million on AI programmes over the past three years, although the GAO considers that it is too soon to assess the effectiveness of this spending.  Read more

Skyborg: AI control of military drones begins to take off

In June 2021, Skyborg took control of an MQ-20 Avenger drone during a military exercise in California.

The influential State of AI Report 2021, published in October, makes the alarming observation that the adoption of artificial intelligence (AI) for military purposes is now moving from research into the production phase.  The report highlights three indicators which it argues show this development, one of which is the progress that the US Air Force Research Laboratory is making in testing its autonomous ‘Skyborg’ system to control military drones.

Skyborg (the name is a play on the word ‘cyborg’ – a biological lifeform that has been augmented with technology such as bionic implants) is intended to be an AI ‘brain’ capable of controlling an aircraft in flight.  Initially, the technology is planned to assist a human pilot in flying the aircraft.

As is often the case with publicity material for military equipment programmes, it is not always easy to distinguish facts from hype or to penetrate the technospeak in which statements from developers are written.  However, news reports and press statements show that over the past year the US Air Force has for the first time succeeded in demonstrating an “active autonomy capability” during test flights of the Skyborg system, as a first step towards being able to use the system in combat.

Official literature on the system states that Skyborg is an “autonomous aircraft teaming architecture”, consisting of a core autonomous control system (ACS): a ‘brain’ comprising both hardware and software components which can be used both to assist the pilot of a crewed combat aircraft and to fly a swarm of uncrewed drones. The system is being designed by the military IT contractor Leidos, with input from the US Air Force and other Skyborg contractors.  It would allow the aircraft to autonomously avoid other aircraft, terrain, obstacles, and hazardous weather, and take off and land on its own. Read more

None too clever? Military applications of artificial intelligence

Drone Wars UK’s latest briefing looks at where and how artificial intelligence is currently being applied in the military context and considers the legal and ethical, operational and strategic risks posed.

Artificial Intelligence (AI), automated decision making, and autonomous technologies have already become common in everyday life and offer immense opportunities to dramatically improve society.  Smartphones, internet search engines, AI personal assistants, and self-driving cars are among the many products and services that rely on AI to function.  However, like all technologies, AI also poses risks if it is poorly understood, unregulated, or used in inappropriate or dangerous ways.

In current AI applications, machines perform a specific task for a specific purpose.  The umbrella term ‘computational methods’ may be a better way of describing such systems, which fall far short of human intelligence but have wider problem-solving capabilities than conventional software.  Hypothetically, AI may eventually be able to perform a range of cognitive functions, respond to a wide variety of input data, and understand and solve any problem that a human brain can.  Although this is a goal of some AI research programmes, it remains a distant prospect.

AI does not operate in isolation, but functions as a ‘backbone’ in a broader system to help the system achieve its purpose.  Users do not ‘buy’ the AI itself; they buy products and services that use AI or upgrade a legacy system with new AI technology.  Autonomous systems, which are machines able to execute a task without human input, rely on artificial intelligence computing systems to interpret information from sensors and then signal actuators, such as motors, pumps, or weapons, to cause an impact on the environment around the machine.  Read more
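As a purely illustrative sketch – not drawn from the briefing or from any real weapon or vehicle system – the short Python fragment below shows the generic sense-interpret-act loop described above, in which an AI model interprets sensor data and the interpretation is turned into actuator commands.  All of the names and types here are hypothetical placeholders.

```python
from dataclasses import dataclass

# Purely illustrative placeholder type - not any real system's API.
@dataclass
class Detection:
    label: str        # what the AI model thinks it has seen
    position: tuple   # where the object is relative to the machine

def autonomy_loop(read_sensor, interpret, actuate, max_steps=100):
    """Generic sense-interpret-act cycle: raw sensor data is interpreted by
    an AI model, and each interpretation is turned into an actuator command."""
    for _ in range(max_steps):
        raw = read_sensor()           # sense: gather raw data from the environment
        detections = interpret(raw)   # interpret: the AI assigns meaning to the data
        for det in detections:
            actuate(det)              # act: signal motors, pumps, or other actuators

# Minimal stand-in usage with a fixed "sensor" and a trivial "model".
autonomy_loop(
    read_sensor=lambda: "raw-frame",
    interpret=lambda raw: [Detection(label="obstacle", position=(3, 4))],
    actuate=lambda det: print("avoiding", det.label, "at", det.position),
    max_steps=1,
)
```

Laid out this way, the sketch also makes visible that nothing in the loop itself requires a human check between interpretation and action – which is precisely the kind of design question the briefing raises.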