Military AI Audit: Congress scrutinises how the US is developing its warfighting AI capabilities


In February, the US Government Accountability Office (GAO), which audits and evaluates government activities on behalf of the US Congress, published a study examining the Department of Defense’s approach to developing and deploying artificial intelligence (AI) capabilities in weapon systems and assessing the current status of ‘warfighting’ AI in the US military.  The GAO report gives an important insight into how the world’s most powerful military plans to use AI in combat.  It also raises a number of important ethical issues which our own Parliament should be investigating in relation to the UK’s military AI programmes.

The GAO study concludes that although the US Department of Defense (DoD) is “actively pursuing AI capabilities,” the majority of AI activities supporting warfighting (as opposed to undertaking business and maintenance tasks) remain at the research and development stage as the DoD attempts to address the differences between ‘AI’ and traditional computer software.  Research efforts are currently focused on developing autonomy for drones and other uncrewed systems, recognising targets, and providing recommendations to commanders on the battlefield.  Reflecting the US’s interest in military AI, the budget for the DoD’s Joint AI Center has increased dramatically from $89 million in 2019 to $278 million in 2021.  In total the Joint AI Center has spent approximately $610 million on AI programmes over the past three years, although the GAO considers that it is too soon to assess the effectiveness of this spending.  Read more

Drone Wars’ Select Committee submission on the use of military drones in countering migrant crossings

In September 2021 the prototype of the UK’s new armed drone flew from Scotland to undertake a mission involving a search pattern over the Channel.

Boris Johnson announced in mid-January that the armed forces were to take charge of limiting migrant crossings of the English Channel. The Times described it as one of a series of populist moves by the embattled PM to save his premiership.

Soon after, the Defence Select Committee announced that it would scrutinise the decision and sought submissions from interested parties:

“The Government’s decision that the Royal Navy should take over operations in the Channel has taken Parliament (and it seems the MOD) by surprise.  There are significant strategic and operational implications surrounding this commitment which need to be explored.”

Shockingly, both the Ministry of Defence and the Home Office refused to submit evidence or send ministers to answer questions from the Committee.

Our full submission to the Committee on this issue – looking in particular at how drones are often seen as a ‘solution’ – is available on their website, while here we offer a short summary.

  • Drone Wars argues that the military should not be involved in day-to-day border control operations in the absence of any threat of military invasion. Border control is primarily a policing and enforcement role, centred on dealing with civilians, and should be conducted by civilian agencies.  Military forces are not principally trained or equipped to deal with humanitarian or policing situations.  The UK’s borders are not a war zone, and civilians attempting to enter and leave the country are not armed combatants.

Read more

Skyborg: AI control of military drones begins to take off

In June 2021, Skyborg took control of an MQ-20 Avenger drone during a military exercise in California.

The influential State of AI Report 2021, published in October, makes the alarming observation that the adoption of artificial intelligence (AI) for military purposes is now moving from research into the production phase.  The report highlights three indicators which it argues show this development, one of which is the progress that the US Air Force Research Laboratory is making in testing its autonomous ‘Skyborg’ system to control military drones.

Skyborg (the name is a play on the word ‘cyborg’ – a biological lifeform that has been augmented with technology such as bionic implants) is intended to be an AI ‘brain’ capable of controlling an aircraft in flight.  Initially, the technology is planned to assist a human pilot in flying the aircraft.

As is often the case with publicity material for military equipment programmes, it is not always easy to distinguish facts from hype or to penetrate the technospeak in which statements from developers are written.  However, news reports and press statements show that over the past year the US Air Force has for the first time succeeded in demonstrating an “active autonomy capability” during test flights of the Skyborg system, as a first step towards being able to use the system in combat.

Official literature on the system states that Skyborg is an “autonomous aircraft teaming architecture”, consisting of a core autonomous control system (ACS): a ‘brain’ comprising both hardware and software components which can be used both to assist the pilot of a crewed combat aircraft and to fly a swarm of uncrewed drones. The system is being designed by the military IT contractor Leidos, with input from the US Air Force and other Skyborg contractors.  It would allow an aircraft to autonomously avoid other aircraft, terrain, obstacles, and hazardous weather, and to take off and land on its own. Read more

None too clever? Military applications of artificial intelligence

Drone Wars UK’s latest briefing looks at where and how artificial intelligence is currently being applied in the military context and considers the legal and ethical, operational and strategic risks posed.


Artificial Intelligence (AI), automated decision making, and autonomous technologies have already become common in everyday life and offer immense opportunities to dramatically improve society.  Smartphones, internet search engines, AI personal assistants, and self-driving cars are among the many products and services that rely on AI to function.  However, like all technologies, AI also poses risks if it is poorly understood, unregulated, or used in inappropriate or dangerous ways.

In current AI applications, machines perform a specific task for a specific purpose.  The umbrella term ‘computational methods’ may be a better way of describing such systems, which fall far short of human intelligence but have wider problem-solving capabilities than conventional software.  Hypothetically, AI may eventually be able to perform a range of cognitive functions, respond to a wide variety of input data, and understand and solve any problem that a human brain can.  Although this is a goal of some AI research programmes, it remains a distant prospect.

AI does not operate in isolation, but functions as a ‘backbone’ in a broader system to help the system achieve its purpose.  Users do not ‘buy’ the AI itself; they buy products and services that use AI or upgrade a legacy system with new AI technology.  Autonomous systems, which are machines able to execute a task without human input, rely on artificial intelligence computing systems to interpret information from sensors and then signal actuators, such as motors, pumps, or weapons, to cause an impact on the environment around the machine.  Read more
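To make this architecture concrete, the sketch below walks through a minimal sense-interpret-act loop of the kind described above.  It is purely illustrative: the names (SensorReading, Actuator, interpret and so on) are invented stand-ins rather than any real system’s API, and the ‘AI’ step is reduced to a hand-written rule where a real system would use a trained model.

    # A minimal, hypothetical sketch of the sense-interpret-act loop described
    # above. All names are invented stand-ins, not a real system's API; the
    # 'AI' step is a trivial rule where a real system would use a trained model.
    from dataclasses import dataclass

    @dataclass
    class SensorReading:
        """Raw input from the machine's environment (e.g. a rangefinder)."""
        obstacle_distance_m: float

    @dataclass
    class Actuator:
        """Stands in for a motor, pump, or other effector."""
        name: str

        def apply(self, command: str) -> None:
            # In a real machine this would drive hardware; here it just logs.
            print(f"{self.name}: {command}")

    def interpret(reading: SensorReading) -> str:
        """The 'AI' step: turn sensor data into a decision."""
        return "brake" if reading.obstacle_distance_m < 5.0 else "cruise"

    def control_loop(readings: list[SensorReading], actuator: Actuator) -> None:
        # Interpret each reading and signal the actuator without human input:
        # the defining feature of an autonomous system as described above.
        for reading in readings:
            actuator.apply(interpret(reading))

    control_loop([SensorReading(12.0), SensorReading(3.2)], Actuator("drive motor"))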

Reclaiming the technology juggernaut: A review of Azeem Azhar’s ‘Exponential’

  • Azeem Azhar, Exponential: How Accelerating Technology Is Leaving us Behind and What to Do About It, Cornerstone, 2021

The central message of Azeem Azhar’s new book, ‘Exponential’, is that technology is a force that humanity can direct, rather than a force which will enslave us.  This may seem optimistic, given the alarmingly fast rate of change which new technologies are bringing about in the world, but as well as explaining in clear terms why these changes are happening so fast and why this is a problem, the book also sets out a manifesto for how we can match technology to meet human needs and begin to address some of the social impacts of rapid change.

‘Exponential’ identifies four key technology domains which form the bedrock of the global economy and where capabilities are accelerating at ever-increasing rates while, at the same time, costs are plummeting.  The four technologies are computer science, where improvements are driven by faster processors and access to vast data sets; energy, where renewables are causing the price of generating power to drop rapidly; the life sciences, where gene sequencing and synthetic biology are allowing us to develop novel biological components and systems; and manufacturing, where 3D printing is enabling the rapid, localised production of anything from a concrete building to plant-based steaks.  These are all ‘general purpose technologies’: just like electricity, the printing press, and the car, they have broad utility and the potential to change just about everything.

However, while these technologies are taking off at an exponential rate, society has been unable to keep up.  Businesses, laws, markets, working patterns, and other human institutions have at the same time been able to evolve only incrementally and are struggling to adapt.  Azhar calls this the ‘exponential gap’ – the rift between the potential of the technologies and the different types of management that they demand.  Understanding the exponential gap can help explain why we are now facing technology-induced problems like market domination by ‘winner takes all’ businesses such as Amazon, the gig economy, and the spread of misinformation on social media.
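As a rough numerical illustration of that gap (our own arbitrary figures, not Azhar’s): compare a capability that doubles every year with an institution that improves by a fixed step over the same period.

    # An invented illustration of the 'exponential gap': technology that
    # doubles every year against an institution that adapts by a fixed
    # increment. The numbers are arbitrary and purely for illustration.
    technology = 1.0
    institution = 1.0
    for year in range(11):
        print(f"year {year:2d}: technology {technology:7.0f}, institution {institution:4.0f}")
        technology *= 2   # exponential: doubles each year
        institution += 1  # incremental: fixed step each year

After a decade the technology has grown a thousandfold while the institution has barely moved; that widening rift is the gap the book describes.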

The book details the impacts of the exponential growth in technology on business and employment as well as on geopolitical issues such as trade, conflict, and the global balance of power.  It shows how the ‘exponential gap’ is shaping relations between citizens and society through the power of tech giants which increasingly provide platforms for our conversations and relationships while collecting and commodifying data about us in order to manipulate our choices. Read more

Military applications at centre of Britain’s plans to be AI superpower

The UK government published its National AI Strategy in mid-September, billed as a “ten-year plan to make Britain a global AI superpower”.  Despite the hype, the strategy has so far attracted curiously little comment and interest from the mainstream media.  This is a cause for concern because if the government’s proposals bear fruit, they will dramatically change UK society and the lives of UK citizens.  They will also place military applications of AI at the centre of the UK’s AI sector.

The Strategy sets out the government’s ambitions to bring about a transition to an “AI-enabled economy” and develop the UK’s AI industry, building on a number of previously published documents – the 2017 Industrial Strategy, the 2018 AI Sector Deal, and the ‘AI Roadmap’ published by the AI Council earlier this year.  It sets out a ten-year plan based around three ‘pillars’: investing in the UK’s AI sector, bringing AI into the mainstream of the UK’s economy by introducing it across all economic sectors and regions of the UK, and governing the use of AI effectively.

Unsurprisingly, in promoting the Strategy the government makes much of the potential of AI technologies to improve people’s lives and solve global challenges such as climate change and public health crises – although it makes no concrete commitments in this respect.  Equally unsurprisingly, it has far less to say up front about the military uses of AI.  However, the small print of the document states that “defence should be a natural partner for the UK AI sector” and reveals that the Ministry of Defence is planning to establish a new Defence AI Centre, which will be “a keystone piece of the modernisation of Defence”, to champion military AI development and use and enable the rapid development of AI projects.  A Defence AI Strategy, expected to be published imminently, will outline how to “galvanise a stronger relationship between industry and defence”.  Read more