
In February, the US Government Accountability Office (GAO), which audits and evaluates government activities on behalf of the US Congress, published a study examining the Department of Defense’s approach to developing and deploying artificial intelligence (AI) capabilities in weapon systems and assessing the current status of ‘warfighting’ AI in the US military. The GAO report gives an important insight into how the world’s most powerful military plans to use AI in combat. It also raises a number of important ethical issues which Parliament should be investigating in relation to the UK’s own military AI programmes.
The GAO study concludes that although the US Department of Defense (DoD) is “actively pursuing AI capabilities,” the majority of AI activities supporting warfighting (as opposed to undertaking business and maintenance tasks) remain at the research and development stage as the DoD attempts to address the differences between ‘AI’ and traditional computer software. Research efforts are currently focused on developing autonomy for drones and other uncrewed systems, recognizing targets, and providing recommendations to commanders on the battlefield. Reflecting US interest in military AI, the budget for the DoD’s Joint AI Center has increased dramatically from $89 million in 2019 to $278 million in 2021. In total the Joint AI Center has spent approximately $610 million on AI programmes over the past three years, although the GAO considers that it is too soon to assess the effectiveness of this spending.
The potential uses of AI in warfighting include analysing intelligence, surveillance, and reconnaissance information; fusing data to provide a common operating picture on the battlefield; supporting semi-autonomous and autonomous vehicles; and operating lethal autonomous weapon systems. To date, AI has been used by military forces mainly for analysing intelligence, enhancing weapon platforms such as aircraft and ships that do not require human operators, and making recommendations on battlefield tactics, such as where to move troops.
Current Programmes
As of April 2021 the DoD had identified 685 AI projects that it was working on, including, but not limited to, those supporting warfighting, although it was unable to provide estimates of the funding associated with these projects. The total of 685 projects includes schemes where an element of AI is part of a broader programme, but does not include security classified projects or projects funded through operations and maintenance budgets. Some 88% of the projects in the inventory were identified from research and development budget documentation, supporting the view that most of the US military’s AI capabilities are still under development. Most of the 685 identified projects are not yet aligned to specific weapon systems but have the potential to be applied broadly across multiple weapon platforms.
However, the report does give examples of some of the specific AI projects that the US military is currently working on. For example, the Joint AI Center is developing a targeting capability known as project ‘Smart Sensor’, a video processing AI prototype that can be fitted to drones and is trained to identify threats and transmit video of them back to human analysts. Approximately $50 million was committed to this project during the last fiscal year. Meanwhile the US Air Force has demonstrated an AI capability named Artuµ that was able to act as a sensor operator for a U-2 spy plane searching for enemy missile launchers during a simulated mission. Developed in just 35 days, Artuµ was successful in its initial demonstration flight but would require significantly more training to be operational in a real-world environment.
Project Maven – formally known as the Algorithmic Warfare Cross-Functional Team – was set up in 2017 to use computer vision machine learning AI capabilities to analyse large volumes of full-motion video and identify objects of interest. Project Maven’s capabilities were subsequently used in combat operations in the Middle East. The US Army is now developing an AI target recognition capability, known as Scarlet Dragon, that uses data from Project Maven to support airborne combat operations. The programme is mainly funded through Project Maven and is being exercised in live fire drills every ninety days. A demonstration conducted in October 2021 used the Scarlet Dragon system across various Army, Air Force, and Navy weapon platforms to identify and destroy targets. The Army is developing a similar AI capability, known as Prometheus, to sense and identify targets using satellite imagery and space-based capabilities, while the Marine Corps has also been working to incorporate algorithms developed as part of Project Maven into its own capabilities and to modernize existing weapon systems, for example by integrating AI target sensors onto uncrewed aerial vehicles.
The DoD is now starting to explore the development of deep learning neural networks, the technology that underpins virtual assistants such as Apple’s Siri and Amazon’s Alexa, although it does not yet have such systems in use. The GAO report states that, in the view of Army Research Laboratory officials, the DoD’s AI is currently “not anywhere near being able to outthink a human.”
Difficulties
The GAO study predicts that the DoD can expect to face difficulties in transitioning new AI capabilities into day-to-day service. This is partly because the armed forces require a higher level of technology maturity than the research and development community is willing to fund and develop. There are also challenges in protecting weapons and systems from cyber threats as they become increasingly dependent on software: commercial firms and researchers have documented attacks involving evasion, data poisoning, model replication, and the exploitation of traditional software flaws to compromise AI systems and render them ineffective. A further problem is a ‘digital talent deficit’, which limits the ability of contractors to hire staff with the necessary expertise in software development and hampers military personnel in developing, buying, or using AI systems. Finally, there is a lack of cross-service digital infrastructure to support AI at scale within the military.
In addition to these challenges, the GAO has highlighted risks arising from a shortage of data sets suitable for developing AI systems: even when data are available, they are often raw and unlabelled, and so unusable for training.
Integrating AI into existing weapon platforms also poses difficulties. In future combat situations enemies are expected to use electronic warfare techniques to disrupt communications with supporting digital infrastructure, meaning that AI capabilities embedded in weapon platforms must be able to function without such connectivity. However, this adds to space, weight, and electrical power requirements that it may not always be possible to meet.
More fundamentally, the GAO points out that military personnel may hesitate to trust AI systems. Complex systems may be unable to provide the information a user needs to understand and trust a decision or recommendation. The more advanced an AI capability is, the harder it is to understand and explain why it produces a certain output, and users may not have the necessary understanding of the technology, its development processes, and its operational methods to be able to trust that output. There may also be concerns about resilience, security, and privacy, raising ethical questions about the system’s use.
Implications for the UK
The UK government, like the US, sees AI as playing an important role in the future of warfighting. The UK’s 2021 Integrated Review of Security, Defence, Development and Foreign Policy sets out the government’s priority for “identifying, funding, developing and deploying new technologies and capabilities faster than our potential adversaries,” presenting AI and other scientific advances as “battle-winning technologies”. As yet this position has received little or no scrutiny from Parliamentarians, whether through direct questions to the government, inquiries by Parliamentary Committees, or investigations by the National Audit Office.
The Integrated Review also pledged to “publish a defence AI strategy and invest in a new centre to accelerate adoption of this technology”. A year later there is still no sign of this strategy, and its publication has been deferred until sometime in “the first half of 2022.” In the meantime, important decisions on the adoption of military AI are being taken in the apparent absence of any strategy or ethical guidelines.
The GAO study has highlighted that implementing AI programmes for warfighting is not a straightforward matter and does not bring immediate results. Given the importance the UK government has attached to the development of military AI, there is surely a need for Parliament to investigate the matter.
A good start would be for the House of Commons Defence Committee to show more leadership in this area. The Committee could enquire into what military AI capabilities the government wishes to acquire and how these will be used, especially in the long term. An important part of such an investigation would be consideration of whether AI capabilities could be developed and regulated so that the armed forces use them in an ethically acceptable way. Funding is also an important area for scrutiny, and the Ministry of Defence should be asked to provide a costed inventory of the AI projects on which it is currently working. It is important for both MPs and peers to scrutinise and question the Ministry of Defence’s intentions with regard to warfighting AI now – while initiatives are still at an early stage and while it is still possible to establish ground rules and principles – rather than wait until it is too late.