Military applications at centre of Britain’s plans to be AI superpower

The UK government published its National AI Strategy in mid-September, billed as a “ten-year plan to make Britain a global AI superpower”. Despite the hype, the strategy has so far attracted curiously little comment and interest from the mainstream media. This is a cause for concern, because if the government’s proposals bear fruit they will dramatically change UK society and the lives of UK citizens. They will also place military applications of AI at the centre of the UK’s AI sector.

The Strategy sets out the government’s ambitions to bring about a transition to an “AI-enabled economy” and develop the UK’s AI industry, building on a number of previously published documents: the 2017 Industrial Strategy, the 2018 AI Sector Deal, and the ‘AI Roadmap’ published by the AI Council earlier this year. It lays out a ten-year plan based around three ‘pillars’: investing in the UK’s AI sector, bringing AI into the mainstream of the UK’s economy by introducing it across all economic sectors and regions of the country, and governing the use of AI effectively.

Unsurprisingly, in promoting the Strategy the government makes much of the potential of AI technologies to improve people’s lives and solve global challenges such as climate change and public health crises, although it makes no concrete commitments in this respect. Equally unsurprisingly, it has far less to say up front about the military uses of AI. However, the small print of the document states that “defence should be a natural partner for the UK AI sector” and reveals that the Ministry of Defence is planning to establish a new Defence AI Centre, which will be “a keystone piece of the modernisation of Defence”, to champion military AI development and use and enable the rapid development of AI projects. A Defence AI Strategy, expected to be published imminently, will outline how to “galvanise a stronger relationship between industry and defence”.

Much of the support for the UK’s AI sector is based around creating “a pro-innovation business environment and capital market” and “attracting the best people and firms to set up shop in the UK”.  To support this, the government will launch a National AI Research and Innovation programme for academia and industry and will begin the process of training employees to use AI in a business setting.

More controversial are proposals for “ensuring innovators have access to the data and computing resources necessary”.  To do this, the government intends to “unlock the value of data across the economy”.  Reading between the lines, this means using personal data to train AI systems, spot anomalies, and allow the development of new data-enabled products and services.  This is an area of concern to many members of the public, as evidenced by the widespread opt-out of NHS patients from plans to lodge their clinical data in a new centralised NHS database where it would be available for analysis.  Steps are already under way to enable the use of personal data in AI applications: the government has published proposals to amend the current data protection regime (the UK GDPR) to allow data to be used as a “strategic asset” and to reform the Information Commissioner’s role to encourage the use of people’s data ‘responsibly’ to achieve economic and social goals. This view treats people’s data as a kind of oil: a commodity to be mined and extracted for economic gain without regard for the consequences.

With its emphasis on supporting the AI industry, the Strategy leaves open the questions of how the government intends to manage the risks and harms presented by AI and how the technology will be regulated.

It states the government’s intention to build “the most pro-innovation regulatory environment in the world”, a phrase which should set alarm bells ringing for anyone who wishes to see meaningful control over the AI industry to prevent new technologies from harming the public in currently unforeseen ways.  The UK’s approach to AI regulation will be set out in a white paper to be published in early 2022, although it is not yet clear whether the government will introduce new AI-specific regulations or a dedicated AI regulator.  Whereas the European Union is known for its hard line against Big Tech, and currently envisages introducing new risk-based regulation to protect human rights and safety from the challenges of AI, the UK government has to date soft-pedalled in its dealings with the sector.

If the government gets its way, the measures set out in the National AI Strategy to develop AI and expand the ways in which it is used will irreversibly shape the future of the UK.  However, the vision set out in the Strategy is not one which appears to put people before profit or make the world a fairer, more peaceful and more equal place.  The Strategy pays lip service to responsibility and ethics, but it is clear that its aim is principally to serve business and the economy, with our rights as individuals taking second place.  We need a much better vision for the future: one where technology works for humanity, not the other way around.