In a timely and welcome move, the House of Commons Foreign Affairs Select Committee has recently launched an inquiry into ‘Tech and the future of UK foreign policy’. Recognising that new and emerging technologies are fundamentally altering the nature of international relations, and that private technology companies wield rapidly growing influence, the Committee intends to focus on how the government, and particularly the Foreign, Commonwealth and Development Office (FCDO), should respond to the opportunities and challenges presented by new technologies.
A broad selection of stakeholders has already provided written evidence to the Committee, ranging from large technology and defence companies such as Microsoft, Oracle, and BAE Systems to academics and industry groups with specialist interests in the field. Non-governmental organisations, including ourselves, the International Committee of the Red Cross, Amnesty International UK, and the UK Campaign to Stop Killer Robots, have also provided evidence.
Not surprisingly, submissions from industry urge the government to support and push ahead with the development of new technologies, with Microsoft insisting that the UK “must move more quickly to advance broad-based technology innovation”, which it says will require “an even closer partnership between the government and the tech sector”. BAE Systems calls for “a united front [which] can be presented in promoting the UK’s overseas interests across both the public and private sectors”. Both BAE and Microsoft see roles for new technology in the military: BAE points out that “technology is also reshaping national security”, while Microsoft calls for “cooperation with the private sector in the context of NATO”.
However, on a positive note, there seems to be broad consensus among all respondents on the need for a responsible, ethically based approach to the use of technology and for the development of international norms and standards governing it. Microsoft argues that the UK “can lead on the responsible development of international standards” and “is also well-placed to have a prominent voice globally on the importance of ethics in technology through multilateral forums”. Civil society takes an even stronger view, with Amnesty International arguing that “what is required is much more effective oversight and regulation of the activities of technology companies to ensure that they conform to international norms in the sphere of human rights”.
In our evidence to the Committee, Drone Wars UK has focused on the future impacts of three areas of technology that are likely to be disruptive to international relations and problematic in their impact on human rights: drones, artificial intelligence (AI), and autonomous robotic technologies. All of these technologies undermine security and human rights norms by lowering the threshold for the use of force, transferring the risks and costs of war from soldiers to civilians, expanding the use of extra-judicial assassination, and helping to enable a state of permanent war. Our view is that government has a responsibility to mitigate the risks and address the ethical concerns arising from the development and use of new technologies, and in particular that the Ministry of Defence should not be allowed to shirk its obligation to control the development and use of harmful military technologies.
While the Integrated Review of Security, Defence, Development and Foreign Policy shows that the government tends to see new technology as a solution to many issues, especially security problems, our view is that technology should be seen as much as a problem as a solution. The UK should therefore use its influence to ensure that technology is used responsibly and in a way which strengthens human rights and freedoms rather than undermines them.
Doing this will require regulation of the development and use of new technologies. First and foremost should be a ban on lethal autonomous weapon systems and obligations in international law to ensure that weapon systems remain under meaningful human control at all times. We also propose that the UK introduce legislation similar to the EU’s new draft regulation on AI, which proposes banning AI systems considered a clear threat to the safety, livelihoods, and rights of people. Under the regulation, high-risk AI systems would be subject to strict obligations and would be recorded on a database managed by the European Commission. Measures to control the export of technology are also important: rather than taking a complacent view of the effectiveness of current arrangements for licensing the export of arms and technology, we urge the government to undertake a much-needed root-and-branch review of the process.
As well as answering the specific questions asked by the Committee, we have tried to raise broader issues of principle in our submission. We have stressed that in fields such as military action, policing, and border control, a ‘technology first’ approach can give rise to significant abuses of human rights. The use of technology in these fields is often led by military thinking, resulting in an over-reliance on militarised solutions rather than softer and more effective approaches. Our submission argues that the UK must consider alternative approaches to maintaining national security, based on placing the protection and well-being of people at the heart of security policy. This means working to improve the wider international situation and relations with rival nations.
In a capitalist economy the use of technology ultimately serves two basic purposes: to increase profit to the commercial sector and cut costs for the public sector. Both approaches raise risks that technology will be used in ways which erode human dignity. Our key take-home message for the committee is that the government must act to ensure that technology is used to enrich the quality of human life and not degrade it.