A special investigation set up by the House of Lords is now taking evidence on the development, use and regulation of artificial intelligence (AI) in weapon systems. The stand-alone Select Committee, chaired by crossbench peer Lord Lisvane, a former Clerk of the House of Commons, is considering the utility and risks arising from military uses of AI.
The committee is seeking written evidence from members of the public and interested parties, and recently conducted the first of its oral evidence sessions. Three specialists in international law answered a variety of questions about whether autonomous weapon systems could comply with international law and how they might be controlled at the international level: Noam Lubell of the University of Essex; Georgia Hinds, Legal Adviser at the International Committee of the Red Cross (ICRC); and Daragh Murray of Queen Mary University of London.
One of the more interesting issues raised during the discussion was the point that, regardless of military uses, AI has the potential to inflict a broad range of harms across society, and that there is a need to address this concern rather than race on blindly with the development and roll-out of ever more powerful AI systems. This is a matter which is beginning to attract wider attention. Last month the Future of Life Institute published an open letter calling on all AI labs to immediately pause the training of AI systems more powerful than GPT-4 for at least six months. Over 30,000 researchers and tech sector workers have signed the letter to date, including Stuart Russell, Steve Wozniak, Elon Musk, and Yuval Noah Harari.
Leaving aside whether six months would be long enough to resolve issues around AI safety, there is an important question to be answered here. Regardless of what the future might hold, there are already numerous examples of cases where existing computerised and AI systems have caused harm. Why, then, are we racing forward in this field? Has the combination of tech multinationals and unrestrained capitalism become such an unstoppable juggernaut that humanity is no longer able to control where the forces we have created are taking us? If not, then why won't governments intervene to put the brakes on the development and use of AI, and what interests are they actually working to protect? This is unlikely to be a line of inquiry the Lords Committee will be pursuing.