Drone Wars UK’s latest briefing looks at where and how artificial intelligence is currently being applied in the military context and considers the legal, ethical, operational, and strategic risks this poses.
Artificial Intelligence (AI), automated decision making, and autonomous technologies have already become common in everyday life and offer immense opportunities to dramatically improve society. Smartphones, internet search engines, AI personal assistants, and self-driving cars are among the many products and services that rely on AI to function. However, like all technologies, AI also poses risks if it is poorly understood, unregulated, or used in inappropriate or dangerous ways.
In current AI applications, machines perform a specific task for a specific purpose. The umbrella term ‘computational methods’ may be a better way of describing such systems, which fall far short of human intelligence but have wider problem-solving capabilities than conventional software. Hypothetically, AI may eventually be able to perform a range of cognitive functions, respond to a wide variety of input data, and understand and solve any problem that a human brain can. Although this is a goal of some AI research programmes, it remains a distant prospect.
AI does not operate in isolation, but functions as a ‘backbone’ in a broader system to help the system achieve its purpose. Users do not ‘buy’ the AI itself; they buy products and services that use AI or upgrade a legacy system with new AI technology. Autonomous systems, which are machines able to execute a task without human input, rely on artificial intelligence computing systems to interpret information from sensors and then signal actuators, such as motors, pumps, or weapons, to cause an impact on the environment around the machine.
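The sensor-to-actuator chain described above can be illustrated with a minimal sketch of a sense, interpret, act cycle. All names, classes, and thresholds here are hypothetical assumptions for illustration only; they are not drawn from any real autonomous system:

```python
# Illustrative sketch of an autonomous system's control cycle:
# a sensor reading is interpreted by an AI component, which then
# signals an actuator -- with no human input in the loop.
# All names and values are hypothetical.

from dataclasses import dataclass


@dataclass
class Reading:
    """Raw data from a single sensor (assumed: range to nearest obstacle)."""
    distance_m: float


def interpret(reading: Reading) -> str:
    """Stand-in for the AI 'backbone': map sensor data to a decision."""
    return "stop" if reading.distance_m < 5.0 else "proceed"


def actuate(decision: str) -> str:
    """Stand-in for signalling an actuator such as a motor or pump."""
    return {"stop": "motor off", "proceed": "motor on"}[decision]


def control_step(reading: Reading) -> str:
    # One full cycle: sense -> interpret -> act.
    return actuate(interpret(reading))
```

The point of the sketch is structural: the ‘intelligence’ sits between sensing and acting, and the quality of the overall system depends on how well that interpretation step handles the inputs it receives.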