The US killed 11 people in a reported drone strike on a small boat in the Caribbean Sea on 3 September. Although it has not been confirmed that the strike was carried out by a drone, President Trump shared drone footage of the strike on his social media. In August it was revealed that Trump had secretly signed a directive ordering the Pentagon to begin military operations against drug cartels.
Screen grab from drone video shared by President Trump.
While US officials alleged that the boat targeted was carrying drugs being transported by members of the Tren de Aragua cartel, multiple legal scholars and experts have argued that the strike was “manifestly unlawful.”
Professor Luke Moffett of Queen's University Belfast told the BBC that while “force can be used to stop a boat, generally this should be non-lethal measures.” Any use of force must be “reasonable and necessary in self-defence where there is immediate threat of serious injury or loss of life to enforcement officials.” The US and other states regularly stop boats in international waters as part of law enforcement activity without resorting to the use of lethal force.
Much more significant, however, is the grave violation of international law that is the deliberate, premeditated targeting of civilians. Claire Finkelstein, professor of national security law at the University of Pennsylvania, said “There’s no authority for this whatsoever under international law. It was not an act of self-defense. It was not in the middle of a war. There was no imminent threat to the United States.” Finkelstein went on to make the clear and obvious connection between the strike and the ongoing, two-decade-long US drone targeted killing programme, which has significantly blurred the lines between law enforcement and armed conflict.
While the US alleges that the occupants of the boat were members of an organised criminal gang, and President Trump and other administration officials have begun to talk publicly about the threat of ‘narco-terrorists’, that in no way makes the targets of this strike combatants under the laws of war. While civilians are regularly and persistently victims of drone and air strikes, the deliberate targeting of non-combatants is still shocking.
New York University law professor Ryan Goodman, who previously worked as a lawyer in the Pentagon, told the New York Times that “It’s difficult to imagine how any lawyers inside the Pentagon could have arrived at a conclusion that this was legal rather than the very definition of murder under international law rules that the Defense Department has long accepted.”
In the aftermath of the strike and questioning by the media, administration officials struggled to justify the legality of the strike, resorting to arguing that it was a matter of self-defence. Significantly, senior officials said that further such operations were likely.
Trump and the MTCR
Meanwhile, President Trump is reportedly returning to a plan formulated during his first administration to overturn controls on the export of US armed drones. As we reported at the time, Trump attempted in 2020 to get the other state signatories of the Missile Technology Control Regime (MTCR) to accept that Predator/Reaper-type drones should be moved out of the most strongly controlled group (Category I) into the lesser group (Category II). Other states, however, gave this short shrift, much to Trump’s annoyance.
According to the Reuters report, the new move involves “designating drones as aircraft… rather than missile systems”, which will enable the US to “sidestep” its treaty obligations. The move will aid US plans to sell hundreds of armed drones to Saudi Arabia, the UAE and Qatar.
Whether this will convince other states is highly doubtful, but it is likely that Trump and his administration will not care. Such a move will of course open the floodgates for other states to unilaterally reinterpret arms control treaties in their favour in the same way, and will also likely spur the proliferation of armed drones, which will only further increase civilian harm.
The current government places a central emphasis on technology and innovation in its evolving national security strategy and wider approach to governance. Labour proposes reviving a struggling British economy through investment in defence, with artificial intelligence (AI) featuring as an important component. Starmer’s premiership seems to align several objectives: economic growth, defence industrial development and technological innovation.
Rachel Reeves and John Healey hold roundtable with military company bosses, in front of Reaper drone at RAF Waddington, Feb 2025. Image: MoD
Taken together, these suggest that the government is positioning AI primarily in the context of war and defence innovation. This not only risks undermining the government’s stated ambitions of stability and economic growth; it also signals a strategy that prioritises speed over scrutiny, to the neglect of important ethical concerns.
The private defence industry has been positioned as an important pillar of this strategy. Before the Strategic Defence Review (SDR) was published, Chancellor Rachel Reeves and Defence Secretary John Healey initiated a Defence and Economic Growth Task Force to drive UK growth through defence capabilities and production. Arms companies are no longer presented merely as vital for national security but as engines of future prosperity. AI is central to this and is consistently highlighted in government communications: John Healey has explicitly acknowledged that AI will increasingly power the British military, whilst Keir Starmer has stated that AI ‘will drive incredible change’.
UK focusing AI on military applications
The AI Action Plan, released in January 2025, explicitly links AI to economic growth. Although this included references to ‘responsible use and leadership’, the government has now shifted its emphasis towards military applications at the expense of crucial policy areas. On 4 July, the Science and Technology Secretary Peter Kyle wrote to the Alan Turing Institute – Britain’s premier AI research organisation – to refocus its research on military applications of AI. The Institute’s prior research agenda spanned environmental sustainability, health and national security; under this new directive its priorities are being fundamentally narrowed.
BAE Systems Project Odyssey uses AI and VR to make training ‘more realistic’. Image: BAE Systems
Relatedly, the Industrial Strategy released by the government aims to ‘embolden’ the UK’s digital and technologies economy, with £500 million to be delivered through a sovereign AI unit – this, however, will be focused on ‘building capacity in the most strategically important areas’. Given Peter Kyle’s redirection and the overwhelming emphasis the government has placed on AI’s productive capacity in war, it is clear that, in the case of the Alan Turing Institute, AI research for defence will come at the cost of socially beneficial research.
Take Britain’s bleak economic outlook: sluggish productivity; post-Brexit stagnation; strained public finances; mounting government debt repayments; surging costs of living and inflating house prices. There is little evidence to suggest that defence-led growth will yield impactful returns on this catalogue of challenges. No credible economist is going to advise, in the face of these challenges, that investing in defence and redirecting AI research in the name of national security is going to give a better return on investment.
Research conducted in America shows that investment in health, education, infrastructure and green technology is more likely to deliver better returns, both for individual incomes and for the country’s broader prospects. Similarly, Lord Blunkett (a former minister under Blair) pointed out that without GDP growth, raising defence spending as a share of GDP may not increase the actual funding.
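To make the arithmetic behind Blunkett’s point explicit, here is a minimal illustration using hypothetical round figures of our own (not official projections): the cash defence budget is simply the pledged GDP share multiplied by GDP, so the value of any percentage pledge depends on what GDP actually does.

```latex
% Illustrative arithmetic only: hypothetical round figures, not official projections.
% Cash defence budget D = pledged share s x GDP Y.
\[
D = sY
\]
% Lower share of a growing economy:
\[
0.023 \times \pounds 2.6\,\mathrm{tn} \approx \pounds 59.8\,\mathrm{bn}
\]
% Higher share of a stagnant economy:
\[
0.025 \times \pounds 2.4\,\mathrm{tn} = \pounds 60\,\mathrm{bn}
\]
% A higher percentage of a flat or shrinking GDP can leave cash funding essentially unchanged.
```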
Concerning applications to health, in August the World Economic Forum reported AI’s striking potential in doubling accuracy in the examination of brain scans in stroke patients, detecting fractures often missed in overstretched departments, and predicting diseases with high confidence. This is critical given the NHS’s persistent challenges: long waiting times, underfunding, regional inequality, staff shortages and bureaucratic inertia.
Health and economic growth are closely related: healthier individuals are more productive, children attend school more consistently, and preventative care lowers long-term costs – fundamentally, strong health systems add value to the economy and to our lives. Yet health is just one example. We are in the embryonic stages of AI development, and by prioritising research on military applications over civilian ones with public value, the government risks undermining, not fuelling, long-term economic growth.
Crucially, framing arms companies as a major engine of economic growth is wildly misleading and economically unfounded. Arms sales account for 0.004% of the Treasury’s total revenue and the defence industrial base accounts for only 1% of UK economic output. The sector is highly monopolised, so the benefits of ‘growth’ are concentrated among a handful of dominant corporations. Even then, the profit generated will not be reinvested in the UK. The biggest arms company in the UK – BAE Systems – is essentially a joint US-UK company, with most of its capital invested in the US and its major shareholders being US investment companies such as BlackRock.
Prioritising speed over scrutiny
Beyond the economics, this is part of a wider strategy that signals a growing dismissal of ethical concerns, prioritising speed over scrutiny. The SDR acknowledged that technology is outpacing regulatory frameworks, noting that ‘the UK’s competitors are unlikely to adhere to common ethical standards’. In April 2025, Matthew Clifford – AI adviser to the PM – was quoted as saying ‘speed is everything’. While the Ministry of Defence promised in 2022 to take an ‘ambitious, safe and responsible’ approach to the development of military AI, the current emphasis on speed sidelines important ethical concerns in the rush for military-technological superiority.
Militarily, the SDR sets out plans to invest in drones and autonomous systems, and commits £1 billion to a ‘digital targeting web’. A key foundational principle of international humanitarian law is the protection of civilians and their distinction from military targets. An AI-enabled ‘digital targeting web’ – like the one proposed in the SDR – connects sensors and weapons, enabling faster detection and faster killing. These networks would be able to identify and suggest targets faster than humans ever could, leaving soldiers, in the best case, minutes, and in the worst case, seconds to decide whether a drone should kill.
Digital Warfare: US and UK forces at the Combined Air Operations Center (CAOC), Al Udeid Air Base, Qatar.
One notable example is the Maven Smart System, recently procured by NATO. According to the US think tank the Center for Security and Emerging Technology, the system enables small armies to make ‘1,000 tactical decisions per hour’. Some legal scholars have pointed out that the prioritisation of speed within AI-powered battleground technology raises questions about the preservation of meaningful human control and restraint in warfare. Israeli use of AI-powered automated targeting systems such as ‘Lavender’ during its assault and occupation of Gaza is illustrative of this point. Such systems have been highlighted as one of the factors behind the shockingly high civilian death toll there.
This problem is compounded by recent research showing that large language models are prone to ‘hallucinate’ – producing outputs that are erroneous or simply made up. As these systems become embedded within military decision-making chains, the risk of escalation due to technical failure increases dramatically. A false signal, a misread sensor or a corrupted database could lead to erroneous targeting or unintended conflict escalation.
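As a purely abstract sketch – written in Python with entirely hypothetical names and thresholds, and not modelled on any real military system – the snippet below shows the structural difference at stake here: in a fully automated chain a spurious model output flows straight through to an action, whereas a human-review gate gives a person the chance to catch it. A confidence threshold is of course not sufficient on its own, since models can be confidently wrong, but it illustrates where meaningful human control sits in the chain.

```python
# Abstract illustration only: hypothetical names and thresholds, not any real system.
from dataclasses import dataclass


@dataclass
class ModelOutput:
    label: str         # what the model claims to have detected
    confidence: float  # the model's own (possibly miscalibrated) confidence


def fully_automated_chain(output: ModelOutput) -> str:
    # No human in the loop: a hallucinated or corrupted output flows
    # straight through to an action.
    return f"action taken on '{output.label}'"


def human_gated_chain(output: ModelOutput, review_threshold: float = 0.99) -> str:
    # A human-review gate: anything below a (deliberately strict) threshold
    # is routed to a person instead of being acted on automatically.
    if output.confidence < review_threshold:
        return f"held for human review: '{output.label}' ({output.confidence:.2f})"
    return f"action taken on '{output.label}'"


if __name__ == "__main__":
    # A spurious detection, e.g. from a misread sensor or a hallucinated output.
    spurious = ModelOutput(label="object of interest", confidence=0.62)
    print(fully_automated_chain(spurious))  # the error propagates into an action
    print(human_gated_chain(spurious))      # the error is caught before any action
```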
In sum, the UK’s current approach – predominantly framing AI’s utility through the lens of defence – risks squandering its broader social and economic potential. The redirection of public research institutes, the privileging of AI investment in military applications (or so-called ‘strategic areas’) and the emphasis on speed over scrutiny raise serious concerns. Ethically, the erosion of meaningful human control in battlefield decision-making, the risk of AI-driven conflict escalation and the disregard of international humanitarian principles point to a troubling trajectory. The UK risks drifting towards the ethical standards of Russia and Israel in its use of military AI. A government approach to AI grounded in human security (freedom from fear and want), not war, is not only more ethical but far more likely to generate sustainable economic growth for the United Kingdom.
Matthew Croft is a postgraduate student at King’s College London studying Conflict, Security and Development, with a particular interest in the ethics of national security and the politics of technology.
The use of artificial intelligence (AI) for the purposes of warfare through the development of AI-powered autonomous weapon systems – ‘killer robots’ – “is one of the most controversial uses of AI today”, according to a new report by an influential House of Lords Committee.
The committee, which spent ten months investigating the application of AI to weapon systems and probing the UK government’s plans to develop military AI systems, concluded that the risks from autonomous weapons are such that the government “must ensure that human control is consistently embedded at all stages of a system’s lifecycle, from design to deployment”.
Echoing concerns which Drone Wars UK has repeatedly raised, the Lords found that the stated aspiration of the Ministry of Defence (MoD) to be “ambitious, safe, responsible” in its use of AI “has not lived up to reality”, and that although MoD has claimed that transparency and challenge are central to its approach, “we have not found this yet to be the case”.
The cross-party House of Lords Committee on AI in Weapon Systems was set up in January 2023 at the suggestion of Liberal Democrat peer Lord Clement-Jones, and started taking evidence in March. The committee heard oral evidence from 35 witnesses and received nearly 70 written evidence submissions, including evidence from Drone Wars UK.
The committee’s report is entitled ‘Proceed with Caution: Artificial Intelligence in Weapon Systems’, and ‘proceed with caution’ gives a fair summary of its recommendations. The panel was drawn entirely from the core of the UK’s political and military establishment, and at times some members appeared to have difficulty grasping the technical concepts underpinning autonomous weapons. Under the circumstances the committee was never remotely likely to recommend that the government should not commit to the development of new weapon systems based on advanced technology, and in many respects its report provides a road-map setting out the committee’s views on how the MoD should go ahead in integrating AI into weapon systems and build public support for doing so.
Nevertheless, the committee has taken a sceptical view of the advantages claimed for autonomous weapon systems; has recognised the very real risks that they pose; and has proposed safeguards to mitigate the worst of the risks, alongside a robust call for the government to “lead by example in international engagement on regulation of AWS [autonomous weapon systems]”. Despite hearing from witnesses who argued that autonomous weapons “could be faster, more accurate and more resilient than existing weapon systems, could limit the casualties of war, and could protect ‘our people from harm by automating “dirty and dangerous” tasks’”, the committee was apparently unconvinced, concluding that “although a balance sheet of benefits and risks can be drawn, determining the net effect of AWS is difficult” – and that “this was acknowledged by the Ministry of Defence”.
Perhaps the most important recommendation in the committee’s report relates to human control over autonomous weapons. The committee found that:
The Government should ensure human control at all stages of an AWS’s lifecycle. Much of the concern about AWS is focused on systems in which the autonomy is enabled by AI technologies, with an AI system undertaking analysis on information obtained from sensors. But it is essential to have human control over the deployment of the system both to ensure human moral agency and legal compliance. This must be buttressed by our absolute national commitment to the requirements of international humanitarian law.
Papers released to Drone Wars UK by the Ministry of Defence (MoD) under the Freedom of Information Act reveal that progress in preparing ethical guidance for MoD staff working on military artificial intelligence (AI) projects is proceeding at a snail’s pace. As a result, MoD’s much-vaunted AI strategy and ethical principles are at risk of failing as the department races ahead to develop AI as a key military technology.
Minutes of meetings of MoD’s Ethical Advisory Panel show that although officials have repeatedly stressed the need to focus on implementation of AI programmes, the ethical framework and guidelines needed to ensure that AI systems are safe and responsible are still only in draft form and there is “not yet a distinct sense of a clear direction” as to how they will be developed.
The FOI papers also highlight concerns about the transparency of the panel’s work. Independent members of the panel have repeatedly stressed the need for the panel to work in an open and transparent manner, yet MoD refuses to publish the terms of membership, meeting minutes, and reports prepared for the panel. With the aim of remedying this situation, Drone Wars UK is publishing the panel documents released in response to our FOI request as part of this blog article (see pdf files at the end of the article).
The Ministry of Defence AI Ethics Advisory Panel
One of the aims of the Defence Artificial Intelligence Strategy, published in June 2022, was to set out MoD’s “clear commitment to lawful and ethical AI use in line with our core values”. To help meet this aim MoD published a companion document, entitled ‘Ambitious, safe, responsible’, alongside the strategy to represent “a positive blueprint for effective, innovative and responsible AI adoption”.
‘Ambitious, safe, responsible’ had two main foundations: a set of ethical principles to guide MoD’s use of AI, and an Ethics Advisory Panel, described as “an informal advisory board”, to assist with policy relating to the safe and responsible development and use of AI. The document stated that the panel had assisted in formulating the ethical principles and listed the members of the panel, who are drawn from within the Ministry of Defence and the military, from industry, and from universities and civil society.
The terms of reference for the panel were not published in the ‘Ambitious, safe, responsible’ document, but the FOI papers provided to Drone Wars UK show that it is tasked with advising on:
“The development, maintenance and application of a set of ethical principles for AI in Defence, which will demonstrate the MOD’s position and guide our approach to responsible AI across the department.
“A framework for implementing these principles and related policies / processes across Defence.
“Appropriate governance and decision-making processes to assure ethical outcomes in line with the department’s principles and policies”.
The ethical principles were published alongside the Defence AI Strategy, but more than two years after the panel first met – and despite a constant refrain at panel meetings on the need to focus on implementation – it has yet to make substantial progress on the second and third of these objectives. An implementation framework and the associated policies, governance and decision-making processes have yet to appear. This appears in no way to be due to shortcomings on the part of the panel, whose members seem to have a keen appetite for the work, but rather is the result of slow progress by MoD. In the meantime, work is proceeding at full speed on the development of AI systems in the absence of these key ethical tools.
The work of the panel
The first meeting of the panel, held in March 2021, was chaired by Stephen Lovegrove, the then Permanent Secretary at the Ministry of Defence. The panel discussed the MoD’s work to date on developing an AI Ethics framework and the panel’s role and objectives. The panel was to be a “permanent and ongoing source of scrutiny” and “should provide expert advice and challenge” to MoD, working through a regular quarterly meeting cycle.
From top: Israeli Harop, Iranian Shahed-136, Polish Warmate.
Over the past few years – and particularly in the ongoing war in Ukraine – we have seen a rise in the use of what have become known as ‘kamikaze’ or ‘suicide’ drones. Two excellent new reports examining these systems have just been released: ‘One-Way Attack Drones: Loitering Munitions of the Past and Present’ by Dan Gettinger, formerly of the Bard Drone Center, and ‘Loitering Munitions and Unpredictability’ by Ingvild Bode & Tom Watts. Between them they examine the history and current use of such systems, as well as the growing concern about their increasing autonomy.
A drone by any other name…?
Firstly, to address the elephant in the room: are these systems ‘drones’?
Naming has always been a keenly fought aspect of the debate about drones, with sometimes bitter conflict over whether such platforms should be called ‘unmanned aerial vehicles’ (UAVs), ‘remotely piloted air systems’ (RPAS) or ‘drones’. ‘Drones’ has been the term that has stuck, particularly in the mainstream media, but it is regularly used interchangeably with UAV (with ‘unmanned’ being replaced in recent years by ‘uncrewed’ for obvious reasons). While many in the military now accept the term ‘drone’ and are happy to use it depending on the audience, some continue to insist that it belittles both the capabilities of these systems and those who operate them.
Whichever term is used, a further aspect of the naming debate is that an increasing number and type of military aerial systems are being labelled as ‘drones’. While all these systems have significant characteristics in common (aerial systems, unoccupied, used for surveillance/intelligence gathering and/or attack), they can also be very different in terms of size and range; can carry out very different missions; have different effects; and raise different legal and ethical issues.
One type of such system is the so-called ‘suicide’ or ‘kamikaze’ drone – perhaps better labelled the ‘one-way attack’ drone. There are several different categories of this type of drone. While they are used to carry out remote lethal attacks and therefore have significant aspects in common with the much larger Reaper or Bayraktar drones, they differ significantly in that they are not designed to be re-used: they are expendable, as the warhead is part of the structure of the system, which is destroyed in use. Importantly, while loitering munitions are a sub-set of ‘one-way attack’ drones, not all one-way attack drones are loitering munitions.
Dan Gettinger’s report ‘One-Way Attack Drones: Loitering Munitions of the Past and Present’ helpfully sets out a history of the development of these systems and identifies three sub-categories: anti-radar systems, portable or ‘backpackable’ systems, and Iranian systems. He has compiled a helpful dataset of over 200 such systems (although not all are currently in operation). All of these, he argues, can be traced back to “the transition from the era of jet-powered target drones to that of remotely piloted vehicles.”
On 3 November 2002, a US Predator drone targeted and killed Qa’id Salim Sinan al-Harithi, a Yemeni member of al-Qaeda whom the CIA believed to be responsible for the attack on the USS Cole in which 17 US sailors were killed. While drones had previously been used in warzones, this was the first time the technology had been used to hunt down and kill a specific individual in a country in which the US was not at war – ‘beyond the battlefield’, as it has become euphemistically known. Since then, numerous US targeted killings have taken place in Yemen, Pakistan and Somalia, while other states that have acquired the technology – including the UK – have also carried out such strikes.
At first, the notion of remotely targeting and killing suspects outside of the battlefield and without due process was shocking to legal experts, politicians and the press. In an armed conflict, where international humanitarian law (the Laws of War) applies, such strikes can be lawful. However, outside of the battlefield, where the killing of suspects is only accepted in order to prevent imminent loss of life, such killings are almost certainly unlawful. Indeed, in early reporting on the first such attack 20 years ago, journalists noted that the US State Department had condemned the targeted killing of suspects by Israel (see article below).
New York Times, 6 November 2002.
However, the US argued – and continues to argue today – that its targeted killings are lawful. It has put forward a number of arguments over the years which are seriously questioned by other states and international law experts. These include the notion that whenever and wherever the US undertakes military action international humanitarian law applies; that because the states where the US carries out such strikes are ‘unable or unwilling’ to apprehend suspects, its lethal actions are lawful; and that there should be greater ‘flexibility’ in interpreting the notion of ‘imminence’ in relation to last resort.
Here is a small sample of drone targeted killing operations undertaken by the US and others.
November 3, 2002, US drone strike on a vehicle in Marib province, Yemen.
Target: Qa’id Salim Sinan al-Harithi
The first drone targeted killing saw a CIA Predator drone operating out of Djibouti launch two missiles at a vehicle travelling through the desert in Marib province, Yemen. The drone’s target was ostensibly al-Qaeda leader Qa’id Salim Sinan al-Harithi, said by the US to be behind the lethal attack on the USS Cole two years previously. However, also in the vehicle were US citizen Kemal Darwish and four other men, all believed to be members of al-Qaeda. As Chris Woods wrote in 2012, “The way had been cleared for the killings months earlier, when President Bush lifted a 25-year ban on US assassinations just after 9/11. [Bush] wrote that ‘George Tenet proposed that I grant broader authority for covert actions, including permission for the CIA to kill or capture al Qaeda operatives without asking for my sign-off each time. I decided to grant the request.’”
Online webinar: Pandora’s box: 20 years of drone targeted killing
Drone Wars has invited a number of experts to mark 20 years of drone targeted killings by offering some reflections on the human, legal and political cost of the practice and to discuss how we can press the international community to ensure that drone operators abide by international law in this area.
Agnes Callamard, Secretary General, Amnesty International. Former UN Special Rapporteur on Extrajudicial Executions (2016–2021)
Chris Woods, Founder of Airwars, author of ‘Sudden Justice: America’s Secret Drone Wars’
Bonyan Jamal, Yemen-based lawyer and Legal Support Director with Mwatana for Human Rights, Yemen
Kamaran Osman, Human Rights Observer for Community Peacemaker Teams in Iraqi Kurdistan