As part of the UK government’s Spring Statement, it was announced that at least 10% of the increased defence budget will be directed towards cutting-edge technologies. These include dual-use technologies—those that serve both civilian and military purposes—and AI-enabled capabilities. Uncrewed and autonomous systems, including drones, are also high on the agenda.
Professor Bernd Stahl, a BCS Fellow and member of the BCS Ethics Committee, shares his insights. A professor at the University of Nottingham, Stahl has researched the philosophical and ethical implications of technology. He also served in the German army (1987–97), including as an artillery reconnaissance platoon commander, a role which involved analysing aerial footage taken by drones.
Claire Penketh, BCS’ Senior Policy and Public Affairs Manager, asked Professor Stahl for his reaction to the government’s plans for defence.
“My initial thought was that there’s a specific focus on increasing the defence budget, which is something we’re seeing not just in the UK but in other countries too; Germany, for example, is putting an extra €100 billion into defence. This is a trend that makes sense given the war in Ukraine and the fraying relationships within NATO. However, I’m not convinced that committing more money to defence is the best reaction to the current global circumstances.
“AI is now seen as a way to make things easier, faster, and more efficient, and the military is no different. This makes sense in many ways, but it raises significant concerns about the appropriate use of AI, the purposes for which it’s used, and the controls in place. Unfortunately, it seems these questions haven’t yet been thought through in great detail.”
Concerns Around Autonomous Military Systems
Q: What are your concerns about these AI-enabled technologies, especially uncrewed military systems?
A: The key issue is what AI will be used for. If it’s about systems performing tasks without human intervention, like medical evacuation drones, that could be very beneficial and appropriate. But it gets more difficult when we talk about autonomous weapons. The debate about whether machines should make life-or-death decisions has been around for a long time, and there’s general agreement that we need to be very careful in this area. Autonomous weapons that can independently target and kill people are concerning. I don’t believe we’re heading toward fully autonomous wars, but there is definitely a trend toward more autonomy, with less human intervention, in military decisions. This is worrying because we don’t yet know the full consequences of these technologies.
Q: What about the ethical procedures surrounding autonomous weapons?
A: Ethical discussions around AI typically revolve around things like bias, malfunction avoidance, and ensuring that systems do what they’re supposed to. These are valid concerns, but they don’t cover the deeper issue, which is whether we as a society want to create these technologies in the first place. This becomes a political question. It’s not just about the technology itself, but the broader ethical considerations about the kind of society we want to live in and how we want to use technology. This should be a political discussion that leads to clear decisions about the direction we want to take.
Q: Do you think these ethical questions are being seriously considered?
A: There are certainly discussions around limiting autonomous weapons, but they don’t receive as much media attention as they deserve and haven’t been widely adopted or prioritised on an international scale. The issue is not being discussed at the level it should be, given how much focus there is on increasing defence spending.
The Risks of Faster Military Procurement
Q: The spring statement also proposes a faster, more segmented approach to defence procurement, with a focus on rapid commercial exploitation. What place is there for ethics in such a fast-paced system?
A: Speeding up procurement means less time for reflection. The “move fast and break things” mindset caused problems in Silicon Valley, with social media for instance, and now we’re applying it to military technology. We’ve seen the consequences of rushing innovation before, and failing to consider ethics in military AI could have significant repercussions.
Q: Given the increasing speed of global events and growing conflict, is there time for reflection?
A: There’s always been pressure to act quickly, but perhaps instead of speeding up even more, we should intentionally slow down and think carefully. I’m not convinced the current increase in UK defence spending is justified by the war in Ukraine, and I don’t believe that war should be the reason for a significant rise in military spending. We should consider whether speeding up is truly the right response, or whether we need to pause and reassess.
Military Spending and the Economy
Q: The Chancellor, Rachel Reeves, has said that increasing defence spending could help boost the UK economy by being at the heart of the industrial strategy. What’s your take on that?
A: I understand the desire to stimulate growth, but I’m not convinced that military spending is the right path. The military, by nature, consumes resources rather than creating wealth. While it may boost the defence industry in the short term, I’m sceptical about whether it will lead to the kind of sustained economic growth that benefits society, especially its poorer sections. It feels like a bit of magical thinking.
Q: What impact do you think the focus on encouraging start-ups to innovate for the defence sector will have on the broader tech ecosystem?
A: The funding directed at defence technologies will likely steer start-ups in that direction. If there’s money to be made in military applications, start-ups will naturally follow that path, potentially at the expense of other sectors like healthcare or the arts. This could lead to a long-term shift in the tech ecosystem toward military-focused innovations, which may not be the most productive or desirable outcome.
AI and the Future of Warfare
Q: Will the greater use of AI dehumanise conflict?
A: We’re already seeing AI and autonomous technologies on the battlefield, especially in Ukraine. While drones are often controlled by humans, adding more autonomy to these systems could free up manpower and increase firepower. This could make the day-to-day experience of war more intense, with threats coming from more directions. It could also make it easier for politicians to justify going to war, knowing that fewer soldiers would be at risk. On the other hand, the destructive potential of these technologies could raise the threshold for war. It’s a complex issue, and much will depend on the balance of technological capabilities between nations.
The Bigger Picture: Ethics and Policy
Q: You have both a military background and expertise in technology. What advice would you give to governments considering boosting AI-enabled drones in warfare?
A: Firstly, I’d say make sure we don’t create technologies capable of killing autonomously. Humans must always remain in the loop to ensure meaningful intervention. But the bigger question is about the direction we’re heading as a society. If we want humanity to have a future, we need to think beyond warfare and focus on building international institutions that can resolve conflicts peacefully. At the moment, global institutions are under pressure and not functioning well, but that’s where our efforts should go: toward creating durable, reliable systems to help manage conflict.
Q: Do you think AI could play a role in creating these peaceful solutions?
A: Not really. AI can help manage certain aspects of conflict, but the real work must come from humans. It’s about creating frameworks for international cooperation and ensuring that we’re not rushing headlong into conflict. The UN, for example, has made efforts to call for the elimination of nuclear weapons, and that’s the kind of long-term vision we need: one focused on diplomacy and reducing the risk of war.
Q: So, ultimately, you believe the solution lies in human negotiation, not in technological advancements?
A: Yes, exactly. We need to address these challenges through human collaboration, not through technology alone. AI may help in some areas, but it’s the political will and human effort that will determine our future.