Hannah Green, Lead Data Scientist at BAE Systems Digital Intelligence, looks at how the defence sector might lay the foundations for the safe and effective use of artificial intelligence at scale.

When you think about the use of AI technology in defence, what’s the first image that springs to mind? Is it robots charging across a battlefield? Autonomous aircraft identifying and attacking key targets? An intelligent JARVIS-type system running operations from military headquarters?

Although these applications are technically possible, they currently sit in the world of science fiction: the defence industry’s experiments with and applications of AI are still at an early stage. While many people think AI is changing the world right now (which it is, to a certain extent, in some commercial contexts), the truth is that it won’t be allowed to dramatically impact the world of defence for many years.

As one of my colleagues recently wrote, it’s widely accepted that AI has huge potential in defence environments. However, operationalising it — i.e. making it an operational reality through capabilities that are used and trusted by defence users — comes with several significant challenges and is still some way off.

Understanding AI requirements

One of the cross-sector challenges surrounding AI adoption relates to mindset. People tend to get very excited about using AI technology, but often don’t fully understand what that means in practice. They decide that the latest AI tool is the answer without considering whether it’s appropriate for the business challenges they are looking to solve.

In order for AI to be used in a way that adds value, it must fit into an organisation’s wider operations, covering people and process as well as legacy technology. Whatever the industry, businesses need to have the data and infrastructure in place for AI to work effectively.

Consider electric cars. While the cars themselves exist, the infrastructure to run them is taking time to catch up. In some cases, the infrastructure is actually more time-consuming and technically challenging to establish than the core technology, thereby hindering the broader adoption of electric cars at a national level. Large-scale adoption won’t be achieved until the infrastructure is in place to support it.

AI in defence is no different. Before acquiring the latest AI system, decision-makers must take the time to consider whether it can be supported within the existing technical infrastructure. Other key considerations should include where they are trying to fit AI into an existing process and what they want to get out of the technology. Is it to drive efficiency improvements? Automate repeatable actions to free up human effort? Provide a richer and more accurate picture of operational data?

Whatever the reason, this foundational step of defining objectives and assessing readiness is crucial on the journey to wider AI adoption. However, once the requirement for AI has been established, we come to an even more complicated issue: understanding how it can be implemented safely. This is primarily a risk and policy concern rather than a technology one.

Ensuring ‘explainability’

In order for solutions to be signed off as acceptable for use in defence, they must be heavily tested. More specifically, a new technology must deliver repeatable results before it can be deployed. This is particularly challenging in the defence sector, which involves very rare scenarios, potentially with limited existing evidence or data as to how people or systems will react.
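
To make that concrete, the sketch below shows one shape a repeatability check might take. Everything here is hypothetical: classify_track stands in for any candidate AI component, and the test simply asserts that repeated runs on the same input produce the same decision.

```python
import numpy as np

# Hypothetical stand-in for a candidate AI component. Pinning the random
# seed up front makes its behaviour deterministic and therefore testable.
def classify_track(features: np.ndarray, seed: int = 42) -> int:
    rng = np.random.default_rng(seed)           # fixed randomness
    weights = rng.normal(size=features.shape)   # placeholder 'model'
    return int(features @ weights > 0)          # toy binary decision

def check_repeatable(features: np.ndarray, runs: int = 1000) -> bool:
    """Return True only if every run yields the identical decision."""
    baseline = classify_track(features)
    return all(classify_track(features) == baseline for _ in range(runs))

sample = np.array([0.2, -1.3, 0.7])
print('repeatable:', check_repeatable(sample))  # expect: True
```

A real assurance regime would go much further (varied inputs, stress conditions, adversarial cases), but the principle is the same: no repeatability, no sign-off.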

Throughout the defence sector, there is still an overwhelming need to be able to predict how a system will respond and then explain why the AI model has acted in a certain way – something that is unlikely to change in the short or medium-term.

However, this need for explainability presents obvious challenges around the large-scale deployment of AI systems that are designed to autonomously make decisions in operational settings. In the context of defence, we’re talking about hugely complex environments where decisions must be made rapidly based on vast amounts of data — often coming from multiple different sources and systems — and constantly evolving risk factors. The more complex the AI, the less explainable it becomes.
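
As a toy illustration of that trade-off, the sketch below assumes scikit-learn and the classic iris dataset (neither appears in the original article). It trains a small decision tree, a model simple enough that its entire decision logic can be printed as human-readable rules; a deep neural network offers no equivalent printout.

```python
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

# A deliberately small, inherently interpretable model.
data = load_iris()
tree = DecisionTreeClassifier(max_depth=3, random_state=0)
tree.fit(data.data, data.target)

# export_text prints the full rule set: for any prediction we can point to
# the exact feature thresholds that produced it, i.e. the 'why' behind it.
print(export_text(tree, feature_names=list(data.feature_names)))
```

The price of that transparency is capability: constraining a model until it is fully inspectable usually means giving up much of the power that made the complex model attractive in the first place.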

And if you can’t explain the AI or guarantee that it is dependable, end users are unlikely to trust it. After all, it is unreasonable to expect the average soldier, sailor or officer to understand the inner workings of AI systems, making it all the more important that the system can explain the steps and decisions behind its output.

Linked to this is the trust issue. Consider a warship captain who has to make critical operational decisions based on the outputs from an AI system. That situation — potentially a matter of life or death — is very different from the use of AI in the civilian world. Giving up control evokes a more visceral reaction, requiring a huge amount of trust that the technology will work as expected and can be relied upon.

Typically, defence would address this issue through ‘man in the loop’ mitigation, which involves maintaining human oversight of how the system is working and communicating with other systems. While this can help ensure trust, the additional time and personnel requirements remove a huge amount of the advantage that the AI would otherwise deliver. This could mean the difference between staying ahead of an adversary and falling behind.
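
One common pattern for this kind of oversight is confidence-based gating: the system acts on its own only when its confidence clears a threshold, and refers everything else to a human operator. The sketch below is purely illustrative; the threshold value, the Recommendation type and the example actions are assumptions, not details of any real defence system.

```python
from dataclasses import dataclass

CONFIDENCE_THRESHOLD = 0.95  # assumed policy value, set by the risk owner

@dataclass
class Recommendation:
    action: str
    confidence: float  # model's self-reported confidence, in [0, 1]

def dispatch(rec: Recommendation) -> str:
    """Route a recommendation to automation or to a human operator."""
    if rec.confidence >= CONFIDENCE_THRESHOLD:
        return f"AUTO: executing '{rec.action}'"
    # Below threshold: hold the action and queue it for human approval.
    return f"HOLD: '{rec.action}' referred to an operator"

print(dispatch(Recommendation('flag contact for monitoring', 0.99)))
print(dispatch(Recommendation('reclassify contact as hostile', 0.62)))
```

The cost described above is visible even in this toy: every HOLD consumes operator time and attention, trading speed for assurance.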

One step at a time

Ultimately, there’s still a long way to go before the use of AI in defence is widely allowed. Although the industry is already experimenting with the technology in pockets, there’s a big difference between running AI in a managed way on the back-end and allowing AI to run in an operational environment on the fly.

Bridging this gap will be no mean feat. There are technical, policy and risk hurdles that must be overcome in order to facilitate the future adoption and deployment of AI in operational defence settings. Similarly, there’s a lot of relatively mundane enablement work that must be completed before defence can unlock the transformational impact of AI.

This initial work is vital. If the industry rushes through it without due care and attention, there could be disastrous and expensive consequences further down the line. But, by taking things step by step, we can start building the right foundations to support the future of AI-driven defence.