Dr Janet Bastiman, the Chief Data Scientist at Napier, explores how trust works, why we rightly find it hard to place our trust in AI, and how system developers can help create and improve our relationship with artificial intelligence.
Artificial Intelligence (AI) is no longer a thing of the distant future. It’s something that permeates every aspect of our daily existence, a feature that comes as standard in everything from our cars to our mobile phones.
It’s not just in our personal lives that it’s become commonplace. Whether it’s financial organisations monitoring transactions, manufacturers predicting disruptions to the supply chain, or customer services triaging incoming requests, a variety of industries are leveraging AI-driven technology to streamline processes and improve operational efficiencies.
The pervasiveness of AI raises an important question, however, of how far it can be trusted.
Trust, or a lack thereof, explains why businesses in some sectors have been reluctant to embrace the technology beyond its basic capabilities.
So, how can we build confidence in AI, in order to unlock its full potential?
A question of trust
Whether or not we find it easy to trust depends on experience and on the context of the situation.
When considering why we find it hard to trust AI, part of the problem stems from how the field has historically measured and communicated the success of AI systems. For a long time, trustworthiness has been measured quantitatively, by comparing predicted outcomes against real ones.
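As a minimal illustration of that quantitative framing (the labels below are invented, not taken from any real system), a headline accuracy figure is simply the proportion of predictions that match known outcomes:

```python
# Minimal illustration of the traditional, purely quantitative framing:
# trust is reduced to a single score comparing predictions with known outcomes.
# The labels below are invented for illustration.

actual    = [1, 0, 1, 1, 0, 1, 0, 0, 1, 1]   # ground-truth outcomes
predicted = [1, 0, 1, 0, 0, 1, 0, 1, 1, 1]   # what the model said

accuracy = sum(a == p for a, p in zip(actual, predicted)) / len(actual)
print(f"Accuracy: {accuracy:.0%}")   # prints "Accuracy: 80%"
# A single headline number like this says nothing about why any
# individual prediction was made.
```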
A focus on performance is not necessarily problematic, but the sentiment that opaque systems are more effective (articulated in the call for proposals for a DARPA study into explainable AI, which began in 2016) has fed into the now commonly perpetuated myth that AI cannot be both effective and explainable.
Put simply, the approach for decades has been that if you test your system thoroughly enough, there is no need to also explain how it works. This neglect of explainability has produced technologies that traditionally prioritised performance and quantifiable metrics proving that AI could deliver accurate results, rather than explaining how it arrived at them.
On the surface, there’s merit to this approach. After all, we regularly use cars, trains, and planes without understanding in detail how they work, instead choosing to trust the people and processes that have been put into place to ensure they run safely.
So why can't we do the same for AI?
Reliance and trust aren’t the same thing
So-called “opaque systems” have their time and place, as ultimately not every decision is complex or high-stakes enough to warrant an explanation. But, where high-risk industries such as healthcare or banking are concerned, outputs in the form of cold, hard data are not enough on their own to support decision making, no matter how accurate.
Think of it this way: your perception of whether 95% is a high score depends entirely on context.
When applied to the chance of rain in a weather forecast, for example, the impact of the 5% being wrong is negligible - at worst, you carry an umbrella on a day that turns out to be dry, which is little more than a minor annoyance.
But if that AI is being used to identify and stop financial crime, that same 5% could amount to billions of pounds in losses - a catastrophic failure on the part of those tasked with stopping it.
This raises the question: how high would the accuracy need to be before you trusted AI with a decision whose stakes were a little higher than the weather? Indeed, is there any value that’s high enough when it comes to critical decisions in high-risk situations?
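To make that contrast concrete, here is a back-of-the-envelope sketch in Python; the transaction volumes and values are purely illustrative assumptions, not figures from any real institution:

```python
# Illustrative back-of-the-envelope sketch: what a 5% miss rate can mean in context.
# All volumes and values below are assumptions for illustration, not real figures.

detection_rate = 0.95                      # the "95% accurate" system
illicit_transactions_per_year = 10_000     # assumed number of illicit transactions
average_value_per_transaction = 250_000    # assumed average value in pounds

missed = illicit_transactions_per_year * (1 - detection_rate)
missed_value = missed * average_value_per_transaction

print(f"Illicit transactions missed per year: {missed:,.0f}")
print(f"Estimated value slipping through: £{missed_value:,.0f}")
# With these assumptions: 500 missed transactions, roughly £125,000,000 undetected.
# The same 5% that is a minor annoyance in a weather forecast becomes
# a material failure when the stakes are financial crime.
```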
The importance of explainability
If we want to start increasing people’s willingness to trust AI, there needs to be a change in approach: we must move beyond outputs that are just datasets and algorithmic information that only experts can interpret.
Instead, we need to ensure that AI’s end users can understand, and are comfortable with, the information being given to them.
Nobody is comfortable blindly accepting decisions. We either devolve the decision to someone we trust to be better informed than ourselves, or we seek to understand and rationalise the decision before agreeing to it.
That’s why explainability is key.
To return to the example of financial crime: for years, institutions have been reluctant to embrace AI-powered technologies to monitor transactions. Without an understanding of how the AI arrived at its decision to flag (or not flag) a transaction as suspicious, analysts can find it difficult to action the resulting insights.
Explainability means that analysts can understand exactly why a transaction has been flagged by the AI as suspicious. That builds trust in the system, so that high-risk decisions such as suspending an account, blocking a transaction, or filing a suspicious transaction report with the authorities can be made more quickly, supported by AI-derived insights.
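As a hypothetical sketch of what that might look like in practice (the feature names, weights and structure below are invented for illustration and are not drawn from any particular product), an explainable alert can carry the factors behind its risk score rather than just the score itself:

```python
# Hypothetical sketch of an explainable alert: instead of a bare risk score,
# each flagged transaction carries the factors that contributed to the score.
# Feature names and contribution weights are invented for illustration.

from dataclasses import dataclass

@dataclass
class ReasonCode:
    feature: str          # the signal the model used
    contribution: float   # how much it pushed the risk score up (or down)

@dataclass
class Alert:
    transaction_id: str
    risk_score: float
    reasons: list[ReasonCode]

    def explain(self) -> str:
        lines = [f"Transaction {self.transaction_id} flagged (risk score {self.risk_score:.2f}):"]
        for r in sorted(self.reasons, key=lambda r: -abs(r.contribution)):
            lines.append(f"  {r.feature}: {r.contribution:+.2f}")
        return "\n".join(lines)

alert = Alert(
    transaction_id="TX-000123",
    risk_score=0.91,
    reasons=[
        ReasonCode("amount far above customer's historical average", +0.38),
        ReasonCode("counterparty in a high-risk jurisdiction", +0.29),
        ReasonCode("rapid movement of funds after deposit", +0.24),
    ],
)
print(alert.explain())
```

Presented this way, an analyst can judge whether the reasons stack up before taking a high-stakes action, rather than acting on a bare number.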
Changing our approach to AI
We devolve trust all the time. Every time we get in a vehicle that we’re not driving, or an aircraft that’s being controlled by an autopilot - even when we turn on the TV to watch the news - we relinquish control of a situation or the flow of information to someone or something else.
We’re able to do this because we understand the context, we know that we can ask for clarification if we want to, and we can hold the people and processes responsible accountable if anything goes wrong.
The same needs to happen with AI if we want to unlock its full potential to benefit more aspects of our lives. By prioritising accessibility and understanding for the end user, we could end distrust and reduce the reluctance that some sectors, regulators, and even lawmakers have when it comes to decisions reached using AI and automation.
About the author
Dr Janet Bastiman is the Chief Data Scientist at Napier, a specialist provider of anti-financial crime compliance technologies for sectors including banking, asset management, payments, FX, crypto, and more. Her full thoughts on explainable AI were recently published in Volume 1, Number 4 of the Journal of AI Robotics and Workplace Automation.