Should the development of advanced Artificial Intelligence be paused? Or are there other ways its risks can be managed? This was the focus of a recent BCS Policy Jam, as Claire Penketh reports.
In March, the Future of Life Institute, a non-profit campaign group, published an open letter calling for a six-month pause in the development of advanced artificial intelligence systems to consider the implications of this technology. The letter stated such AI systems posed 'profound risks to society and humanity'.
The rapid development, introduction and adoption of systems such as ChatGPT has alarmed some AI researchers and ethicists, who are concerned about the impact on, for instance, employment, public debate and humanity's ability to keep up.
The Future of Life Institute letter attracted thousands of signatures, including senior tech figures like Elon Musk, Apple co-founder Steve Wozniak, and leading academics.
To pause, or not to pause?
However, BCS has taken a more nuanced approach. In response to the Future of Life Institute stance, the BCS Fellows Technical Advisory Group (FTAG) produced a paper and an open letter to the government and industry.
BCS believes AI is not an existential threat to humanity, but will instead be a transformative force for good, so long as the right decisions are taken about its development and use. BCS also thinks an international break in AI development would not work and could play into the hands of rogue regimes and organisations.
The panel on the BCS Policy Jam ‘Helping AI grow up responsible without pressing pause’ included FTAG Chair Adam Leon Smith, who said: "We disagreed with the calls for a pause for several reasons, the main one being that we didn't believe anyone would stop. We're also sceptical that we will see significant progress in the underlying algorithms of large language models [such as ChatGPT] at the moment.
"We also took the view that we support much of the government's recent white paper on how to regulate AI."
Panellist Hadrien Pouget, visiting research analyst in AI Policy at The Carnegie Endowment for International Peace, said he understood the concerns around AI, and added he was 'slightly in defence of the [Future of Life Institute] letter, if not in defence of the pause itself.'
BCS CEO Rashik Parmar said: "There was a level of sensationalism in the messaging. But the sentiment was - do we fully understand AI's ramifications, implications and unintended consequences?
"What we were trying to do with the BCS paper is provide practical ways of understanding what's happening here, which is much more helpful, important and valuable."
What is the role of regulation?
While giving evidence to the US Senate, the CEO of the company behind ChatGPT, Sam Altman, and other witnesses agreed there's a crucial role for government intervention in the development of AI. But how can governments globally strike a balance between innovation and regulation?
Adam said: "Some say the UK government needs to go further and faster, as we're not pursuing a similar approach to, for example, Europe. The UK is saying we will use the laws and the regulations we have now and augment that with a central horizon scanning regulator looking for gaps in what the other regulators are doing and accelerate an AI sandbox."
An audience member questioned whether regulation could rein in big tech companies - and whether smaller companies, which lacked the resources to comply with the law, would lose out. Hadrien said: "I'm always a bit cynical when I see these companies calling for regulation. We saw it with Facebook, where they called for regulation. But then every proposal they see, they're like, no, not that one, not that regulation. And so you end up with nothing.
"However, if there are serious harms, and it seems there could be, you'll probably need to have some regulation. I appreciate it's a very delicate balance."
Cari Miller, founder of the US-based Center for Inclusive Change, said people should know when AI is being used: "For me, the very minimum regulations should require that whenever a human being is interacting with AI, they should know. I should never encounter AI and not know that I'm using it or where it's being used on me, whether in a medical setting, driving on the road, or in an office while I'm working. Or when my kid is at school - wherever it is, I want to know. Then, of course, I'll have 700 other questions about it, but at least I'll know it's being used on me or about me."
Keeping up standards
Globally, there is already a move towards setting key technical standards, said Adam: "All the AI regulations are starting to gravitate towards internationally agreed technical standards. The EU and the UK are looking at the same technical standards. If you want to mitigate bias in your system, you can go to Europe, the UK and the US, and you will see the same best practices."
Steve Randall, Expert Assessor at the United Kingdom Accreditation Service, said his organisation was often involved with the UK government in free trade deals. He said many countries are already part of existing International Organization for Standardization (ISO) agreements that cover AI, such as ISO/IEC 42001.
Steve said: "They are all recognised standards that you can base schemes or voluntary certification on and can underpin procurement contracts. Companies then have to comply with those standards."
Adam said it makes good business sense: "There are also market-led audit schemes being developed for companies that want to achieve brand leadership by following those best practices."
However, Hadrien pointed out that developing best practices might take time as these technologies are relatively new, and there might be trial and error before the large-scale harms are known: "How do you judge if they're good enough? Maybe there should be extra caution that goes beyond the application-specific focus."
Doing the right thing
The role of ethics was also discussed, with Adam saying he liked the idea of ethics with a little 'e': "They should be part of everyone's job competencies. They should be signed up to a code of ethics - we should have a tech workforce that signs up to that."
Hadrien said it shouldn't be only up to a company's ethics team to decide what is right or wrong: "There needs to be expertise around ethics and how it is applied. You have to be careful that you don't end up abdicating the responsibility of ethics down to this group so that the attitude is 'we do what we want, the ethics police will handle it'."
Cari said that, at its heart, ethics is about human rights: "I'm growing a little irritated with this debate over whether we should have ethics. For me, the debate is really about whether we should protect civil rights, civil liberties and privacy.
"It gets really muddy. Sometimes those things are clear, but sometimes when we are heads down doing our work, we forget that this is a civil right or a privacy issue.
"If we just label it the 'ethics committee', we're using that label. And then we're debating that label when really we're talking about just giving people their human rights."
Professional accountability
To do the right thing, it takes the right people - a thread that ran throughout the discussion. Rashik said tech professionals must be competent, ethical, accountable, and inclusive: "They must work and live within the right ethical foundation and framework, ensuring they can make technology good for society. At the same time, they need to be held accountable for what they do, which is important. That's what we need, and we still need to think through how we ensure the accountability of our professionals.
"Fundamentally, tech professionals must ensure that the solutions they build are genuinely inclusive, so they are fit for purpose for everyone.
"You see many areas where bias is amplified, along with other behaviours. That's why from a BCS perspective, we are championing responsible computing as a framework for all those aspects, across not just AI but the whole of tech."
Cari added that it is vital to build this kind of inclusivity in from the very earliest stages. She also said it was important to educate those designing these processes about inclusivity and diversity, as these ideas would not be familiar to everyone.
Are the robots going to take over?
An audience member asked about the hype and fear around AI currently surging across mainstream and social media. How can trust be built when AI is portrayed as a danger to humanity?
In response, Cari said: "That question is worded so interestingly to me because I would have said the opposite is true - the mainstream public blindly trusts the internet more than they trust the media saying the internet is broken."
"But there are people in the media saying 'it's great'. Some optimists don't want to say there are bad parts. It drives me crazy. But I'm an ethicist, so I see the dumpster fire."
Rashik and Adam agreed that the sci-fi portrayal of AI had much to answer for. Rashik said: "When you think about science fiction writers, many of our fears and worries come from what we've seen in movies or read in books. So you think this is the building of HAL [the fictional artificial intelligence and main antagonist in Arthur C. Clarke's Space Odyssey series] or whatever your favourite AI fear-related story is, and those fears are amplified. When you watch a movie, the quality of the video imagery makes it look authentic.
"So when people start seeing some AI in real life, they're almost blurring the reality of what's in the movies."
Emergency break glass: the human override
An audience member asked whether there should be a human override in any AI situation that went wrong. Adam cited an example where a rogue algorithm caused harm: "We only have to look to the Netherlands to see what happened when a change in an algorithm used in the childcare benefits system led to over a thousand children being moved into foster care. That brought down the government over there.
"Let's stop focusing on what AI might do when they invent the next thing and look at all the harm it can cause right now if used without appropriate controls."
Rashik said it was important to remember that, no matter what AI is used for, there is still a person involved: "In the end, this is the one who switches off the computer. We don't want to abdicate human activity to AI; the digitalisation technology is there to augment what humans do, not replace what humans do. So long as we do that, we're in good shape."
Hadrien worried that people might get too comfortable with AI: "As people interact with AI, the systems will get better. And people will get used to it, even if they're sceptical now.
"You'll have more and more systems doing entire tasks for you. If the trust builds too quickly, there may be less oversight."
Cari said it was important that managers, politicians and society understand AI, what it is capable of and how to manage it: "Executive board members and hiring managers need to understand this stuff so that they can put controls in place.
"Society and the decision makers need to understand it so that everybody knows their part and responsibilities of understanding these technologies. There is a great bias in trusting whatever spits out of the technology. So we need to develop programs that help people understand these technologies' flaws and beauty."
The BCS Policy Jam is a monthly online event, open to all, where BCS members and non-members are invited to join the conversation with our handpicked tech experts, discussing topical events. By visiting the BCS Policy Jam page you can also watch recordings of previous sessions.