Professor Stephen Hawking is quoted as telling the BBC: ‘The development of full artificial intelligence could spell the end of the human race.’ Elon Musk, co-founder of PayPal and CEO of Tesla, said recently during an interview at MIT: ‘if I had to guess at what our biggest existential threat is, it’s probably [artificial intelligence]’.
He went on to make a memorable analogy: ‘With artificial intelligence we are summoning the demon. In all those stories where there’s the guy with the pentagram and the holy water, it’s like - yeah, he’s sure he can control the demon. Doesn’t work out.’
Are Hawking and Musk right? I will come to that. First I would like to talk about the ubiquity of artificial intelligence. Applications of AI have been around since the 1980s but, until fairly recently, they were mainly in specialised areas such as medical diagnosis, automatic theorem proving, chess playing and image analysis, which had little direct impact on the general public.
Suddenly AI is all around us. By this I do not mean that the person sitting next to you is likely to be an intelligent robot but that the techniques originally developed in AI labs are now in daily use, even by people who have never heard the term AI or who would say that they don’t believe in AI.
Everyday AI
Let’s think about things we use every day. I first came across spell checkers and email spam filters as AI research projects reported in AI conferences. Now they are just routine applications.
When I visit my local supermarket there is a camera that reads my number plate automatically, checks whether I have bought a parking permit and, if I have not, looks up my details and sends me a letter requesting that I pay a fine. Automatic character recognition is a long-standing AI research area.
Some other traditional AI research areas include robotics, medical monitoring systems, voice recognition, conversational agents and making decisions with partial information using heuristics. Powerful search algorithms and machine learning techniques are fundamental components of many systems.
Industrial robots have been in common use for a long time. Domestic robots are appearing that have more and more human-like movements. If I have the pleasure of contacting certain utility companies, I listen to an automatically generated voice and have to speak my questions clearly so that an automatic voice recognition system can work out what I want (which is usually to talk to a human being).
My tablet uses predictive typing, which is often really useful, though it can sometimes lead to unexpected results. My SATNAV uses a search algorithm to find the best route between any two places (I sketch the idea below). Recommender systems (which come from the AI field known as case-based reasoning) seem to be everywhere: ‘We see that you looked at a book on X; you might be interested in all these others.’ Conversational agents such as Siri are now widely used on smartphones.
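Since I have mentioned route finding, a small illustration may help. The sketch below is entirely a toy example (the road network and distances are invented): it shows Dijkstra's algorithm, the classic shortest-path search that sits at the heart of route planning. Production SATNAVs use heavily optimised variants such as A* and contraction hierarchies, but the essential idea is the same.

```python
import heapq

# Toy sketch of the kind of search a SATNAV runs: Dijkstra's algorithm
# finding the cheapest route through a small, invented road graph.

def shortest_route(graph, start, goal):
    """Return (distance, route) for the shortest path from start to goal."""
    queue = [(0, start, [start])]   # (distance so far, node, route taken)
    visited = set()
    while queue:
        dist, node, route = heapq.heappop(queue)
        if node == goal:
            return dist, route
        if node in visited:
            continue
        visited.add(node)
        for neighbour, miles in graph.get(node, {}).items():
            if neighbour not in visited:
                heapq.heappush(queue, (dist + miles, neighbour, route + [neighbour]))
    return None  # no route exists

roads = {                       # invented distances in miles
    'Home': {'A-road': 5, 'Motorway': 8},
    'A-road': {'Town': 7},
    'Motorway': {'Town': 3},
}
print(shortest_route(roads, 'Home', 'Town'))  # (11, ['Home', 'Motorway', 'Town'])
```

The search always expands the least-costly frontier first, so the longer-looking motorway leg correctly wins over the shorter-looking A-road.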
The long-standing research field of machine learning has morphed into ‘big data’, something that is increasingly hard to avoid in a world of smartphones and social media. If I use a search engine to check on (say) flights to Spain it is a good bet that for the next few weeks I will be bombarded with advertisements for Spanish holidays and paella recipes.
Unfortunately the same technology that supermarkets can use to analyse my spending and send me discount vouchers can be used by repressive regimes to identify potential terrorists (or even just political opponents) and sometimes lock up completely innocent people.
Although I would describe AI as rapidly becoming ubiquitous I am aware that many readers will probably not associate any of these applications with AI. The problem is that AI scientists have long been their own worst enemies.
Rather than celebrating their successes they often prefer to focus on the problems that have not yet been solved. So AI can easily become the study of intractable problems, with the ‘solved’ ones reclassified as standard computing.
The future
Where is AI going? There are news stories about AI developments every week - some good, some bad. It is easy to predict that, in time, AI-based assisted living systems will become commonplace in every house and possibly every workplace.
One application I would have labelled as science fiction not long ago is the driverless car. The amount of reasoning required to build these without serious risk of major accidents is huge, but they can now legally be put onto roads in this country. In ten years’ time will we all be passengers in driverless cars or will they have gone the way of the airship? I would predict the latter.
One company is reported to be offering low-cost DNA tests that can predict which illnesses you are likely to contract. Once that is available, how long will it be until you cannot get health insurance without ‘voluntarily’ using it?
Amazon has seriously considered delivering goods to your house using drones, apparently without considering that someone who dislikes you could use exactly the same technology to deliver a bomb. Already modern warfare seems increasingly like playing a computer game, remotely controlled from a command centre. How long will it be before the drones become fully autonomous using AI?
Are we heading for a golden age in which AI-based advances in medicine and biology will have eliminated major diseases, AI-assisted agriculture will have finally solved the world’s food problems and, with help from AI, scientists will have reversed the effects of climate change? One where almost all jobs will have been automated, leaving humans with little to distract them from enjoying their leisure and organising their household robots, and where systems for predicting crimes before they are committed will be so reliable that crime will have virtually ceased?
Or will it be the nightmare world of Orwell’s 1984, with surveillance systems of all kinds feeding into big data applications that detect any behaviour considered deviant? Or perhaps, before either of these happens, AI-based automated trading systems will have crashed the world’s financial markets, or automated ‘smart’ systems for controlling missiles will have started a nuclear war by accident. Any of these seems possible.
Even without AI there are considerable risks involved in the growing reliance on increasingly complex software systems. We tend to adapt our behaviour to what is convenient for our machines and thus we start to lose the skills to manage without them.
Imagine the consequences of a shutdown of the internet, due to either a software bug or sabotage. A one-day outage would be serious. How about a month? A year? In his interview with the BBC, Hawking quoted the director of GCHQ as warning that the internet could become ‘the command centre for terrorists’. If that were already in progress, how would we even know?
Adding complex AI systems into the mix raises the problems to another level. AI systems specialise in tasks for which there are no hard-and-fast solutions, or no known deterministic methods, so we have to rely on heuristics. More and more of them use machine learning techniques, so they are both inscrutable and (being heuristic) certain to give the wrong answer in some cases.
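To see why a heuristic is certain to fail somewhere, a deliberately simple example helps (my own illustration, not drawn from any real system): the greedy strategy for making change, which always takes the largest coin first. It is fast and usually right, but for some coin systems it is provably wrong - and an exhaustive method shows by how much.

```python
# A heuristic that is usually right, but wrong for some inputs:
# greedy coin change picks the largest coin first.

def greedy_change(coins, amount):
    """Heuristic: repeatedly take the largest coin that still fits."""
    used = []
    for coin in sorted(coins, reverse=True):
        while amount >= coin:
            amount -= coin
            used.append(coin)
    return used if amount == 0 else None

def optimal_change(coins, amount):
    """Exhaustive dynamic programming: always finds the fewest coins."""
    best = [0] + [None] * amount
    for value in range(1, amount + 1):
        options = [best[value - c] for c in coins
                   if value >= c and best[value - c] is not None]
        best[value] = min(options) + 1 if options else None
    return best[amount]

coins = [1, 3, 4]
print(greedy_change(coins, 6))   # [4, 1, 1] -> three coins: plausible, but wrong
print(optimal_change(coins, 6))  # 2 (i.e. 3 + 3): the true optimum
```

The greedy answer looks entirely reasonable, which is exactly the danger: a heuristic’s failures are not flagged as failures.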
AI systems that adapt to the data they receive can change their behaviour from one moment to the next, making it almost impossible to reproduce failures. Getting discount vouchers I don’t want from my supermarket doesn’t matter very much, but if someone is refused credit because of a poorly tuned heuristic algorithm it can ruin their life. Even more so if someone is wrongly identified as a potential terrorist.
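A toy example (again invented, with made-up numbers) shows how little it takes. A perceptron, one of the oldest machine learning algorithms, adjusts its weights after every example it sees, so the answer to an identical query can flip from one day to the next depending on what data happened to arrive in between.

```python
# Sketch of why adaptive systems are hard to debug: an online perceptron
# updates its weights on every example, so the same query can receive
# different answers before and after a single new data point.

def predict(weights, features):
    score = sum(w * x for w, x in zip(weights, features))
    return 1 if score >= 0 else 0

def update(weights, features, label, rate=0.5):
    """Standard perceptron rule: nudge weights towards the correct label."""
    error = label - predict(weights, features)
    return [w + rate * error * x for w, x in zip(weights, features)]

weights = [0.2, -0.1]            # invented starting weights
query = [1.0, 1.0]               # the same applicant, asked about twice

print(predict(weights, query))                   # answer today: 1 (approve)
weights = update(weights, [1.0, 0.9], label=0)   # one new example arrives
print(predict(weights, query))                   # same query tomorrow: 0 (refuse)
```

Nothing about the query changed; only the system did. Reproducing yesterday’s decision would require replaying exactly the data stream it has seen since - which in practice nobody keeps.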
We have barely got to grips with issues of accountability for conventional software (e.g. teenage children being driven to suicide by postings on Facebook). Who will be accountable for an autonomous armed drone in a warzone that classifies a group of people on the ground as ‘armed enemies’ rather than ‘innocent civilians’? The researcher who published a machine learning algorithm in a journal 30 years earlier? How to build any kind of ethical sense into AI systems that are essentially inscrutable and uncontrollable is a big topic about which we currently know very little.
Coming back to Elon Musk and Stephen Hawking - Musk is surely right to suggest that AI specialists will not be able to control this technology, any more than we can close down the internet or the electricity supply. Hawking’s warnings may seem extreme at the moment, but I believe he is right to raise concerns while there may still be some chance of acting on them.
I can make one prediction with absolute confidence. AI is going to have a major impact on the world of the future. Whether for good or ill is what we need to try to influence. Our descendants are going to live in interesting times.