There are different ways to understand artificial intelligence (AI). Often when we think of the term, we think of machines that are able to pass themselves off as ‘intelligent’ - possessing, in some way, the hard-to-define abilities of organic, sentient beings to plan, analyse, make decisions and perhaps even dream.
It’s true that we’re some way away from the science fiction vision of robots that can converse with us in a way that is indistinguishable from talking to another person - let alone ponder their own place in the cosmos, as human philosophers do, or consciously strive to become increasingly human, in the manner, say, of the android Data in Star Trek.
In truth, the machines we class as artificially intelligent today mark only the first steps towards that type of machine intelligence. And there’s good reason for that. The tremendous surge in progress and activity we’ve seen in the field of AI over the last decade has been driven by business. And business doesn’t want machines to pontificate on the nature of humanity and conscious thought. It wants them to work!
Changing the world as we know it
The AI that is changing the world today - from the healthcare industry to finance, education, recruitment and the service industry - is what is known as specialised AI. This refers to AI applications designed to perform one task, very efficiently and very rapidly, and to become increasingly good at that task as they generate and consume more and more information.
In essence, it’s a single step forward from the ‘traditional’ computer software we’ve grown accustomed to over the last half-century. Many of us have grown up with the definition of a computer as a device which takes an input, processes it, and supplies an output. Whether the output is ‘right’ or ‘wrong’ is outside the boundaries of a traditional computer’s understanding - like a foot soldier of a despotic regime, all it does is blindly follow orders.
What’s new is essentially the addition of a feedback loop. Without input from us and based purely on the data it has access to, today’s AI ‘learns’ how accurate its results are and how to improve them. This is the basic premise of all machine learning. It’s actually nothing new - the theory has been understood for decades - but it does require enormous amounts of data and processing power to work effectively. What’s changed recently is that, thanks to the internet and cloud computing, those are things we now have in abundance.
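To make that feedback loop concrete, here’s a minimal sketch in plain Python. It’s illustrative only - the toy data and parameter names are my own assumptions, not drawn from any real system. The fixed rule at the top behaves like traditional software, blindly following its orders; the loop beneath it measures its own error against the data and adjusts itself, which is machine learning in miniature.

```python
# Toy data (assumed for illustration): inputs paired with the outputs
# we'd like the machine to produce.
data = [(1.0, 2.1), (2.0, 3.9), (3.0, 6.2), (4.0, 8.1)]

# Traditional software: a hard-coded rule. Right or wrong, it never changes.
def fixed_rule(x):
    return 2.0 * x  # the programmer's guess, frozen forever

# Machine learning: start with an arbitrary rule and let feedback refine it.
weight = 0.0          # the single adjustable parameter of the model y = weight * x
learning_rate = 0.01  # how strongly each error nudges the parameter

for step in range(1000):
    for x, target in data:
        prediction = weight * x
        error = prediction - target          # the feedback: how wrong were we?
        weight -= learning_rate * error * x  # adjust to be less wrong next time

print(f"fixed rule for x=3:   {fixed_rule(3.0)}")   # always 6.0, right or wrong
print(f"learned rule for x=3: {weight * 3.0:.2f}")  # roughly 6.09, tuned to the data
```

Run it and the weight settles at roughly 2.03 - a rule the program worked out for itself from the data, rather than one a programmer dictated. Real systems use vastly bigger models and datasets, but the loop of predict, measure error, adjust is the same basic premise.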
Out of the lab and into the world
Within the space of a few short years, we’ve moved from AI being talked about by futurists and boffins as something set to change the world, to it having real and tangible effects on just about every area of industry, as well as our day-to-day lives. If you use a global system like Visa or American Express to make payments, then your transactions are being analysed by smart, learning machines that are becoming increasingly effective at determining whether your payment is valid or fraudulent.
If you are being treated for a medical condition, then it’s increasingly likely that the treatment you’re receiving was developed with the help of AI analysis of thousands of clinical trials and scientific papers.
The advertising you’re exposed to when you browse the internet, watch videos on YouTube or even open the junk mail that comes through your letterbox is determined by AI analysis of the personal data you’ve left behind in your digital footprint.
The food you eat may well come from crops grown by a farmer with the help of AI that told them how to use their available land efficiently, as well as the most economical way to deploy fertilisers and pesticides to reduce waste and boost yield.
When you take a picture with your smartphone, AI circuitry analyses the lighting conditions and ‘recognises’ prominent features in the sensor data, such as faces or fast-moving objects, to return an image that is more pleasing to the eye.
If you apply for a job with a large corporation, it’s increasingly likely your application will be pre-screened by AI algorithms to determine how good a fit your skills and personality will be for a role, before you set foot through the front door.
When you shop in a supermarket, the products you see on the shelves are determined by yet more algorithms, which are learning to take geography, demographics and meteorology into account when making stocking decisions.
Brave new world
I could very easily go on - in fact, I’ve spoken to hundreds of businesses that are putting this basic concept of self-improving, data-munching software to work in myriad obvious and not-so-obvious ways.
The technological solutions in use might be vastly different, but the fundamental principles at work are generally similar: by feeding machines with data and giving them enough computing power to crunch through it, they can become better and better at making decisions, without any need for input from us.
Of course, it’s far too early to tell where all this is going to end. Doom-mongers love to point out nightmarish scenarios in which machines work out that the biggest hindrance to their ability to work effectively is us, their human overlords. Those whose fears are more rooted in reality warn of the societal damage that could be done by widespread human redundancy, as machines become a cheaper and more efficient option for businesses driven primarily by a desire for profit.
Worries of the former sort may seem far-fetched, but they have been voiced by some undeniably bright people - the likes of Stephen Hawking and Elon Musk. On that basis alone, they probably shouldn’t be discounted (tempting as it is to assume that the mandatory inclusion of an ‘off’ switch would be a simple solution). Worries of the latter sort are harder to dismiss. Given what we know of the behaviour of corporations, it seems entirely possible that concerns about societal well-being could take a back seat to the potential for increased profits and reduced staffing overheads.
There are other, well-founded concerns over what it will mean for so much data about our lives - from the GPS tracking of our day-to-day activity that we voluntarily allow by carrying smartphones, to the detailed genetic blueprints that medical AI will make available - to be ‘out there’.
These are all concerns that society will have to address - probably more quickly than we expected - in the coming years. The fact is, the rise of the machines has taken place more swiftly than anticipated. And while most signs so far point to it being an empowering movement rather than a dehumanising one (more jobs are forecast to be created by AI than lost to it over the next ten years, for example), there’s a lot that’s still unknown and very difficult to predict.
One thing that is certain, though, is that, as with the other technological quantum leaps of the modern era - first electricity, then computers, then the internet - AI is something of a Pandora’s box. Now that it’s been opened, it won’t be closed again - and it’s far from a given that this is a bad thing. AI has already shown us that it can bring tremendous opportunities for positive change. And in the end, the deciding factor won’t be the machines themselves, but rather what we, the people, decide to do with it.