There are some excellent examples of the exploitation of AI today, but the label has also been adopted as a generic description for the latest clever version of an IT product — even when its use is not evident. The term is now widely misused and misunderstood, but it does mark the start of the next significant phase in the development of our digital world. Geoff Codd FBCS FIoD reflects.

Today, the term AI is widely misused and misunderstood, and its true significance is the subject of much conjecture and speculation. However, ‘established’ intelligence, which flows naturally from existing systems and process modelling, continues to improve product effectiveness, usually under the AI label.

When AI first came to the fore, there was a huge focus on creative areas such as writing and composition, where there are long-established rules of association and recognisable individual styles, and where ‘established’ intelligence can be identified, defined and then applied creatively. Huge, little-used databases — many of which were simply accumulated reservoirs of information and debate, created for a variety of purposes and by a variety of authors — were suddenly revitalised as important stimuli for creative thinking, and new search engines were developed under the ‘generative AI’ label. Most of these accumulations of data were not built for today’s claimed AI uses, and the results they yield therefore need to be treated with great caution. However, certain non-creative specialist areas, such as the health sector, also hold huge accumulations of useful data which can yield considerable, genuine added value.

A little bit of history

Creating a computer-based AI capability has been a long-term objective within the computer industry since papers on ‘thinking machines’ were published by Alan Turing and Viscount Nuffield in the 1940s and ’50s. Even then, research projections promised huge future leaps in computer power, sufficient to enable the achievement of that challenging objective. In today’s increasingly sophisticated world, such computer power has evolved naturally to meet ever more complex needs in areas such as space exploration and research, meteorological forecasting and defence.

But what is ‘artificial’ intelligence?

The first question to consider is whether the results returned by a prompt could have been achieved through human analysis — or ‘established’ intelligence — or whether they can only be the result of a highly sophisticated process which dynamically establishes complex interrelationships using methods and speeds not humanly possible, and then recommends actions and options accordingly — or ‘artificial’ intelligence.

The subject of AI needs to be considered from several different angles, starting with the analytical perspective. One definition of intelligence in the Oxford English Dictionary is ‘quickness of understanding’. This obviously depends on our familiarity with the key elements of a subject, but it can also be severely inhibited by our brain’s limited ability to reach conclusions quickly in complex, high-volume and volatile scenarios full of conflicting information.

From this analytical ‘speed of processing’ perspective, the workings of computer-based ‘artificial’ intelligence are broadly similar to those of human intelligence, except that a computer can magnify analytical capacity in many dimensions at once, thanks to massive memory and processing capabilities. This speed and sophistication, alongside the inclusion of a huge range of interacting decision-making factors, produces an ‘artificial’ intelligence result that derives from an immediate understanding of all the options and their range of interactive outcomes.

Broadening these intelligence ‘perspectives’ into areas such as forming emotional and spiritual judgments introduces further, highly complex dimensions of the AI challenge, which are already exercising many minds. This does not, however, diminish the huge impact currently being made by addressing the analytical perspective of the AI spectrum alone.

An AI test

The next time you see a product with an AI label, ask yourself the following question: does it use established information, derived from lessons learned through ongoing experience, or does it use information which would have been impossible to derive by human means without the massive calculation and memory capacity only available via a computer? If the latter, that is truly ‘artificial’ intelligence.

Where to now for AI?

Massive AI capabilities are already well established in space research, meteorology, the defence sector and many other activities where the move towards AI is driven by the increasing complexity and challenge of today’s decision making. The need for AI tools in such organisations will continue to grow, while the power that derives from exploiting such a sophisticated tool will be recognised ever more widely across business, commerce and government. The targets for attention are wherever decision making falls short because decisions cannot be made promptly in critical areas involving significant conflicts of interest. This will build on the efficiency gains brought about by the digital systems revolution already in train.

Have we learned any lessons from our IT exploitation thus far?

Firstly, many lessons were learned from notable successes and painful failures on our journey to digitisation, and new standards of technical and professional behaviour and practice have evolved as a result. However, insufficient attention was given to identifying and combatting potential criminal and antisocial threats to our wellbeing and that of our children. This omission is proving extremely costly and damaging, and AI has far greater potential to inflict major harm and misery, which must be avoided at all costs.

Secondly, an ideal route to a digital world needs to merge best practices from two very different cultures: that of the IT change professionals, with their enthusiasm and zeal for a new world order, and the well-established, proven traditional working culture based on generations of experience. In the 1980s I was invited by the Butler-Cox Foundation to produce a research report for members on the damaging culture gap that then existed in most organisations between IT professionals and their business users. That culture gap severely inhibited a deeper understanding of each side’s priorities and pressures, thereby harming the quality of the end product and increasing development costs. This was a multi-level issue in most organisations, from the board downwards, but little was done to address it effectively.

How do we respond to that record?

There are already some official initiatives in place to deal with emerging AI security challenges. These could be expanded to include strategic user forums which encourage best practice but are also sensitive to potential malpractice. Variations of existing forums such as the BCS panel of practitioners could have an important part to play. Such bodies do however need to be actively coordinated by a monitoring authority with objectives which are not driven by the IT industry.

Secondly, a culture gap between the IT user community and the industry drivers of progress, which now include venture capitalists with their own agendas, still exists and impedes common acceptance of the best way forward. One cannot effectively manage any situation without properly understanding all the driving forces and the potential for good and evil along the way. It could therefore be argued that a widespread programme to raise sensitivity to potential threats, as well as to identify targeted opportunities, would provide a sounder foundation for moving forward into a new world of AI. The time may be right for organisations such as BCS and the BBC to explore combining resources to produce imaginative and authoritative documentaries which captivate the public mind with the wonderful achievements to be celebrated, but also the disastrous consequences to be avoided. A means needs to be in place to prepare for what is to come, so that society can be sensibly prepared to encourage the good and discourage the bad.

We need a period of calm and well informed direction, rather than hype and poorly informed promotion.