Eerke Boiten, Professor of Cyber Security at De Montfort University Leicester, explains his belief that current AI should not be used for serious applications.

From the perspective of software engineering, current AI systems are unmanageable, and as a consequence their use in serious contexts is irresponsible. For foundational reasons (rather than any temporary technology deficit), the tools we have to manage complexity and scale are just not applicable.

By ‘software engineering’, I mean developing software to align with the principle that impactful software systems need to be trustworthy, which implies their development needs to be managed, transparent and accountable. I don’t suggest any particular methodologies or tools (before you know it, I’d have someone explaining why ‘waterfall’ is wrong!) — but there are some principles which I believe to be both universal and unsatisfiable with current AI systems.

When I last gave talks about AI ethics, around 2018, my sense was that AI development was taking place alongside the abandonment of responsibility in two dimensions. Firstly, following on from what was already happening in ‘big data’, the world stopped caring about where AI got its data, which fitted in nicely with ‘surveillance capitalism’. And secondly, contrary to what professional organisations like BCS and ACM had been preaching for years, the outcomes of AI algorithms were no longer viewed as the responsibility of their designers, or of anybody, really.

‘Explainable AI’ and some ideas about mitigating bias in AI were developed in response to this, and for a while this looked promising. Unfortunately, the data responsibility issue has not gone away, and the major developments in AI since then have only made responsible engineering more difficult.

How neural networks work

When I say ‘current AI systems’, I mean systems based on large neural networks, including most generative AI, large language models (LLMs) like ChatGPT, most of what DeepMind and OpenAI are producing and developing, and so on. An extremely optimistic view of these is what I would call ‘LLM-functionalism’: the idea that feeding a natural language description of the required functionality to an LLM, possibly with some prompt engineering, establishes a meaningful implementation of that functionality.

The neural networks underlying these systems have millions of ‘nodes’ or ‘neurons’. Each has one output and multiple inputs, which either originate externally to the entire network or are taken from other nodes’ outputs. A node’s output is determined by the inputs, the weights put on each input, and an ‘activation function’ that decides how the weighted inputs translate into an output. The connection structure is fixed, using connected ‘layers’ or other structures such as recurrent networks or transformers, as is the activation function. The fixed structure is relevant to what type of problems the network can deal with, but the network’s functionality is almost entirely introduced by ‘training’, which means setting and modifying the weights of each input until the outputs achieve an objective satisfactorily on a set of training data.
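
To make this concrete, here is a toy sketch of a single node in Python; the inputs, weights and the choice of activation function are purely illustrative and not taken from any real system.

```python
# Toy illustration of a single 'neuron': weighted inputs passed through
# an activation function. All names and numbers here are illustrative.
import math

def neuron(inputs, weights, bias):
    # Weighted sum of the inputs, plus a bias term.
    weighted_sum = sum(x * w for x, w in zip(inputs, weights)) + bias
    # A common activation function (the logistic sigmoid) squashes the
    # weighted sum into the range (0, 1).
    return 1.0 / (1.0 + math.exp(-weighted_sum))

# The structure (how many inputs, which activation) is fixed in advance;
# 'training' adjusts the weights and bias until outputs on training data
# satisfy some objective.
print(neuron([0.5, -1.2, 3.0], weights=[0.8, 0.1, -0.4], bias=0.2))
```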

The training of the huge networks that make up current AI systems has typically taken an astronomical amount of compute, measurable in the millions of dollars or kWh, and will necessarily have been mostly unsupervised or self-supervised. Put bluntly, it will have required no human input – though there may have been some human tuning afterwards (such as reinforcement learning from human feedback (RLHF) or ‘guard rails’), or when the system runs (such as context and prompt engineering).

Emergence and compositionality

Many of these neural network systems are stochastic, meaning that providing the same input will not always lead to the same output. The behaviour of such AI systems is ‘emergent’: although the behaviour of each neuron is given by a precise mathematical formula, neither these formulas nor the way the nodes are connected is of much help in explaining the network’s overall behaviour.
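
As a minimal sketch of what ‘stochastic’ means in practice, assuming (as in many generative systems) that outputs are sampled from a probability distribution rather than picked deterministically, the same input can yield different outputs; the tokens and probabilities below are invented.

```python
# Toy illustration of why the same input need not give the same output:
# many generative systems sample from a probability distribution over
# possible next tokens rather than always picking the most likely one.
import random

next_token_probs = {"cat": 0.6, "dog": 0.3, "ferret": 0.1}

def sample_next_token():
    tokens = list(next_token_probs)
    weights = list(next_token_probs.values())
    return random.choices(tokens, weights=weights, k=1)[0]

# Running this twice on the 'same input' can produce different outputs.
print(sample_next_token())
print(sample_next_token())
```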

My first 20 years of research were in formal methods, where mathematics and logic are used to ensure systems operate according to precise formal specifications, or at least to support verification of implemented systems. Software engineering, and formal methods in particular, has been far less successful in managing emergent behaviour, or even those aspects of ‘traditional’ systems that have emergent tendencies, such as resource usage or security. This is for foundational reasons rather than for a lack of scientific effort.

A central property in formal software engineering is compositionality: the idea that composite systems can be understood in terms of the meanings of their parts and the nature of the composition, rather than by having to look inside the parts themselves.

This idea lies at the heart of piecewise development: parts can be engineered (and verified) separately, and hence in parallel, and reused as modules, libraries and the like in a ‘black box’ way, with re-users able to rely on any verification outcomes for a component while needing to know only its interface and its behaviour at an abstract level. Reuse of components not only provides increased confidence through multiple and diverse use, but also saves costs.
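
As a toy sketch of compositional, black-box reuse, using deliberately simple invented components: each part carries a small contract that can be verified on its own, and the composite can be understood from those contracts alone.

```python
# Toy illustration of compositional reasoning: each component has a small
# contract that can be verified on its own, and the composite can be
# understood from those contracts alone, without looking inside the parts.
# The components and contracts here are invented for illustration.

def parse_amount(text: str) -> int:
    """Contract: returns a non-negative number of pence for a string like '3.50'."""
    pounds, _, pence = text.partition(".")
    return int(pounds) * 100 + int(pence or 0)

def apply_discount(pence: int, percent: int) -> int:
    """Contract: given non-negative pence, returns a non-negative discounted amount."""
    return pence * (100 - percent) // 100

# The composition can be reasoned about using only the two contracts:
# a non-negative parse result stays non-negative after the discount.
def discounted_price(text: str, percent: int) -> int:
    return apply_discount(parse_amount(text), percent)

assert discounted_price("3.50", 10) == 315
```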

Issues arising from emergent over compositional

Unfortunately, my informal definitions of ‘emergent’ and ‘compositionality’ are almost exact opposites, and this raises several issues:

  • Current AI systems have no internal structure that relates meaningfully to their functionality. They cannot be developed, or reused, as components. There can be no separation of concerns or piecewise development. A related issue is that most current AI systems do not create explicit models of knowledge — in fact, many of these systems developed from techniques in image analysis, where humans have been notably unable to create knowledge models for computers to use, and all learning is by example (‘I know it when I see it’). This has multiple consequences for development and verification.
  • There are no intermediate models at different levels of abstraction to describe the system. There is no possibility for stepwise development — using either informal or formal methods.
  • Systems are not explainable, as they have no model of knowledge and no representation of any ‘reasoning’.
  • Even a ‘human in the loop’ adds little explainability, as they can only explain system outcomes (and learn from them) by doing their own reasoning on the input data from scratch.

Verification

Verification inherits a set of issues from the above. The only verification possible is of the system in its entirety; if there are no handles for generating confidence in the system during its development, we have to put all our eggs in the basket of post-hoc verification. Unfortunately, that too is severely hampered:

  • Current AI systems have input and state spaces too large for exhaustive testing.
  • A correct output in a test of a stochastic system only shows that the system is capable of responding correctly to that input, not that it will always do so, or do so frequently enough (the sketch at the end of this section illustrates how weak such evidence is).
  • Lacking components, current AI systems do not allow verification by parts (unit testing, integration testing, etc).
  • As the entire system is involved in every computation, there are no meaningful notions of coverage to gain confidence from non-exhaustive whole system testing.

So, whole system testing is the only verification tool available, but it can never represent more than a drop in the ocean.
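
To illustrate how weak that evidence is, here is a small sketch of testing a single input repeatedly; the stand-in system, its failure rate and the input are invented, and the ‘rule of three’ bound (roughly 3/n as a 95% upper bound on the failure rate after n clean trials) is a general statistical rule of thumb rather than anything specific to AI.

```python
# Toy illustration of how little a passing test tells us about a stochastic
# system. We repeatedly test one input and use the 'rule of three': if n
# independent trials all pass, a rough 95% upper bound on the failure rate
# for that input is 3/n. The failure probability below is invented.
import random

def stochastic_system(prompt: str) -> str:
    # Stand-in for an AI system that answers correctly most of the time.
    return "correct" if random.random() > 0.01 else "wrong"

n = 100
passes = sum(stochastic_system("some input") == "correct" for _ in range(n))
if passes == n:
    print(f"All {n} trials passed; the failure rate for THIS input is still "
          f"only bounded by roughly {3 / n:.0%} at 95% confidence.")
else:
    print(f"{n - passes} of {n} trials failed.")

# And this says nothing about the astronomically many other possible inputs.
```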

Faults

There are serious additional issues around faults and fixes. Faults may arise from unreliable training data, which certainly applies to many generative AI systems, but also from training data that is sparse in parts of the input domain.

  • Current AI systems have faults, but even their error behaviour is likely emergent, and certainly hard to predict or eradicate.
  • Given the relative scale of unsupervised training versus human error correction and feedback learning, arguing from scale alone there can never be confidence in correctness.
  • Fixes of errors through retraining are not localised and regression testing is not possible, so newly introduced errors are likely, but not easily discoverable (for contrast, the sketch below shows what regression testing looks like for conventional software).
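
For contrast, here is a minimal sketch, with an invented function and invented test cases, of what regression testing looks like for conventional, deterministic code: a fix is localised, and the stored suite cheaply reconfirms earlier behaviour. Nothing comparable is available after retraining a network.

```python
# Minimal sketch of regression testing for conventional, deterministic
# software. After a localised fix, the stored suite is rerun to confirm
# earlier behaviour is preserved. The function and cases are invented.

def normalise_postcode(text: str) -> str:
    # Deterministic component: the same input always gives the same output.
    return text.replace(" ", "").strip().upper()

REGRESSION_SUITE = [
    ("le1 9bh", "LE19BH"),
    (" sw1a 1aa ", "SW1A1AA"),
]

def run_regression_suite() -> None:
    for given, expected in REGRESSION_SUITE:
        actual = normalise_postcode(given)
        assert actual == expected, f"regression: {given!r} -> {actual!r}"

run_regression_suite()
print("all regression tests passed")
```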

Conclusions

In my mind, all this puts even state-of-the-art current AI systems in a position where professional responsibility dictates avoiding them in any serious application. When all its techniques are based on testing, AI safety is an intellectually dishonest enterprise.

So, is there hope? I believe, though I would be happy to be proved wrong on this, that current generative AI systems represent a dead end, where exponential increases in training data and effort will give us modest increases in impressive plausibility but no foundational increase in reliability. I would love to see compositional approaches to neural networks, hard as that appears to be.

However, hybrids between symbolic and intuition-based AI should be possible: systems that generate some explicit knowledge models or confidence levels, or that are coupled with more traditional data retrieval or theorem proving. Current AI systems also have a role to play as components of larger systems, in limited scopes where their potentially erroneous outputs can be reliably detected and managed, or in contexts such as weather prediction, where we have always expected probabilistic predictions rather than certainty.