He believes his work may revolutionise the study of neuroscience and change the direction of AI research. Justin Richards, BCS, caught up with him to find out more.
Please can you provide a brief overview of what you've been doing since your college years to the present day?
Well, since I'm old that wouldn't be very brief at all, so I'll summarise where possible! The fact is that since 1968 my passion has been understanding how cognition works. It's something I got into as a mathematics student, so I've always leaned towards understanding these things from underlying mathematical principles as implemented by neural tissue.
Now, as you know, neurons are very difficult to study because the brain is a three-dimensional structure and you can't even get at a neuron without destroying the tissue between you and it. Another problem is that observing the behaviour of neurons during normal activity is extraordinarily difficult.
Even though centuries of study have gone on, there's still very limited capability to do that. And if you can do it, you can only do it for brief periods, under extraordinarily tense conditions, and you're never quite sure what you're seeing. So this is a place where theory plays an important role: no definitive theory exists, and without a theory to tie it all together it's not easy to make progress. Basically, I've been focused on building a theory, of course informed by the large number of microscopic facts that exist.
So we know a great deal about the brain, and being able to appreciate and understand what is known, that's a process that takes decades in itself. And then being able to use that understanding to craft theories is a further difficulty. Anyway, I've been through all of that and have been working on this. A few years ago some of the pieces began to come together and they all gelled and the result is a comprehensive theory of cognition, which is the first ever.
Can you explain your theory?
I focus more on the primate brain, although I'm sure that this theory is applicable across the board, to include octopuses and other invertebrates, or whatever. The theory rests on four basic principles, and these principles essentially explain how the cognitive brain works.
The first part is that we have to have some way of representing the world in the brain; the theory explains exactly how that works and how those representations are used to carry out cognition. It also explains how knowledge arises and what, specifically, knowledge is.
So how is it that you know that Winston Churchill was the Prime Minister of Great Britain? You don't know that as some sort of rule or entry in some sort of database; those don't exist in the brain. So in what form is that knowledge stored? Believe it or not, all of these questions have now been answered. Is this theory proven? Well, you know better than that: scientific theories can never be proven.
What they can do is withstand attempts to disprove them. That's called falsification. In other words, you need a theory that is explicit enough, and then you carry out experiments to see whether it passes those tests or is destroyed by them. And that process hasn't begun yet. This theory is brand new; it's just now being promulgated, and so that's where we stand.
So why are we talking, then? You probably realise that there are scientists all over the world with all sorts of theories, but none of them are on the phone right now! What makes this theory so unusual? Well, if you think about it, cognitive information processing is something that actually happens. Both you and I are exhibiting it right now.
The fact is that if you have a detailed, comprehensive theory of how that works, then you should be able to take that theory and apply it to information outside of the brain using a computer simulation. That's a demand you would naturally make of any such theory. If it purports to be complete and detailed, then let's try it out. Let's simulate it on a computer and see what happens. That's why we're talking. We did that - we've done that many times - and the results appear to exhibit real intelligence.
What sort of experimentation have you done to assess your theory?
Well, let me give you the premier experiment to date. We have more in progress, but this one's completed, and it gives you a pretty clear picture of what I'm talking about. What we did was an experiment in which we took many hundreds of tiny bits of brain tissue - tissue that is a model of the cerebral cortex.
We made a computer simulation of this bit of tissue and then put a whole bunch of these pieces, hundreds of them, together, operating in parallel. Now, these bits of brain tissue are really incapable of any kind of processing on their own - it's not like a computer, you can't just put software into it - it's brain tissue, it's meat! So how in the world can you get something like this to do anything?
The first thing we did was to apply stimuli to this brain tissue. Now, the brain tissue isn't just randomly organised and thrown together - that would be stupid. Like the brain itself, this tissue has a specific structure. It's divided into layers, and each of the layers has its own function and so forth. There are axons which connect the different bits of tissue, and so on.
What we did was expose this simulated piece of hardware, this simulated brain tissue, to three sentences in a row from within one paragraph of a news story. So you take a newspaper story, you take three sentences in a row from one paragraph, and you put the first word of the first sentence into one little bit of brain tissue, which has the capacity to represent the words and phrases in that paragraph. In other words, it represents words and phrases.
Now you might wonder: what if it were a Chinese newspaper and Chinese text that you were using? Well, then it would have to have a pre-existing ability to represent those characters, but essentially it would work the same way.
Then we go in and put in another three sentences in a row, from some other newspaper, or from the same story but later on in the text. And we do that over and over again. What happens is the system sees the same representations for the same words and phrases in the same positions occurring over and over again. Let's say, for example, that the first sentence was 'The train went, blah, blah, blah'. It sees 'The' with a capital 'T' followed by 'train' - 'The train', 'The train', 'The train' - over and over again.
The neurons, following a principle developed 57 years ago by Donald Hebb, a Canadian neuroscientist, will form strengthened links; their synapses will strengthen. Now this, we know, is exactly what happens in the brain. This is absolutely established.
The synaptic strengthening is almost certainly the mechanism of learning. But this is very explicit: it says that if I see these two words together, over and over and over again, then a link will form between their representations - between the neuronal representations for those words. We put in tens of millions of these sentence triples.
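In rough terms, the learning phase he describes can be sketched in a few lines of Python. This is a toy illustration only - the module layout, the tokenisation and the plain co-occurrence counts are assumptions made for clarity here, not Fair Isaac's actual implementation:

```python
from collections import defaultdict
from itertools import combinations

class HebbianLinkLearner:
    """Toy illustration of the learning phase described above: each
    (sentence, word-position) pair plays the role of one module of
    simulated tissue, and words are the symbols it represents."""

    def __init__(self):
        # link_strength[symbol_a][symbol_b] = how often the two symbols
        # have been active together (the 'strengthened synapse').
        self.link_strength = defaultdict(lambda: defaultdict(int))

    def expose(self, three_sentences):
        """three_sentences: three consecutive sentences from one
        paragraph, each given as a list of word tokens."""
        active = [((s, p), word)
                  for s, tokens in enumerate(three_sentences)
                  for p, word in enumerate(tokens)]
        # Hebbian step: every pair of co-active symbols strengthens its link.
        for a, b in combinations(active, 2):
            self.link_strength[a][b] += 1
            self.link_strength[b][a] += 1
```

Exposing such a learner to millions of sentence triples leaves behind nothing but a table of strengthened links - no grammar and no rules.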
That's the learning phase; it's just like a child being exposed to language. Pre-schoolers are not taught anything, they're just exposed to things. Well, that's all that happens here. Now, following that exposure, we take an entirely fresh news story, like from today's newspaper, we pick two sentences in a row from within one paragraph, and we put them into this now-trained system.
We then cause the system to decide on a set of words for a third sentence. Well, you're probably wondering: what sort of grammar is it using, what sort of software, what sort of algorithm? There is no grammar, there's no software, there are no rules - there are just the learned connections formed during those previous exposures.
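Continuing the toy learner sketched above (itself only an illustration), the 'no grammar, no rules' generation step he describes might look roughly like this - a winner-take-all pick at each position, driven purely by learned link strengths; the function and parameter names are hypothetical:

```python
from collections import defaultdict

def confabulate_third_sentence(learner, first_two_sentences, max_words=10):
    """Toy generation step: for each position of a hypothetical third
    sentence, pick the word whose learned links to the currently active
    symbols are collectively strongest (winner-take-all convergence)."""
    # Symbols 'lit up' by the two given sentences (sentence indices 0 and 1).
    active = [((s, p), word)
              for s, tokens in enumerate(first_two_sentences)
              for p, word in enumerate(tokens)]

    third_sentence = []
    for position in range(max_words):
        votes = defaultdict(int)
        for symbol in active:
            for (module, word), strength in learner.link_strength[symbol].items():
                if module == (2, position):      # a module of the third sentence
                    votes[word] += strength
        if not votes:
            break                                # no learned support remains
        winner = max(votes, key=votes.get)       # the word the module converges on
        third_sentence.append(winner)
        active.append(((2, position), winner))   # chosen words add to the context
    return third_sentence
```

Everything the sketch produces is determined by the learned links alone, which is the point being made here.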
So what does it do?
Well, first of all, astoundingly, it crafts a perfect English sentence. You're beginning to see why this phone call is taking place? Secondly, not only does it craft a perfect English sentence, but when you read that sentence you are suddenly amazed that it actually makes sense - that it relates to the previous two sentences that were put in.
Which, by the way, are totally fresh - it's never seen that phrasing before, it's never seen those sentences before, and the topics being discussed may be fresh and novel. And yet it responds with what we call a plausible third sentence - a sentence that most people would agree, 'yes, that could well have been the next sentence in that story'.
Now, the first question to ask is: is that third sentence factually correct? Well no, it's just making it up! Now, in psychiatry, when patients lose some of their brain areas they can very often still function, but when they talk, for example, or when they do things, they may actually end up producing factually incorrect utterances. And that is called confabulation.
In my belief, that's due not to a problem with the information processing but to a lack of sufficient context. And that's why I call it Confabulation Theory. What this theory says is that we always confabulate; it's just that usually we're supplied with enough context that the utterances we produce are acceptable. Not everything we say is factually correct, but it is acceptable. The reality is that this thing can now take two sentences from a news story - you put them in and it crafts a third sentence.
How does this relate to AI?
First of all, if we look at this system, it has no algorithm. There's no software, there's no grammar, there are no rules. How can this thing create a sentence in the absence of all of those absolutely necessary things that computers must have? Well, obviously this doesn't function like a computer. This is an entirely alien type of information processing. This is starkly alien.
This is what we've accomplished so far. The combination of a comprehensive theory and this application of the theory to real-world data, with these results, suggests that this may be what everyone has been waiting for all these decades. And that's it. Now, what is it that we're doing? As you can imagine, there are innumerable different reactions that may occur as time progresses, as this becomes known and as the process of promulgating this new discovery proceeds.
What kind of reaction are you anticipating from the scientific community?
One reaction will be from the world of neuroscience. As a neuroscientist myself, I can tell you that what we've done over, say, the past 50 years at least is this: every year we go to Congress and we say we're very thankful for your support, the brain is one of the most important scientific mysteries, but we need another $6 billion this year and it's going to take us another 30 years to learn anything significant, and your patience is most appreciated, so goodbye.
Then they pony up the six billion - and, by the way, that is the current annual level of neuroscience research funding in the United States, which is about the same as the gross national product of Mongolia, so it's a big industry. So that has become completely enshrined in tradition, and if someone makes a discovery like this it could change everything.
For one thing, the people who hand out the money may now be induced into doing some different things, which could be very disruptive. So the neuroscience community is going to go through the standard five-step grief resolution process. The first of those, of course, is denial, so don't hold your breath waiting for the neuroscience journals to be screaming out 'success - we've done it'! That's not going to happen.
The second thing is that in terms of the applications, in terms of the human species, in terms of getting benefit from this, we can't sit around on our hands; we have to move forward. This is not something the world wants to wait for. There are still a lot of impoverished people on this planet, and if we had 100 billion machines out there working every day, toiling away, as you well appreciate, that would be value added to the current wealth of the world. And that would make everyone's life better, so there isn't any moral argument for delay - we must move forward.
Now, another fact is that there are many applications in industry which are probably now possible. The world of technology is driven by application, and if you have a high-value application it will be tackled, and if it can be solved it will go to implementation quickly. People are ready for this; people are ready to move out. So I think we're going to see a very high level of interest and activity within the technological community, and particularly the computer industry.
How do you see your work integrating into the AI industry over the next five to ten years?
I don't think the AI industry will have any interest because they're already totally committed to whatever they are doing - rules, ontologies, whatever... They're not going to be interested in scrapping all that and starting all over. However, it's ultimately the young people who grab on to the new trends and make them their own and succeed with them. And that's what will happen here.
You might remember a science fiction author named Isaac Asimov; he postulated that to build an artificial brain you would need what he called a positronic brain, which is a very exotic and far-future kind of development. However, it turns out that what it takes to implement a brain is a PC. We can do this with more or less the same kind of computers that we use today.
If you want to have a conversational customer service system - let's say you're a bank and you want millions of new customers to use your bank without building branches - what you do is install a conversational system. People take one trip to a bank to enrol in the system and become a customer, and from then on they do everything on their cell phone. They talk to the machine, back and forth; the machine understands what they are saying and they understand what the machine is saying. They can do all of their banking, pay all of their bills and look at all of their financial issues through this portal. And what is the machine on the other end? It's just a server in a server park, tied up for the moment that it's speaking to this one customer; the moment the customer hangs up, it takes the next call.
With today's technology we can create a new world of artificial intelligence immediately, without delay. As a matter of fact, Fair Isaac is discussing what we call pilot installations of a system just like that with some of our clients. It's premature to identify them - we certainly don't have any project far enough along that a client would be comfortable with that - but it's real, it's happening. And so this is the state of the world.
What sort of equipment do you use at the moment - a normal PC?
Right, yes. We use Dell computers. They work just fine. If you think about it, meat is a very sub-ideal kind of computing hardware. It has so many limitations - for one thing it has to have food all the time, it's constantly thirsty, it needs all these vitamins, minerals and other nutrients. And when all is said and done, the speed of a single neuron is very slow.
Obviously silicon is a far superior substrate for information processing. The problem has been we haven't known what to do with it. By the way, if these brain simulations that we’re using right now are the best that anyone can do I'll be shocked.
The moment this catches on there'll be millions of people diving into this, looking at every single aspect of it, including the fundamentals of how these things are implemented, and there will be enormous improvements. So it's all uphill and brighter skies from here on!
How optimistic are you that your system will be used?
I'm very optimistic - not because I've always been an optimist in this sense, but because we have the fuel for optimism. The evidence is presented in this book, which has now been published by Springer-Verlag (it's called Confabulation Theory, for obvious reasons), and the book sets out a theory of how cognition works, which hasn't been done before. Any theory that's really comprehensive and detailed - well, you can try it out, and we did! What did we end up building? An artificial intelligence, that's what.
That's the claim; this stupid little system that takes two newspaper sentences in a row and then creates a third is clearly intelligent, in the strongest sense of the word. It understands a vast array of the objects of the world, of the people of the world, and their relationships and then it can utilise that knowledge to craft new creations that exhibit it.
That's not a Turing test; it's a Hecht-Nielsen test. When people can actually see this and see what it can do, it's shocking and astounding, and it will prompt them to ask how they can build their own artificial intelligence, or how they can solve this or that problem. And that's going to trigger an enormous avalanche of interest and activity.
You say there isn't any software involved but there must be some software in your computer?
There is. There is software that is simulating the brain tissue.
And you and your colleagues specifically wrote this for this particular task?
Yes. We have an actual system - an API known as the brain operating system - and we're up to version 2.0 now. This implements all these different bits of tissue. So we've put in a design for our brain. How would we design it, knowing that an English sentence can have many, many words?
We put in many tens of these little modular units of tissue, one for each word that we're going to enter. And we have three sets of those, one for each sentence entered: one for the first sentence, one for the second and one for the third. Then we basically just expose it to the sentences by entering each word and lighting up the neurons which represent that word. And then we let the neurons just play with each other. That's it!
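A rough picture of that layout, under the same illustrative assumptions as the earlier sketches (the actual 'brain operating system' is Fair Isaac's own, unpublished software):

```python
# Illustrative layout only: three banks of word-position modules,
# one bank per sentence, with (say) 20 modules per sentence.
SENTENCES = 3
MODULES_PER_SENTENCE = 20      # 'many tens' of modules, one per word position

# Each module keeps the set of word symbols it has represented; the
# strengthened links between symbols live in the learner sketched earlier.
modules = {
    (sentence, position): set()
    for sentence in range(SENTENCES)
    for position in range(MODULES_PER_SENTENCE)
}

def light_up(sentence_index, tokens):
    """'Enter' a sentence by activating the symbol for each word in the
    corresponding word-position module, as described above."""
    for position, word in enumerate(tokens[:MODULES_PER_SENTENCE]):
        modules[(sentence_index, position)].add(word)
```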
And that's how a normal brain works?
Yes that's right, exactly.
It sounds to me a little like a very complicated version of a piece of software that was demonstrated to me a while ago - a program designed to detect students cheating. On a very simple level it recognised patterns of words, three in a row. If it detected, say, 50 very similar patterns within a couple of paragraphs of one manuscript, it would flag it. That's the closest thing I've heard of to what you're talking about here.
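For comparison, the kind of matcher described here - flagging a submission once enough three-word patterns recur - can be sketched in a few lines; this is an illustrative guess at how such a tool might work, not the actual program that was demonstrated:

```python
from collections import Counter

def shared_trigram_count(text_a, text_b):
    """Count the three-word sequences two passages have in common - the
    sort of overlap a simple cheating detector might flag once it passes
    a threshold such as 50 (illustrative only)."""
    def trigrams(text):
        words = text.lower().split()
        return Counter(tuple(words[i:i + 3]) for i in range(len(words) - 2))
    overlap = trigrams(text_a) & trigrams(text_b)   # multiset intersection
    return sum(overlap.values())

# e.g. flag the pair if shared_trigram_count(essay_a, essay_b) >= 50
```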
The difference here is that this thing is actually crafting English sentences on its own. And there is no software - it's just neurons talking back and forth and converging on a particular word in the first position, a particular word in the second position, and so on. Oh look, that first word is capitalised - do I need to capitalise? By the way, it can start a sentence with a non-capitalised letter or a number, but it never does. Why? We don't know, and I'm pretty sure it's not knowable.
There was a very famous logician, mathematician and philosopher named Kurt Gödel who tackled the question of whether or not it's possible to create a complete, closed axiomatic system in mathematics - a complete mathematical universe.
The answer is no, but I think there are properties of thought which are like that - which is what we're going to find out. And by the way, if you were to go in and put just the tiniest bit of noise into these neurons, you wouldn't get the same answer.
You might get a totally different answer, but that answer would still be a good answer. So those sorts of properties are so utterly different from what we are familiar with, that I think the fundamental study of these systems is going to be interesting as well.
Looking further down the line can you see this 'system' being implemented in robots and if so how might it be used to interact with humans? I presume that sometime in the future we'll have robots which will be able to integrate, even in a very simple manner with humans; for example, in a shop or somewhere in the retail sector?
Oh yeah. There are two issues. One is, as you know, a concept that Isaac Asimov also launched called the three laws of robotics. Well, even now it's clear to me that those will never be achieved. Robots will never have the ability to follow such laws. Robots will be inherently dangerous.
If you have a whole robot and it's strong enough to lift furniture out of the way, or whatever, it is capable of killing you. The advent of the home cleaning robot, in my view, is quite a way off. A robot that can walk around, pick up the dishes, feed the cat, all of that, that's going to be infinitely too dangerous for a long time. That's going to be a very difficult development.
In industry it will be the opposite - people will create robots almost immediately for doing industrial tasks, and that will be safer because industry can apply safeguards. You can put in barriers, restraints, disconnects and so forth. So industry yes, home no. But what will happen is customer service, which is just a passive box that talks back and forth with you. That's going to happen instantly. We're actually working on that now.
In case you're wondering what we’re going to do, we were fixated on this application of conversational customer service, because that's what our clients want. We already have clients who are very interested in buying this. So that's naturally a place that we would go.
Your clients, are they a mix of industry and academic?
No - strictly industry only. Generally, these are industries such as banking, insurance, healthcare, retail sales, and so on.
The reality in this whole arena is that we have a development which is quite different, so we are going to need a lot of people working out the details in this area. For the first time in a long time, we're going to see many, many young people suddenly seeing this as a career path. It's going to be infinitely fascinating, infinitely remunerative, and the opportunities are boundless. The computer industry has certainly had a lot of pull, but nothing like this.
Particularly in this country the IT industry is looked upon as being quite geeky. There are certain areas which are looked upon as being more cool - the games industry, for example, and also artificial intelligence. How do you think the industry could improve its image or do you think by emphasising things like artificial intelligence you will pull more students into the field?
When you talk about the computer world, one of the problems is the 'geek factor', as you say. To become a computer programmer you have to undergo years of depersonalisation, sitting in front of a screen typing in software code and learning that craft. There's a price that comes with that - you become geekified, or whatever! Everyone knows that, and many don't like it.
Now what's this going to be like? Well, if you think about our little sentence generator, what is it really doing? It's imitating a human master. You have master writers who have created these newspaper stories, and so what it's doing is acquiring a skill by sampling and observing the performances of highly skilled practitioners.
Now, that kind of system is going to be much more interesting for most young people. So what we'll have, for example, in five years' time - say we have a client who wants to install a customer service system - is that they're going to have to literally decide on a perfect role model.
In other words, they have to come up with a person and a way of doing the job and describe that - and that sounds like the theatre, doesn't it? Then they'll have to hire a dramaturge (or we will) who will standardise this performance across many, many different actors. And then those actors will actually go online and carry out the function manually.
They will actually talk to the customer and implement the various functions the customer asks of them. The machine will then learn to actually carry out that customer service function by simply emulating those performances.
So many people are drawn towards entertainment and the theatre because it's a performance craft - it's not like computer programming which is this extremely intellectual and arcane thing.
Basically, programmers have to be phenomenally engaged with all the complexity of what they're doing, and a lot of people don't have that in them. What they want is to express an artistic vision in a satisfying and holistically consistent manner. And that kind of person is going to find a home in this industry.
Looking at your own job what would you say is the hardest part of it and the most rewarding?
Well, I've had the experience of going from being a young person who started out as a mathematician - and mathematicians are way beyond the nerd stage, they're into some kind of super-nerdhood! - to where I am now. I've spent my entire life pondering this question and working on the answer, and I've found the answer.
When they speak of self-actualisation, you can't go beyond that! So my life is perfect. I've achieved everything that I ever dreamed of achieving and more. So this is all gravy from here on in - this is like a reward as far as I'm concerned. The opportunity to live beyond this discovery is pure gold.
So, looking back, there isn't anything you'd do differently given the chance?
No, because apparently what I did worked! You can't ask for more than that. I'm sure if I went back and changed something it would be a disaster. I want to leave my path the same; by whatever way, by whatever means, that particular historical thread led to the right place.
The BCS recently helped put together the Royal Society Lecture, which was given by Rodney Brooks. His talk was on robotics, and he pre-empted a question which apparently always comes up, asking whether he thought robots would eventually take over from humans. He didn't think so; he just thought that humans would become more robotic and robots more human as time goes by. What are your thoughts on that?
I think basically that he's correct. First of all I can't imagine our successors turning over the world to the machines they've created. That doesn't make sense. That would be like someone saying 'let's explode all the atomic bombs just for fun'! That's not going to happen.
I do believe though that, as he said, we will become more robot-like. Let me give you an example. It seems very likely that within thirty or forty years a very tiny device, let's say approximately the size of a postage stamp, will become available. And any six year old whose parents wish for them to have it can safely have it implanted in their brain in a five minute office procedure, done by a little robot.
What is this little postage-stamp gizmo? It has all human knowledge on it. It's a repository of all human knowledge down to a very detailed level. It has huge amounts of data: all of the maps, all of the records, along with all the encyclopaedias and everything else. It also has a huge uncommitted memory store that its owner can use to store anything they never want to forget, right throughout their life.
Now that is going to be possible within just a few decades. And it will be safe. What will happen is that one child in the class will get one and now all the other children will become what we call 'losers' - they can't possibly keep up. I mean this child now has an advantage that is absolutely impossible to overcome. And so every other child will get one as well. Things like that will undoubtedly become ubiquitous and will be just a further extension of the technology that we have today.
Today parents give children reference books so they can have access to the world of knowledge, or they give them the internet. I mean, who would deliberately and maliciously deny their child the internet? It would just seem like a crime. So I think that he's right - we will become more mechanical, we will add on this kind of hardware. And by the way, the principles for building that little postage stamp are essentially known now - at least the basics are. So to the question of whether we can do this, I think the answer is yes.
Rodney was basically saying that the future of robotics was intertwined with the future of data storage, which is what you're saying there?
The bottom line is that robots are going to become ubiquitous; they are going to be serving people in industry, in the military, in all kinds of different ways. We'll have robots out there cleaning up the world. So much of the environmental damage that's been done could be reversed if we could afford to do it.
Well with robots we can. All of these places that have been destroyed environmentally, including the sea, we can go back and restore them now. I think robots will be essential. Will they be in the home? As long as we have product liability attorneys that's going to be delayed! It's too dangerous to play that game.
Who within the IT industry has inspired you the most - who's a role model for you?
In the IT industry there have been many role models for me, but they're all people who took a basic vision and then, through an enormously long and difficult process, realised that vision. They weren't really doing what I did - or what I'm doing, I should say. For example, years ago I met a person called Cuthbert Hurd, and he was the leader of IBM's first commercial computer development.
There was a time when IBM was averse to computers. This was at the end of the reign of the company's founder, TJ Watson Snr. The board decided that was not a profitable direction, and so they installed his son, TJ Watson Jnr.
Anyway, very shortly thereafter, this guy, Cuthbert Hurd, took on this first commercial project and I got to know him and got to understand what he went through to make that happen; because the whole culture was turned against that. They wanted scientific computers and they only wanted a few of them for government projects and the like. And he successfully fought his way forward and turned it into a huge business success.
There are many others. Grace Hopper, for instance, was with Remington Rand in the early 1950s. Back then the way you programmed a computer was with what we call machine language. You literally specified which register was going to be added to which other register and where the answer was going to be placed. And that was horribly difficult and time-consuming.
And she decided to develop a program that could abstract those kinds of basic operations, so that the programmer only needs to specify 'here are the variables I'm going to use, and here are the manipulations I want to go through with those variables'. And so she invented the compiler. That was in, I think, 1953.
This changed the world. And then she went on to join the US Navy to guide computer application in the military and ended up an admiral. She's one of my heroes. There are all these people who went through very long sagas, where they had to fight their way forwards, step by step, against constant resistance and yet succeeded. And those are my heroes.
So would you say that Grace Hopper's achievements are some of the most exciting and groundbreaking developments to have happened within the IT industry in the last 50 years?
Of course! Oh, and by the way, after she got into the Navy she wasn't satisfied with the original computer language she had created, so she created another one called COBOL. Everybody's heard of COBOL; it became the archetypal language for business. And she developed it - a tremendous person.
Look at Bill Gates. Look what he went through. Those two guys, they left Harvard, they went to Albuquerque and founded a company called Microsoft. Most people don't even realise it was founded there. And almost nobody knows why.
It's because that's where the Altair computer was made, which they were building the micro-software for. They were building a BASIC interpreter for this machine, which again you had to program in a direct instruction language, and they had to fight their way forward - it wasn't easy.
They had some brilliant accomplishments; the day they sold IBM on using their operating system, DOS, was one of the greatest triumphs in the history of human affairs, in my opinion. And it wasn't done by some unnamed committee; it was done by Bill Gates.
He will modestly refer you to other people who helped but the fact is he did it. These are the kind of performances in human history that I have always admired and used as guide posts in my life. It's always better to have outstanding role models than average ones.
What exactly is enterprise decision management?
EDM is an approach that automates, improves and connects decisions to enhance business performance. Fair Isaac's solutions and technologies for EDM turn strategy into action, giving organisations greater control over high-volume operational decisions that are growing more complex.
What is the Chancellor Project about?
The Fair Isaac Chancellor Project is developing automated conversational customer service systems with human-level capabilities for use in a variety of industries. Our goal is to bring to all customers highly enjoyable, effective, and efficient customer service of a calibre that today is only available to royalty and high-end celebrities.
BCS is pursuing professionalism in IT - what are your thoughts on this?
The history of IT teaches us the central importance of broad and deep education and understanding and aggressive innovation. These need to be central attributes of all IT professionals.
Project failure is a big subject in the UK - what have you learned from your involvement in various projects that could benefit our members?
Project success or failure is generally determined by the project leader. Some leaders are capable of reliably defining and then controlling a project to a successful conclusion. The process of identifying and developing new project leaders is, therefore, one of the most important in our field. Highly successful companies often implement a system in which more senior, proven, leaders are explicitly involved in spotting and mentoring (and, when necessary, removing) new leadership talent.
Short and sweet
Open source or proprietary?
Both have their place.
Apple or PC?
PC
Wii or PlayStation?
I do not play games.
Geek or nerd?
Both
BlackBerry or smartphone?
Both
Finally, what would you like to be remembered for - words on headstone?
He explained thinking.