Right. So, last time, which was quite a while ago, we were talking about intelligence in general and the way that you can model intelligence as an optimization process – this is the hill climbing algorithm.
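Hill climbing, as mentioned, is easy to sketch in a few lines. Here is a minimal illustrative version; the objective function, step size, and iteration count are arbitrary choices for the example, not anything from the discussion:

```python
import random

random.seed(0)  # fixed seed so the example run is reproducible

def hill_climb(f, x, step=0.1, iters=1000):
    """Greedy local search: repeatedly try a nearby point, keep it if it scores higher."""
    for _ in range(iters):
        # Propose a small random perturbation of the current point.
        candidate = x + random.uniform(-step, step)
        # Accept the move only if it improves the objective.
        if f(candidate) > f(x):
            x = candidate
    return x

# Example: climb a simple one-dimensional "hill" whose peak is at x = 3.
peak = hill_climb(lambda x: -(x - 3) ** 2, x=0.0)
```

The climber only ever moves uphill from where it currently stands, which is exactly why it is a local, domain-blind optimizer: it knows nothing about the landscape beyond the value of `f` at the points it samples.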
– Yeah, that was an example we gave. We were using evolution as an example of an optimizing algorithm, or an optimizing system anyway, and then we were using that as a way of talking about other types of intelligence. We talked about chess AI very briefly, that kind of thing. So then the question is: what's the difference between the type of AI that we have now – the type of AI that might play chess, drive a car, or win Jeopardy! or whatever – versus the ideas that we have of AI in the future, the kind of science-fiction AIs that are what you would call true AI? What is it that really makes the difference? Is it just a matter of power, or is there something else?

One real distinguishing factor is generality, and what that means is how broad a set of domains it can optimize in. So if you take a chess AI, it's very intelligent in the domain of chess, and it is absolutely useless in almost any other domain. If you put a chess AI in a Google self-driving car, not only can it not drive the car, it doesn't even have the concept of what a car is. It doesn't have any of the necessary cognitive architecture to drive a car. And vice versa, right? The Google car can't play chess, and it can't win at Jeopardy!.

Whereas we have a working example of a general intelligence, which is human intelligence, right? Human brains can do a lot of different things, in a lot of different domains, including brand-new domains, domains we didn't particularly evolve for. Take chess, right? We invented chess, we invented driving, and then we learned to become good at them. So a general intelligence is, in a sense, a different class of thing, because it's a single optimization system that's able to optimize in a very broad variety of different domains. And if we could build an artificial general intelligence...
That's kind of the holy grail of AI research: that you have a single program, a single system, that is able to solve any problem that we throw at it, or at least tackle any problem that we throw at it.

-Recently Professor Brailsford ... the idea of the Turing test. What strikes me from what you're saying is that's a very specific domain: pretending to be a human talking.

-Yes, in a sense it's a very specific domain. The Turing test is a necessary but not a sufficient test for general intelligence. Hmm, it depends how you format your test, right, because you could say, well, the AI has to pretend to be human convincingly. Turing's original test was only a brief conversation over text, but you could say: to convince me you're human, tell me what move I should make in this chess game. To convince me you're human, tell me how you would respond in this driving situation, or what's the answer to this Jeopardy! question? So you can, in a Turing test, deliberately test a wide variety of other domains. But in general, conversation is one domain.

Hmm, yeah, you could formulate a true Turing test in that way, but it would get longer and be more, sort of, regressive. Another way of thinking about general intelligence is as a domain-specific intelligence where the domain is the world, or physical reality. If you can reliably optimize the world itself, that is in a sense what general intelligence does.

-Is that like humans having been changing the world to meet their needs?

-Absolutely. So when you say changing the world: obviously we've been changing the world on a very grand scale, but everything that humans do in the real world is in a sense changing the world to be better optimized for them, right? Like, if I'm thirsty and there's a drink over there, then, picking it up and putting it to my lips and drinking,
I'm changing the world to improve my hydration levels, which is something that I value. So I'm using my intelligence to optimize the world around me, in a very abstract sense, but also quite practically.

-But on a bigger scale, as you say, on a grander scale: building a dam, irrigating a field, putting a pipe to your house so that all you need to do is turn a tap.

-Yep.

-It's doing the same thing, but on a grander scale.

-Right, and there's no hard boundary between these two things. It's the same basic mechanism at work: the idea that you want things to be somewhere different from where they are, so you use your intelligence to come up with a series of actions, a plan, that you can implement, that will better satisfy your values. And that's what a true AI, a general AI, would do as well. So you can see the metaphor to optimization is still there, right? You've got this vast state space, which is all possible states of the world. Remember before, we were talking about dimensionality and how it's kind of a problem if you have too many dimensions. (So when we have a two-dimensional space...) This is what kills a basic implementation of general AI right off the bat, because the world is so very, very complicated. It's an exceptionally high-dimensional space.

With the "I'm drinking a drink" example, you've got the same thing again. You've got a state of the world, which is a place in this space, and you've got another state of the world, which is the state in which I've just had a drink. And one of them is higher in my utility function; it's higher in my preference ordering of the world states. So I'm going to try to shift the world from places that are lower in my preference ordering to places that are higher. And that gives you a way to express the making of plans, the implementing of actions, and intelligent behavior in the real world in mathematical terms.
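The framing above – world states, a preference ordering over them, and actions that move between states – can be sketched directly. Everything in this snippet (the state names, the utility numbers, the actions) is an invented toy example, just to show the shape of the idea:

```python
# Toy world: a handful of states, a utility (preference) ordering over them,
# and actions that map one state to another.
utility = {
    "thirsty": 0,        # lowest in the preference ordering
    "holding_drink": 1,
    "hydrated": 2,       # highest: the state the agent tries to reach
}

# Each action is only available in some states.
actions = {
    "thirsty": {"pick_up_drink": "holding_drink"},
    "holding_drink": {"drink": "hydrated"},
    "hydrated": {},
}

def plan(state):
    """Greedily pick actions that shift the world up the preference ordering."""
    steps = []
    while True:
        options = actions[state]
        if not options:
            break  # no actions available in this state
        # Choose the action leading to the highest-utility successor state.
        best = max(options, key=lambda a: utility[options[a]])
        if utility[options[best]] <= utility[state]:
            break  # no action improves things; stop
        steps.append(best)
        state = options[best]
    return steps, state

steps, final = plan("thirsty")
# steps == ["pick_up_drink", "drink"], final == "hydrated"
```

The catch, as the discussion says, is dimensionality: the real world's state space is astronomically larger than three states, so nothing like this exhaustive dictionary of states and actions could ever be written down for it.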
It's not that you can just implement it, though, because of this enormous dimensionality problem.

-All these dimensions: if you try to brute-force a search through that many dimensions, it's going to fall over very quickly.

-Yeah, yeah, immediately.

-Changing the world. Right, and if that sounds a little bit threatening... it is. (laughs)

We'd like to thank audible.com for sponsoring this Computerphile video, and if you like books, go over to audible.com/computerphile, where there's a chance to try out a book for free. Now, I spoke to Rob, who's in this Computerphile video, and asked him what book he would recommend, and he says "Superintelligence" by Nick Bostrom is the one to check out, particularly on this subject of artificial intelligence. We've got more to come on that subject on Computerphile as well, so visit audible.com/computerphile, check out "Superintelligence", and thanks once again to audible.com for sponsoring this Computerphile video.