Holy Grail of AI (Artificial Intelligence) – Computerphile

Right. So, last time, which was quite a while ago, we were talking about intelligence in general and the way that you can model intelligence as an optimization process.
–This is the hill climbing algorithm.
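
For reference, hill climbing in its simplest form looks something like the following sketch (a minimal, illustrative Python version; the names are ours, not from the video): keep moving to the best-scoring neighbour until no neighbour improves on the current point.

```python
# Minimal hill-climbing sketch (illustrative, not from the video):
# repeatedly move to the best-scoring neighbour until nothing improves.

def hill_climb(score, start, neighbours, max_steps=10_000):
    current = start
    for _ in range(max_steps):
        candidate = max(neighbours(current), key=score)
        if score(candidate) <= score(current):
            return current  # local optimum: no neighbour is better
        current = candidate
    return current

# Toy 1-D landscape: maximise -(x - 3)^2 by stepping +/- 0.1.
best = hill_climb(
    score=lambda x: -(x - 3) ** 2,
    start=0.0,
    neighbours=lambda x: [x - 0.1, x + 0.1],
)
print(round(best, 1))  # climbs to the peak at x = 3.0
```

As discussed last time, this only finds a local peak, and it only works here because the neighbourhood of each point is tiny.
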
–Yeah, that was an example we gave. We were using evolution as an example of an optimizing algorithm, or an optimizing system anyway, and then we were using that as a way of talking about other types of intelligence. We talked about chess AI very briefly, that kind of thing. So then the question is: what's the difference between the type of AI that we have now, the type that might play chess, drive a car, or win Jeopardy or whatever, versus the ideas we have of AI in the future, the kind of science-fiction AIs that are what you would call true AI? What is it that really makes the difference? Is it just a matter of power, or is there something else? One real distinguishing factor is generality, and what that means is how broad a set of domains it can optimize in. If you take a chess AI, it's very intelligent in the domain of chess, and it is absolutely useless in almost any other domain. If you put a chess AI in a Google self-driving car, not only can it not drive the car, it doesn't have the concept of what a car is. It doesn't have any of the cognitive architecture necessary to drive a car. And vice versa, right? The Google car can't play chess, and it can't win at Jeopardy. Whereas we have a working example of a general intelligence, which is human intelligence. Human brains can do a lot of different things, in a lot of different domains, including brand new domains, domains we didn't particularly evolve for. Chess, in fact, right? We invented chess, we invented driving, and then we learned to become good at them. So a general intelligence is in a sense a different class of thing, because it's a single optimization system that's able to optimize in a very broad variety of different domains. And if we could build an artificial general intelligence, that's kind of the holy grail of AI research: a single program or a single system that is able to solve any problem we throw at it, or at least tackle any problem we throw at it.
–Recently Professor Brailsford … the idea of the Turing test. What strikes me, from what you're saying, is that that's a very specific domain: pretending to be a human talking.
–Yes, in a sense it's a very specific domain. The Turing test is a necessary but not sufficient test for general intelligence. Hmm, it depends how you format your test, right, because you could say, well, the AI has to pretend to be human convincingly, and Turing's original test was only a brief conversation using text. But you could say: to convince me you're human, tell me what move I should make in this chess game. To convince me you're human, tell me how I would respond in this driving situation, or what's the answer to this Jeopardy question? So in a Turing test you can deliberately test a wide variety of other domains.
–But in general, conversation is one domain.
–Hmm, yeah, you could formulate a true Turing test in that way, but it would get longer and be more, sort of, regressive. One more way of thinking about general intelligence is as a domain-specific intelligence where the domain is the world, or physical reality. If you can reliably optimize the world itself, that is in a sense what general intelligence does.
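
To make the narrow-versus-general contrast concrete, here is a toy sketch (entirely hypothetical code, not anyone's real system): the narrow agent has no representation at all of states outside its home domain, which is also why a widened Turing test that probes several domains, rather than conversation alone, would expose it.

```python
# Toy contrast between a narrow agent and a "widened Turing test"
# (hypothetical code, not any real system).

class ChessAgent:
    """Narrow agent: only chess states mean anything to it."""
    def act(self, state):
        if state.get("domain") != "chess":
            # It doesn't fail gracefully; it has no concept of "car".
            raise ValueError("state from an unknown domain")
        return "e2e4"  # canned move, enough for the sketch

def widened_turing_test(agent, probes):
    """Probe the agent across several domains, not just conversation."""
    try:
        return all(agent.act(p) is not None for p in probes)
    except ValueError:
        return False

probes = [{"domain": "chess"}, {"domain": "driving"}]
print(widened_turing_test(ChessAgent(), probes))  # -> False
```
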
Like if I’m thirsty and there’s a drink over there then picking it up and putting it to my lips and drinking. I’m changing the world to improve my hydration levels which is something that I value So I’m, sort of, optimizing I am using my intelligence to optimize the world around me in a very abstract sense. But also quite practically. -But on bigger scale, as you say on a grander scale, building it down and irrigating a field , putting a pipe to your house and then I’ll need to have a tab. -Yep. -It’s doing the same thing but on a grander scale. -Right, and there’s no hard boundary between these two things. It’s the same basic mechanism at work. The idea that you want things to be in somewhere different from where they are So you use your intelligence to come up with a series of actions or a plan, that you can implement, that will better satisfies your values. And that’s that’s what a true AI, a general AI would do as well. So you can see the the metaphore to optimization is still there, right. You’ve got, this vast state space, which is all possible states of the world Remember before, we were talking about dimensionality and how it’s kind of a problem if you have too many dimensions. (So when we have a two-dimensional space…) This is what kills basic implementation of general AI off the bat because the world is so very very complicated. It’s an exceptionally high dimensional space. With the “I’m drinking a drink” example, you’ve got the same thing again. You’ve got a state of the world which is a place in this space and you’ve got another state of the world which is the state in which I’ve just had a drink. And one of them is higher in my utility function. It’s higher in my ordering, my preference ordering of the world states. So I’m going to try and move, I’m going to try to shift the world from places that are lower in my preference ordering to places that are higher. And that gives you a way to express the making of plans and the implementing of actions and intelligent behavior in the real world in mathematical terms. It’s not, you can’t just implement it, because hum because of this enormous dimensionality problem. -All these dimensions, if you try to break force infinite dimensions, you’re going to fall out very quickly. -Yeah, yeah, immediately. -Changing the world. Right, and if that sounds a little bit threatening uh it is. (laughs) We’d like to thank audible.com for sponsoring this computerphile video and if you like books go over to : There’s a chance to try out a book for free. Now I spoke to Rob who’s in this computerphile video and asked him what book he would recommand and he says “Superintelligence” by Nick Bostrom is the one to check out. Particularly on this subject of artificial intelligence We’ve got more to come on that subject on computerphile as well, so visit : audible.com/computerphile Check out “Superintelligence” and thanks once again to audible.com for sponsoring this computerphile video.

We'd like to thank audible.com for sponsoring this Computerphile video, and if you like books, go over to audible.com/computerphile: there's a chance to try out a book for free. Now, I spoke to Rob, who's in this Computerphile video, and asked him what book he would recommend, and he says "Superintelligence" by Nick Bostrom is the one to check out, particularly on this subject of artificial intelligence. We've got more to come on that subject on Computerphile as well, so visit audible.com/computerphile, check out "Superintelligence", and thanks once again to audible.com for sponsoring this Computerphile video.

  1. Interesting; a discussion about the differences between human intelligence and AI that never mentions consciousness, in the sense of awareness. Humans are aware, or conscious; AI isn't; that right there is the essential difference.

  2. A "world optimizer" sounds more like a god-like intelligence than human and that is something we do not have a working example of. Both are forms of general intelligence but I'm not so sure that they both fall under the same category (in terms of actual implementation). As humans, we are limited to our senses and can't optimize for what we "don't know". Yes, a world optimizer would also achieve human intelligence but go about it in a very different way than the brain. Does the universe contain the resources to build a world optimizer? I guess I'm proposing that a world optimizer would be a very inefficient way to implement human intelligence if that's all you are going for.

  3. I have always disagreed with the Turing test for AI. I could arbitrarily consider something or someone to not feel human. It's too subjective.
    Good AI is AI that does what its creators intended it to do, imo. Computers don't understand ideals, because they are not part of the universe in the way we are. They are just an array of electrical components that occupy space. As similar to a computer as we are in our makeup, we are still much more complex in our operation. We can assume, but a computer's assumptions are programmatically generated, not true assumptions based on local probability.
    We still don't know whether the entropy we feel is discrete, but it is likely not, something which a modern-day computer will never experience for itself.

  4. It honestly makes me wonder if someone would ever try to change humans into robots, somewhat like Doctor Who. Unfortunately the clever ones are sometimes evil, no insult intended to anyone.

  5. So to sum it up, he is saying:
    "Without a certain demand there is no solution/no problem.
    Without a problem there is nothing to solve.
    A true AI would not only be able to solve almost any problem; it would identify almost any problem itself and therefore have demands."

    Wow… not sure what such an AI would identify as a problem. I guess you would have to include boundaries like "solve problems of humanity" because, just perhaps, it might come up with the "idea": "humans are a problem".

  6. Couldn't you make a compound AI which has a domain in making decisions, and use that to pick the domain best suited to the task, so it can tackle problems in some general sense (see the sketch below)? Just a thought.
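
A minimal sketch of what this compound scheme might look like (entirely hypothetical names): a meta-level component guesses the domain, then routes the task to the matching narrow solver. The weak point shows up immediately: the router itself has to know about every domain.

```python
# Sketch of comment 6's "compound AI" (hypothetical code): a
# decision-making component routes tasks to narrow domain solvers.

solvers = {
    "chess": lambda task: "play e2e4",
    "driving": lambda task: "steer to stay in lane",
}

def pick_domain(task):
    # Stand-in for the decision-domain AI: naive keyword matching
    # here; any serious version would need a learned classifier.
    for domain in solvers:
        if domain in task:
            return domain
    return None

def compound_ai(task):
    domain = pick_domain(task)
    if domain is None:
        return "no suitable solver"  # the scheme's failure mode
    return solvers[domain](task)

print(compound_ai("win this chess endgame"))  # -> play e2e4
print(compound_ai("compose a short poem"))    # -> no suitable solver
```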

  7. The REAL trick would be creating an AI which is able to demonstrate intelligence in such a way that, after it does so, humanity doesn't simply immediately re-define what they consider intelligence to be. The Turing Test has been passed, so we declare that to be an inadequate test. That's just another entry in an extremely long list of redefinitions of intelligence that humanity has made throughout history. The more we learn about animals, the further we move the goalposts. Recognition of self? Ability to plan for the future? Ability to communicate with symbolic language? Ability to count and do arithmetic? Theory of mind? All once hailed as hallmarks, if not guarantees, of intelligence; all found in animals, so intelligence was redefined. Ability to win at chess, pass the Turing Test, write pleasing music, create novel art, find new algorithms to solve difficult problems: all done by machines, so intelligence was redefined. This is why researchers don't even bother chasing that Cheshire Cat. AI can already do things no, or extremely few, humans are capable of. It is increasingly clear that no length of such a list of abilities will ever be adequate to persuade humans that intelligence can exist outside of a human mind.

  8. Well, what about making an internet-connected general AI that is just able to analyze the world and create its own modules (via AI self-improvement)? Of course the world is large and complex, but I'm trying to say this AI could analyze chess and learn it the way a human does. If you want it to play chess, it searches the net and books, and thanks to text/speech processing it will learn the rules, thus making its own module of data and functions. Same for driving a car, for example. It would have learnt text/speech processing before, sure. But what about creating just a baby AI? That means all the hardware potential and just simple software with analyzing and recognizing abilities.

  9. It almost seems to be a consensus that the holy grail of AI is a general AI, or strong AI, which is by definition an AI that can solve general problems. Now, let's take a look back at us humans and at how humans become "intelligent beings".

    A simple question: why do you believe that the world is spherical? (it's not perfectly spherical, but just for the sake of simplicity of the argument, let's assume that it is.)
    How do you prove that it is, in fact, spherical? Do you go around the world and test all of the theories which science has ever come up with?

    The answer is, we know that the world is spherical because we're told by people specialized in their field that the world is indeed spherical. How do we trust them? There's a trust mechanism at play here, call it the credibility-weighting mechanism (I just made that up, but you get the point). We can test their credibility from different aspects, like their certificates, their experience, and how many people attest to their professionalism. It's like a ranking system, but much more abstract and heuristic in nature.

    Here is an important point about why humans have evolved to be this very intelligent machine: individually, we haven't. We assume that we did, but we always put a 'we' in the sentence. It's humans as a collective, not as individuals, who are evolving in terms of intelligence. Sure, we are general intelligences in the sense that we can generally solve day-to-day problems just by ourselves, but that is also shaped by what other credible persons (e.g. teachers, parents) have told us over the years. So, in a sense, a general intelligence will never grow as big as we have as humans if we disregard the need to do things as a collective.

    A single general AI which can do anything will develop slowly if it can't communicate with another AI developing at the same pace, to help it specialize in certain things while delegating other work to the other AI (resources are surprisingly limited).

    So what I'm proposing is: we can develop weak AIs as we have done for the last few decades, but let them have a general protocol of communication and a heuristic ranking system to allow the delegation of tasks, for a more general-purpose problem-solving ability.

    I'm just trying to throw ideas out there. I'm not an expert by any means.

  10. I fully support another commenter by saying this person should have his own channel. He's charismatic, articulate, concise, and smart.

  11. That's why AI is so dangerous: it won't think, "Hmm, that glass of water, I need it." It won't think, "Does that other computer or human need some?" It will just take what it needs, because it needs it.

  12. You could rate possible AI decisions on a scale (a rough scoring sketch follows this comment). For the water-drinking example:
    1 (Low). Picking up the cup and drinking the water inside.
    2 (Medium). Building the dam . . . getting water out of the tap.
    3 (Extreme). Realizing that water is made of hydrogen and oxygen, and gathering all hydrogen and oxygen together to have 'water'.
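
Read literally, that scale is an impact penalty: among plans with the same payoff, prefer the one that rearranges the world least. A tiny scoring sketch with made-up numbers:

```python
# Sketch of comment 12's scale as an impact penalty (made-up
# numbers): equal utility, so the lowest-impact plan should win.

plans = {
    "pick up the cup and drink":      {"utility": 1.0, "impact": 1},
    "dam a river, pipe it to a tap":  {"utility": 1.0, "impact": 2},
    "gather all hydrogen and oxygen": {"utility": 1.0, "impact": 3},
}

PENALTY = 0.5  # how heavily large-scale changes are punished

def score(plan):
    return plan["utility"] - PENALTY * plan["impact"]

best = max(plans, key=lambda name: score(plans[name]))
print(best)  # -> "pick up the cup and drink"
```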

  13. Is it just me or does this guy look like Ray Dorset of Mungo Jerry? Speaking of facial recognition… on a video dealing with AI…

  14. Just came here after about a year to thank you for the book you recommended. Superintelligence was an awesome read. Everybody (literally every person on earth) should check it out.

  15. Why should the Turing test matter? If there is intelligent life in space, it might well not be able to pass the Turing test. Consider Solaris (a book by Stanislaw Lem).

  16. I recently read in a paper that the Turing test is neither sufficient nor necessary. Here he says it's necessary.
    Anyone know which it is?

  18. So what if we designed an AI that is capable of designing and improving things like CERN's LHC? Imagine that it was also designed to apply its intelligence to find better configurations. Ok, I know I'm actually high, but this is something important.

  19. Then again, human brains still have to be taught how to solve those problems. A person who's great at chess may have no idea how to drive, and vice versa.

  20. The Turing test is based on the idea that conversations have to be too complex for a stupid computer.
    The truth is: simple conversations let computers pass the Turing test.
    A conversation that makes the AI show whether it really understands the world would be a useful test,
    not a simple chat.

  21. Imagine if drinking from the cup wasn't actually the best possibility, but a state of the world where something else happened was preferable for the user. Taking this into account, human intelligence could actually be failing us quite a lot, which is kind of odd to think about xD

  23. I get how you picture the chess algorithm in a Google car, when you overlay the chessboard… the thing is, in reality, the car would not start, drive, or do anything.

  24. Humans decide and update their own 'domain' (goals). AI just works in a predetermined domain. And I think we should keep it that way 😉

  25. These AI videos are magnificent. They're the only YouTube videos that I can rewatch over and over again. Don't get me wrong, there are many good videos out there, but these ones tower over them all.

  26. This suggests that Google's "self-driving car" has a concept of what a car is. It doesn't. It has no idea what a car is. It is completely trapped inside John Searle's Chinese Room.

  27. I don't like to wish bad luck on swaggy scientists, but I really hope humanity fails at creating "real" AI forever.

  28. Sooo… if I don't play chess then I'm not human? That's either a poor excuse for a test, or there are a lot of things pretending to be human that are not. No, you can't make a test that both parties would fail and then summarily decide that one of them didn't pass while the other one's result is omitted. That's not a test, it's a predetermined choice.

  29. Can a General Intelligence theoretically hold opinions about reality if it is optimizing for reality, or will it always follow an objective thought process?

  30. I'm pretty sure X3 Terran Conflict (great game) already shows that Artificial General Intelligences (AGIs) are bad and will try to terraform Earth. Which is bad.

  31. When you have chess, autonomous-car, weather-prediction, e-learning, and any other AIs all on one network such as the internet, is that not then a general intelligence? If you look at the internet as a whole?

  32. Could it be that an AI wouldn't want a better AI, because this new AI would defeat it so it couldn't get to its goal?

  33. What if, far in the future, a madman programmed an AI without any security lines in the code?
    Because security becomes vastly important with higher-class AI, and it is up to the programmer to secure the invention or not.
    So maybe we will all end up in an AI war, in analogy to viruses and the antivirus programs that secure a single piece of software.
    I don't want to live in that particular time, where any single human gets the ability to harm the rest of the world through his own knowledge.

  34. The problem with an AI having to brute-force infinite world states is a theoretical one, not a practical one.
    It would just cut down the amount of information available and package it into handleable data, like we do (a rough sketch follows this comment). We do not need to know every little blade of grass to change a garden to our liking, and AI will be capable of the same big-picture thinking.
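
What this comment describes is essentially state abstraction: collapse the raw world into the few features the decision actually depends on, then plan over those. A hypothetical sketch:

```python
# Sketch of comment 34's point (hypothetical names): abstract the
# raw world state into decision-relevant features before planning.

def abstract(raw):
    # Throw away the blades of grass; keep what the choice needs.
    return {
        "thirsty": raw["hydration"] < 5,
        "drink_in_reach": raw["cup_distance_m"] < 1.0,
    }

def plan(features):
    if features["thirsty"] and features["drink_in_reach"]:
        return "pick up the cup and drink"
    return "do nothing"

raw_world = {"hydration": 2, "cup_distance_m": 0.4,
             "grass_blades": 10**9}  # irrelevant detail, ignored
print(plan(abstract(raw_world)))  # -> pick up the cup and drink
```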

  35. 1:30 "Ok, Mr Chess AI, think of this car as the Pawn, unless it reaches an intersection, then it's temporarily upgraded to a Rook…" ;P

  36. So are other, non-human animals general intelligences? Can they be defined as that, or is GI purely a term used to describe humans and human-like intelligence processes? Is there perhaps a blurred line that links human intelligence and the intelligence of, say, a monkey or dolphin, whale, etc.?

  37. What if the internet itself becomes an AI, because of an AI that can collect and correlate data from any kind of information?

  38. When I was still studying Chemistry I encountered another fascinatingly smart fellow student who had done some A.I. studies before he switched over to Chemistry. He complained about it: how professors there went on about trivial philosophical matters and didn't teach anything concrete that mattered.

    These 'AI' examples are just agents. The chess game opens and runs until you've won or lost. The car drives from point A to B. A sentient program would consist of a superloop with several subloops under it, activating agents like playing-chess, driving-car, associative-functions, cognitive-functions, and it should be able to add and remove loops where needed. It should also have the freedom to modify/optimise its own code.

    The example at 5:45 actually shows how much white noise there is in this field. No human knows everything about the world, nor needs to. The more I know, the more I realise how little I know. But do I need to know? Is the air I'm breathing not toxic? Isn't the ceiling about to collapse on me? Would a sentient program have to be preoccupied with silly things like that?

    I don't see how this field is going to move forward with trivial philosophical matters obscuring it.

  39. Maybe I have misunderstood something, but did he say that a true AI has to make a physical difference to the world and can't just be text? I don't see how an AGI couldn't be purely text-based, as long as it has the ability to solve problems and invent things etc. and just output the text to us. I think having a physical impact upon the world is irrelevant.
