I define intelligence simply as how good something
is at accomplishing complex goals.
Human intelligence today is very different from machine intelligence today
in multiple ways. First of all, machine intelligence in the past
used to be almost always inferior to human intelligence.
Gradually machine intelligence got better than human intelligence in
certain very, very narrow areas, like multiplying numbers fast, like pocket calculators do,
or remembering large amounts of data really fast.
What we’re seeing now is that machine intelligence is spreading out
a little bit from those narrow peaks and getting a bit broader.
We still have nothing that is as broad as human intelligence,
where a human child can learn
to get pretty good at almost any goal.
But you have systems now for example that can learn to
play a whole swath of different kinds of computer games
or learn to drive a car in pretty varied environments.
And where things are obviously going in AI is increased breadth
and the holy grail of AI research is to build a machine
that is as broad as human intelligence and can get good at anything.
And once that’s happened it’s very likely
it’s not only going to be as broad as humans, but also better than humans
at all tasks, as opposed to just some right now.
I have to confess that I’m quite the computer nerd myself.
I wrote some computer games back in high school and college
and more recently I’ve been doing a lot of
deep learning research with my lab at MIT.
So something that really blew me away like “whoa”
was when I first saw this Google deepmind system
that learned to play computer games from scratch.
You had this artificial simulated neural network,
it didn’t know what a computer game was, it didn’t know what a computer was, it didn’t know what a screen was.
you just fed in numbers that represented the different colors on the screen
and told it that it could output different numbers
which corresponded to the different keystrokes, which it also knew nothing about.
And then you just kept feeding it the score, and all this software knew
to do was try random stuff that would maximize that score.
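That trial-and-error loop of trying actions and maximizing a score is the core idea of reinforcement learning. As a minimal sketch (not DeepMind's actual system, which trained a deep neural network on raw screen pixels), here is a hypothetical two-action toy game standing in for the Atari screen, with an epsilon-greedy agent that learns purely from the score:

```python
import random

# Hypothetical stand-in for a game: two "keystrokes" (actions 0 and 1);
# action 1 yields a higher average score. The agent knows nothing about
# this -- after each try, it only observes the resulting score.
def play(action):
    return random.gauss(1.0 if action == 1 else 0.0, 0.1)

# Epsilon-greedy score maximization: mostly pick the action with the
# best running average score, but sometimes explore at random.
def learn(episodes=2000, epsilon=0.1, seed=0):
    random.seed(seed)
    totals = [0.0, 0.0]  # cumulative score per action
    counts = [0, 0]      # times each action was tried
    for _ in range(episodes):
        if random.random() < epsilon or 0 in counts:
            action = random.randrange(2)   # explore
        else:
            action = max((0, 1), key=lambda a: totals[a] / counts[a])
        score = play(action)
        totals[action] += score
        counts[action] += 1
    # Return the action the agent now believes is best.
    return max((0, 1), key=lambda a: totals[a] / counts[a])

print(learn())  # the agent discovers the better "keystroke" on its own
```

The point of the sketch is the same as in the Breakout story: nobody tells the agent which action is good; the preference emerges solely from feedback on the score.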
I remember watching this on a screen once when Demis Hassabis, the CEO of Google DeepMind, showed it:
in the first half, this thing played a really bad strategy and lost all the time,
gradually got better and better, and then it got better than I was
and then after a while it figured out this crazy strategy in Breakout
(you’re supposed to bounce the ball off of a brick wall)
where it would keep aiming for the upper-left corner until it punched a hole through there
and got the ball bouncing around in the back and just racked up crazy many points.
And I was like, “Whoa, that’s intelligent!”
And the guys who programmed this didn’t even know about that strategy
because they hadn’t played that game very much.
This is a simple example of how machine intelligence can surpass the intelligence of its creator,
much in the same way as a human child can
end up becoming more intelligent than its parents
if educated well. And
this was done on just tiny little computers,
the sort of hardware you can have on your desktop.
If you now imagine scaling up to the biggest computer facilities we have in the world
and you give us a couple of more decades of algorithm development,
I think it is very plausible that we can make machines
that can not just learn to play computer games better than us
but can view life as a game
and do everything better than us.