
AI没你想的聪明 但未来则不一定

Jeff Dean: AI isn't as smart as you think -- but it could be | TED

Hi, I’m Jeff.
嗨 我是杰夫
I lead AI Research and Health at Google.
我在谷歌负责人工智能研究与健康
I joined Google more than 20 years ago,
20多年前我进入谷歌
when we were all wedged into a tiny office space,
当时我们挤在狭小的办公空间
above what’s now a T-Mobile store in downtown Palo Alto.
就在帕洛阿尔托市中心一家T-mobile门店楼上
I’ve seen a lot of computing transformations in that time,
这些年来 我见证了许多次计算领域的变革
and in the last decade, we’ve seen AI be able to do tremendous things.
过去10年我们见证了人工智能巨大的能力
But we’re still doing it all wrong in many ways.
但我们在许多方面仍然做得不对
That’s what I want to talk to you about today.
这就是我今天要跟大家讲的
But first, let’s talk about what AI can do.
首先我们来谈谈人工智能可以做什么
So in the last decade, we’ve seen tremendous progress
过去十年我们见证了人工智能在如何使
in how AI can help computers see, understand language,
计算机比以往任何时候都更好地识别图像
understand speech better than ever before.
理解语言和语音方面取得了空前的巨大进步
Things that we couldn’t do before, now we can do.
从前我们做不到的事 如今可以做到
If you think about computer vision alone,
单就计算机视觉而言
just in the last 10 years,
仅仅在过去十年里
computers have effectively developed the ability to see;
计算机实际上已经具备了识别图像的能力
10 years ago, they couldn’t see, now they can see.
十年前还不能识别图像 现今可以了
You can imagine this has had a transformative effect
你可以想象 这已经极大地
on what we can do with computers.
影响了我们能用计算机做的事
So let’s look at a couple of the great applications
我们来看看基于计算机这些能力的
enabled by these capabilities.
几项出色的应用
We can better predict flooding, keep everyone safe.
我们能更好预测洪灾 保障人们的安全
Using machine learning,
利用机器学习
We can translate over 100 languages so we all can communicate better,
我们能翻译上百种语言 可以更好地交流
and better predict and diagnose disease,
更好地预测和诊断疾病
where everyone gets the treatment that they need.
人们都能够得到所需的治疗
So let’s look at two key components
让我们来看看今天人工智能进步
that underlie the progress in AI systems today.
背后的两个关键组成要素
The first is neural networks,
第一个是神经网络
a breakthrough approach to solving some of these difficult problems
这是解决这些困难问题的突破性方法
that has really shone in the last 15 years.
在过去的15年里大放异彩
But they’re not a new idea.
但不是什么新鲜概念
And the second is computational power.
第二个是计算能力
It actually takes a lot of computational power
事实上 使神经网络真正发挥效用
to make neural networks able to really sing,
需要花费巨大的计算能力
and in the last 15 years, we’ve been able to harness that,
过去15年里 我们终于能够获得并驾驭这样的算力
and that’s partly what’s enabled all this progress.
这在一定程度上促成了所有这些进步
But at the same time, I think we’re doing several things wrong,
但与此同时 我认为我们做错了几件事
and that’s what I want to talk to you about at the end of the talk.
这些我想在演讲最后再告诉大家
First, a bit of a history lesson.
首先 让我们回顾一下从前
So for decades,
几十年来
almost since the very beginning of computing,
几乎从计算机诞生之初开始
people have wanted to be able to build computers
人们希望能够制造出
that could see, understand language, understand speech.
可以识别图像 理解语言和语音的计算机
The earliest approaches to this, generally,
一般最早的方法是
people were trying to hand-code all the algorithms
人们试图手工编写
that you need to accomplish those difficult tasks,
完成这些困难任务所需的全部算法
and it just turned out to not work very well.
结果效果并不理想
But in the last 15 years, a single approach
但过去15年里 有一个方法出乎意料地
unexpectedly advanced all these different problem spaces all at once:
同时推动了所有这些不同问题领域的进展
neural networks.
那就是神经网络
So neural networks are not a new idea.
神经网络不是一个新概念
They’re kind of loosely based
神经网络大致基于
on some of the properties that are in real neural systems.
人类神经网络中的一些行为特性
And many of the ideas behind neural networks
神经网络背后的许多原理
have been around since the 1960s and 70s.
早在20世纪六七十年代就已出现
A neural network is what it sounds like,
神经网络恰如其名
a series of interconnected artificial neurons
由一系列相互连接的人工神经元组成
that loosely emulate the properties of your real neurons.
大致模仿了人脑神经网络的特点
An individual neuron in one of these systems
这类系统中的单个神经元
has a set of inputs,
有一组输入
each with an associated weight,
每个输入都带有一个相应的权重
and the output of a neuron
而神经元的输出
is a function of those inputs multiplied by those weights.
是这些输入乘以对应权重后得到的一个函数
So pretty simple,
这很简单
and lots and lots of these work together to learn complicated things.
许多这样的神经元共同工作以学习复杂事物
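To make this concrete, here is a minimal Python sketch of a single artificial neuron, with made-up numbers: multiply each input by its weight, sum them, and pass the result through a nonlinearity.
为了更直观 下面用Python给出单个人工神经元的极简示意(数值纯属假设):把每个输入乘以对应权重 求和后再经过一个非线性函数
```python
import numpy as np

def neuron_output(inputs, weights, bias=0.0):
    # 输入乘以对应权重后求和,再经过一个非线性函数(这里用 sigmoid)
    weighted_sum = np.dot(inputs, weights) + bias
    return 1.0 / (1.0 + np.exp(-weighted_sum))

# 三个输入,各自带一个权重(数值为假设的示例)
x = np.array([0.5, -1.2, 3.0])
w = np.array([0.8, 0.1, -0.4])
print(neuron_output(x, w))
```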
So how do we actually learn in a neural network?
所以我们是如何在神经网络中学习的呢?
It turns out the learning process consists of repeatedly
事实证明 学习的过程就是反复地
making tiny little adjustments to the weight values,
对权值进行微调
strengthening the influence of some things,
加强某些部分的影响
weakening the influence of others.
削弱其他部分的影响
By driving the overall system towards desired behaviors,
从而推动整个系统朝着我们期望的行为发展
these systems can be trained to do really complicated things,
这些系统通过训练可以完成复杂的任务
like translate from one language to another,
诸如将一种语言译至另一种
You know, detect what kind of objects are in a photo,
或者识别一张照片中出现的物体
all kinds of complicated things.
以及其他各种复杂的事
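As a rough illustration of “repeatedly making tiny adjustments to the weights”, here is a small gradient-descent sketch on a toy linear problem; the data and target weights are invented for the example.
作为“反复对权重做微小调整”的粗略示意 下面是在一个玩具线性问题上的小型梯度下降草图 数据和目标权重都是为示例虚构的
```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))        # 100 个训练样本,每个有 3 个输入
true_w = np.array([2.0, -1.0, 0.5])  # 我们希望系统学到的目标权重(假设值)
y = X @ true_w                       # 期望的输出

w = np.zeros(3)                      # 初始权重
lr = 0.1                             # 每次调整的步长
for _ in range(200):
    pred = X @ w
    grad = X.T @ (pred - y) / len(y) # 误差对各个权重的影响
    w -= lr * grad                   # 微调:加强某些输入的影响,削弱另一些
print(w)                             # 训练后应接近 [2.0, -1.0, 0.5]
```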
I first got interested in neural networks
1990年读本科上神经网络的课程时
when I took a class on them as an undergraduate in 1990.
我第一次对神经网络产生了兴趣
At that time, neural networks showed impressive results on tiny problems,
那时神经网络在小规模问题上展现出令人印象深刻的结果
but they really couldn’t scale to do real-world important tasks.
但它们无法扩展到现实世界中的重要任务
But I was super excited.
我当时超级激动
I felt maybe we just needed more compute power.
那时我觉得我们需要的可能只是更强的计算能力
And the University of Minnesota had a 32-processor machine.
而明尼苏达大学有一台32处理器的机器
I thought, “With more compute power,
我心想 “有了更强的计算能力
boy, we could really make neural networks really sing.”
哇 我们就能让神经网络真正大显身手了”
So I decided to do a senior thesis on parallel training of neural networks,
所以我决定撰写关于神经网络并行训练的毕业论文
the idea of using processors in a computer or in a computer system
即让一台计算机或一个计算机系统中的多个处理器
to all work toward the same task,
共同完成同一项任务
that of training neural networks.
即训练神经网络
32 processors, wow,
32个处理器 天呐
we’ve got to be able to do great things with this.
我们可以用它做伟大的事
But I was wrong.
但我错了
Turns out we needed about a million times
结果显示 我们需要比1990年
as much computational power as we had in 1990
多百万倍的运算能力
before we could actually get neural networks to do impressive things.
才能让神经网络有惊人的表现
But starting around 2005,
大约2005年开始
thanks to the computing progress of Moore’s law,
得益于摩尔定律带来的计算能力进步
we actually started to have that much computing power,
我们的确开始拥有那样的计算能力
and researchers in a few universities around the world started to see success
世界各地一些大学的研究学者开始看到
in using neural networks for a wide variety of different kinds of tasks.
使用神经网络来完成各种不同任务的成果
I and a few others at Google heard about some of these successes,
我和其他谷歌同事听说了其中一些成功案例后
and we decided to start a project to train very large neural networks.
决定开展一个项目 以训练非常大的神经网络
One system that we trained,
我们所训练的其中一个系统
we trained with 10 million randomly selected frames from YouTube videos.
是用从YouTube视频里随机选取的一千万帧画面训练的
The system developed the capability to recognize all kinds of different objects.
这个系统由此具备了识别各种不同物体的能力
And it being YouTube, of course, it developed the ability to recognize cats.
而既然是YouTube 它自然也学会了识别猫
YouTube is full of cats.
YouTube上到处都有猫
But what made that so remarkable
但更引人注目的是
is that the system was never told what a cat was.
这一系统从未被告知猫是什么
So using just patterns in data,
所以仅仅通过数据模式
the system honed in on the concept of a cat all on its own.
系统就能独自理解猫这一概念
All of this occurred at the beginning of a decade-long string of successes,
所有这一切 只是一段长达十年的系列成功的开端
of using neural networks for a huge variety of tasks,
即用神经网络完成各种各样的任务
at Google and elsewhere.
这发生在谷歌 也发生在其他地方
Many of the things you use every day,
许多你日常使用的功能
things like better speech recognition for your phone,
比如手机上更好的语音识别
improved understanding of queries and documents
对查询和文档更深入的理解
for better search quality,
从而提升了搜索质量
better understanding of geographic information to improve maps, and so on.
更好地理解地理信息来改善地图等等
Around that time,
在那时
we also got excited about how we could build hardware
我们也开始兴奋地思考 如何打造硬件
that was better tailored to the kinds of computations neural networks wanted to do.
使之更贴合神经网络想要执行的那类计算
And turns out neural network computations have two special properties.
结果显示 神经网络计算有两个特殊的性质
The first is they’re very tolerant of reduced precision.
第一是它们对于低精确度包容度高
Couple of significant digits, you don’t need six or seven.
只需几个有效数字 不需要六个或七个
And the second is that all the algorithms are generally composed
第二是这些算法基本上都由
of different sequences of matrix and vector operations.
不同序列的矩阵和向量运算组成
So if you can build a computer
所以如果你可以组装一台擅长
that is really good at low-precision matrix and vector operations
低精确度矩阵和向量运算 但做不了
but can’t do much else,
其他事情的计算机
that’s going to be great for neural-network computation,
这对于神经网络运算会很有用
even though you can’t use it for a lot of other things.
尽管你不能用它做很多其他的事情
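The sketch below, with arbitrary random matrices, illustrates both properties: the workload is just a matrix-vector multiply, and dropping from 32-bit to 16-bit precision changes the result only slightly.
下面的草图用任意的随机矩阵示意这两个性质:计算本身只是一次矩阵-向量乘法 而把精度从32位降到16位 结果只会有很小的差别
```python
import numpy as np

A = np.random.default_rng(0).normal(size=(256, 256)).astype(np.float32)
x = np.random.default_rng(1).normal(size=256).astype(np.float32)

full = A @ x                                                             # 32 位精度
low = (A.astype(np.float16) @ x.astype(np.float16)).astype(np.float32)   # 16 位精度

# 两者的相对差异通常很小,对神经网络来说往往已经足够
print(np.linalg.norm(full - low) / np.linalg.norm(full))
```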
And if you build such things,
但若你组装了这样的计算机
people will find amazing uses for them.
人们会为它们找到令人惊叹的用途
This is the first one we built, TPU v1.
这就是我们打造的第一款 TPU v1
“TPU” stands for Tensor Processing Unit.
“TPU”即张量处理器
These have been used for many years behind every Google search, for translation,
多年来 每一次谷歌搜索和翻译的背后都在使用它们
they were used in the DeepMind AlphaGo matches,
还被用于DeepMind的AlphaGo对弈中
so Lee Sedol and Ke Jie maybe didn’t realize,
李世石和柯洁可能都没意识到
but they were competing against racks of TPU cards.
但他们对弈的对手是一机架又一机架的TPU卡
And we’ve built a bunch of subsequent versions of TPUs
我们已构建了一堆后续版本的 TPU
that are even better and more exciting.
这些版本更好更令人激动
But despite all these successes,
尽管取得了这些成功
I think we’re still doing many things wrong,
我们在许多事情上还是做得不对
and I’ll tell you about three key things we’re doing wrong,
我要跟你讲讲我们做错的三件事
and how we’ll fix them.
以及我们将如何修正它们
The first is that most neural networks today
第一是今天大多数神经网络
are trained to do one thing, and one thing only.
仅被训练来做一件事 而且只做这一件事
You train it for a particular task that you might care deeply about,
你训练它来完成一项你可能很关心的任务
but it’s a pretty heavyweight activity.
但这是一项相当繁重的工作
You need to curate a data set,
你需要整理一个数据集
you need to decide what network architecture you’ll use for this problem,
你需要决定使用何种网络架构解决此问题
you need to initialize the weights with random values,
你需要用随机值初始化权重
You need to apply lots of computation to make adjustments to the weights.
你需要用大量运算调整权重
And at the end, if you’re lucky,
最后如果幸运的话
you end up with a model that is really good at that task you care about.
你会得到一个极擅长你所关心任务的模型
But if you do this over and over,
但如果你一直重复
you end up with thousands of separate models,
你就会得到上千个独立模型
each perhaps very capable,
每个可能都有强大的能力
but separate for all the different tasks you care about.
但对于你所关心的不同任务又是分开的
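A hypothetical sketch of that heavyweight, one-model-per-task workflow: every new task repeats the whole pipeline of random initialization and training, and the result is a pile of unrelated models (the task data here is invented).
下面是这种“一个任务一个模型”的繁重流程的假想草图:每个新任务都要重复随机初始化和训练的整套流程 最终得到一堆互不相关的模型(任务数据为虚构)
```python
import numpy as np

def train_from_scratch(X, y, steps=300, lr=0.1):
    w = np.random.default_rng(0).normal(scale=0.01, size=X.shape[1])  # 随机初始化权重
    for _ in range(steps):
        w -= lr * X.T @ (X @ w - y) / len(y)   # 大量计算只为调好这一个任务
    return w                                   # 一个只擅长该任务的模型

# 每关心一个新任务,就重复一遍整套流程
data_rng = np.random.default_rng(1)
tasks = {f"task_{i}": (data_rng.normal(size=(50, 4)), data_rng.normal(size=50))
         for i in range(3)}
models = {name: train_from_scratch(X, y) for name, (X, y) in tasks.items()}
print(len(models), "separate models")
```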
But think about how people learn.
想想人类是怎样学习的
In the last year, many of us have picked up a bunch of new skills.
过去一年我们许多人都学会了一堆新技能
I’ve been honing my gardening skills,
我一直在磨练我的园艺技能
experimenting with vertical hydroponic gardening.
尝试垂直水培园艺
To do that, I didn’t need to relearn everything I already knew about plants.
要做这个 我无需再次学习我已知的园艺知识
I was able to know how to put a plant in a hole, how to pour water,
我知道怎么把植株放在坑里 怎么浇水
that plants need sun,
也知道植物需要阳光
and leverage that in learning this, you know, new skill.
利用已有知识学习新技能
Computers can work the same way, but they don’t today.
电脑也能以同样方式运作 但现今不行
If you train a neural network from scratch,
如果你从零开始训练神经网络
it’s effectively like forgetting your entire education
就相当于每一次尝试新东西时
every time you try to do something new.
就要忘记所学的全部知识
That’s crazy, right?
太疯狂了 不是吗
So instead, I think we can and should be training multitask models
所以我认为我们可以且应该训练可以完成
that can do thousands or millions of different tasks.
成千上万个不同任务的多任务模型
Each part of that model would specialize in different kinds of things.
该模型各部分可以针对解决不同的任务
And then, if we have a model that can do a thousand things,
然后如果我们有一个能解决上千件任务的模型
and the thousand and first thing comes along,
而当第1001件任务出现时
we can leverage the expertise we already have in the related kinds of things
我们可以利用我们在相关领域已有的专业知识
so that we can more quickly be able to do this new task,
这样我们就能更快地完成这项新任务
just like you, if you’re confronted with some new problem,
就像你一样 如果你碰见了一些新问题
you quickly identify the 17 things you already know
你马上想到17件你已知的
that are going to be helpful in solving that problem.
有利于解决该问题的事
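One hedged way to picture this is a shared, already-trained “trunk” of representations plus a tiny per-task “head”: the 1001st task only needs a small amount of new data and compute. The trunk weights here are placeholders, not a real trained model.
可以把这种思路粗略地想象成一个已训练好的共享“主干”表示 加上每个任务各自的小“头”:第1001个任务只需要少量新数据和计算 这里的主干权重只是占位的假设值 并非真实训练出的模型
```python
import numpy as np

rng = np.random.default_rng(0)
W_trunk = rng.normal(size=(16, 64))      # 假设这是已经训练好的共享“主干”

def features(x):
    return np.tanh(x @ W_trunk)          # 所有任务共用的表示

def train_head(X, y, steps=300, lr=0.1):
    H = features(X)
    w = np.zeros(H.shape[1])             # 新任务只需训练这个很小的“头”
    for _ in range(steps):
        w -= lr * H.T @ (H @ w - y) / len(y)
    return w

# 第 1001 个任务到来时,只用少量样本和计算就能接入已有的知识
X_new, y_new = rng.normal(size=(20, 16)), rng.normal(size=20)
head_for_new_task = train_head(X_new, y_new)
print(head_for_new_task.shape)
```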
Second problem is that most of our models today
第二是今天我们大多数模型
deal with only a single modality of data —
只处理单一形式的数据
with images, or text or speech,
比如图像 文本或者语音
but not all of these all at once.
但不能同时处理所有这些形式
But think about how you go about the world.
想想你是如何与这个世界打交道的
You’re continuously using all your senses
你反复地使用自己所有的感官
to learn from, react to,
来学习 来反应
figure out what actions you want to take in the world.
弄清楚自己想在这个世界上采取怎样的行动
Makes a lot more sense to do that,
这样做会更有意义
and we can build models in the same way.
我们可以用同样的方式构建模型
We can build models that take in these different modalities of input data,
我们可以建立接收不同数据形式的模型
text, images, speech,
包括文本 图像和语音
but then fuse them together,
然后把它们融合在一起
so that regardless of whether the model sees the word “leopard,”
所以无论模型是看到“豹”这个词
sees a video of a leopard or hears someone say the word “leopard,”
看到一段豹的视频 还是听到有人说“豹”
the same response is triggered inside the model:
在模型内部都会触发相同的反应
the concept of a leopard
即“豹”的概念
can deal with different kinds of input data,
能处理不同类型的数据输入
even nonhuman inputs, like genetic sequences,
甚至是非人类输入 比如基因序列
3D clouds of points, as well as images, text and video.
三维点云 以及图像 文本和视频
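A minimal, purely illustrative sketch of that idea: hypothetical projections map text, image and speech features into one shared embedding space, so a single “leopard” concept vector responds the same way to any modality.
下面是这一想法的极简示意(纯属说明用途):假想的投影矩阵把文本 图像和语音特征映射到同一个共享嵌入空间 于是同一个“豹”的概念向量对任何模态的输入都以同样的方式响应
```python
import numpy as np

rng = np.random.default_rng(0)
# 假想的各模态投影:把不同维度的输入都映射到同一个 32 维共享空间
proj = {
    "text": rng.normal(size=(300, 32)),    # 文本特征 -> 共享空间
    "image": rng.normal(size=(2048, 32)),  # 图像特征 -> 共享空间
    "speech": rng.normal(size=(128, 32)),  # 语音特征 -> 共享空间
}

def embed(modality, x):
    v = x @ proj[modality]
    return v / np.linalg.norm(v)           # 归一化后的共享嵌入

leopard = rng.normal(size=32)              # “豹”这一概念的向量(假设)
leopard /= np.linalg.norm(leopard)

def leopard_score(modality, x):
    # 不管输入来自哪个模态,都用同一个概念向量去比较
    return float(embed(modality, x) @ leopard)

print(leopard_score("text", rng.normal(size=300)))
print(leopard_score("image", rng.normal(size=2048)))
```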
The third problem is that today’s models are dense.
第三个问题是 如今的模型是“稠密”的
There’s a single model,
只有一个单一的模型
the model is fully activated for every task,
对于每一项任务 整个模型都会被完全激活
for every example that we want to accomplish,
对于我们想处理的每一个样本都是如此
whether that’s a really simple or a really complicated thing.
无论这件事非常简单还是非常复杂
This, too, is unlike how our own brains work.
这同样也不像我们大脑的工作机制
We have different parts of our brains are good at different things,
我们大脑的不同部位擅长处理不同的事情
and we’re continuously calling upon the pieces of them
我们不断地调用其中
that are relevant for the task at hand.
与手头任务相关的部分
For example, nervously watching a garbage truck
例如 你紧张地看着一辆垃圾车
back up towards your car,
朝着你的车倒车
the part of your brain that thinks about Shakespearean sonnets is probably inactive.
你大脑中负责思考莎士比亚十四行诗的那部分大概是不活跃的
AI models can work the same way.
人工智能模型能以同样方式运作
Instead of a dense model,
与其使用稠密的模型
we can have one that is sparsely activated.
我们可以使用一个稀疏激活的模型
So for particular different tasks, we call upon different parts of the model.
对于特定的不同任务 我们会调用模型的不同部分
During training, the model can also learn which parts are good at which things,
在训练过程中 模型还能学到哪些部分擅长哪些事情
so that can continuously identify what parts it wants to call upon
从而能不断识别出需要调用哪些部分
in order to accomplish a new task.
来完成新的任务
The advantage of this is we can have a very high-capacity model,
它的优点是我们可以有一个高容量
but it’s very efficient,
但又十分高效的模型
because we’re only calling upon the parts that we need for any given task.
因为针对任何给定的任务 我们只需调用所需的那部分
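A rough sketch of sparse activation in the spirit of mixture-of-experts routing (all weights invented): a router scores several “expert” sub-networks and only the top few are computed for a given input.
下面是稀疏激活的粗略草图 借用了“专家混合”式路由的思路(所有权重均为虚构):一个路由器给若干“专家”子网络打分 每个输入只计算得分最高的少数几个
```python
import numpy as np

rng = np.random.default_rng(0)
n_experts, d = 8, 16
experts = [rng.normal(size=(d, d)) for _ in range(n_experts)]  # 8 个“专家”子网络
W_router = rng.normal(size=(d, n_experts))                     # 路由器权重

def sparse_forward(x, k=2):
    scores = x @ W_router                        # 路由器为每个专家打分
    top_k = np.argsort(scores)[-k:]              # 只激活得分最高的 k 个专家
    gate = np.exp(scores[top_k] - scores[top_k].max())
    gate /= gate.sum()
    # 模型容量很大(8 个专家),但每个输入只用到 2 个,计算量因此很小
    return sum(g * np.tanh(x @ experts[i]) for g, i in zip(gate, top_k))

print(sparse_forward(rng.normal(size=d)).shape)
```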
So fixing these three things, I think,
所以我觉得解决了这三个问题
will lead to a more powerful AI system:
可以构建更强大的人工智能体系
instead of thousands of separate models,
而不是上千个彼此独立的模型
train a handful of general-purpose models
训练少数几个通用的
that can do thousands or millions of things.
能完成成千上万种任务的模型
Instead of dealing with single modalities,
不要只处理单一的模态
deal with all modalities,
而要处理所有的模态
and be able to fuse them together.
并能将它们融合在一起
And instead of dense models,
不要使用稠密的模型
use sparse, high-capacity models,
而要使用稀疏且高容量的模型
where we call upon the relevant bits as we need them.
在需要时只调用其中相关的部分
We’ve been building a system that enables these kinds of approaches,
我们已经建造了能实现这些办法的系统
and we’ve been calling the system “Pathways.”
这个系统叫做“Pathways”
So the idea is this model will be able to do
所以我们希望这一模型能够
thousands or millions of different tasks,
完成成千上万次不同的任务
and then, we can incrementally add new tasks,
然后我们可以逐渐增加新任务
and it can deal with all modalities at once,
它可以同时处理所有模态
and then incrementally learn new tasks as needed
并在需要时逐步学习新任务
and call upon the relevant bits of the model
调用模型中相关的部分
for different examples or tasks.
来处理不同的样本或任务
And we’re pretty excited about this,
我们对此十分激动
we think this is going to be a step forward in how we build AI systems.
这在如何建造人工智能系统上迈进了一大步
But I also wanted to touch on responsible AI.
但我也想谈谈负责任的人工智能
We clearly need to make sure that
我们显然需要确保
this vision of powerful AI systems benefits everyone.
这一强大的人工智能系统对每个人都有利
These kinds of models raise important new questions
这类模型提出了一些重要的新问题
about how do we build them with fairness,
即在构建它们时 如何为所有用户兼顾
interpretability, privacy and security, for all users in mind.
公平性 可解释性 隐私性和安全性
For example, if we’re going to train these models
比如如果我们要用
on thousands or millions of tasks,
成千上万的任务来训练这些模型
we’ll need to be able to train them on large amounts of data.
我们需要在海量的数据中进行训练
And we need to make sure that data
我们需要确保这些数据的采集
is thoughtfully collected
是经过深思熟虑的
and is representative of different communities and situations all around the world.
能够代表世界各地不同的群体和情形
And data concerns are only one aspect of responsible AI.
数据问题只是负责任的人工智能的一个方面
We have a lot of work to do here.
我们还有许多事情要做
So in 2018, Google published this set of AI principles
2018年 谷歌发布了这样一套人工智能准则
by which we think about developing these kinds of technology.
作为我们思考如何开发这类技术的依据
And these have helped guide us in how we do research in this space,
这些有助于指导我们如何在这个领域进行研究
how we use AI in our products.
如何在产品中使用人工智能
And I think it’s a really helpful and important framing
我认为这是一个十分重要且有用的框架
for how to think about these deep and complex questions
指导我们如何在社会中运用人工智能
about how we should be using AI in society.
这样有深度且复杂的问题
We continue to update these as we learn more.
随着了解的越多 我们不断更新这些准则
Many of these kinds of principles are active areas of research —
这些准则中很多都涉及热门的研究领域
super important area.
非常重要的领域
Moving from single-purpose systems that kind of recognize patterns in data
从能够识别数据模式的单一用途系统
to these kinds of general-purpose intelligent systems
到这种让我们更深入理解这个世界的
that have a deeper understanding of the world
通用智能系统
will really enable us to tackle
将使我们能够
some of the greatest problems humanity faces.
解决一些人类面临的重大问题
For example, we’ll be able to diagnose more disease;
比如我们能诊断出更多的疾病
we’ll be able to engineer better medicines by infusing these models with
通过将这些模型与化学和物理知识融合
knowledge of chemistry and physics;
我们将研制出更好的药物
we’ll be able to advance educational systems
通过加入更多的个性化教学
by providing more individualized tutoring
我们将促进教育系统的发展
to help people learn in new and better ways;
帮助人们以更好的新方式学习
we’ll be able to tackle really complicated issues, like climate change,
我们将有能力解决像气候变化一样复杂的问题
and perhaps engineering of clean energy solutions.
或许还能设计出清洁能源解决方案
So really, all of these kinds of systems
说真的 所有这些系统
are going to be requiring the multidisciplinary expertise of people all over the world.
都将需要世界各地人们的多学科专业知识
So connecting AI with whatever field you are in, in order to make progress.
所以为了取得进步 将AI与你所在的领域联系起来
So I’ve seen a lot of advances in computing,
我看见了计算领域中的许多进步
and how computing, over the past decades,
以及在过去几十年里计算又是如何
has really helped millions of people better understand the world around them.
帮助数百万人更好地理解周遭的世界
And AI today has the potential to help billions of people.
如今的人工智能有帮助数十亿人的潜能
We truly live in exciting times.
我们生活在一个激动人心的时代
Thank you.
谢谢
Thank you so much. I just,
十分感谢
I want to follow up on a couple things.
我想再问几件事
So, the level of the progress
说起AI的进步
This is what I heard.
以下是我所听到的
Most people’s traditional picture of AI is that
大多数人对人工智能的印象
computers recognize a pattern of information,
是计算机可以识别信息的模式
and with a bit of machine learning,
再加上一点机器学习
they can get really good at that, better than humans.
计算机就能非常擅长这件事 甚至超过人类
What you’re saying is those patterns of information
你所说的那些模式
are no longer the atoms that AI is working with,
不再是人工智能所处理的基本单元
that it’s much richer-layered concepts
而是层次更丰富的概念
that can include all manners of types of things
可以包括所有种类的事物
that go to make up a leopard, for example.
比如共同构成“豹”这一概念的各种元素
So what could that lead to?
所以这可以带来什么
Give me an example of when that AI is working,
能给我一个人工智能运用的例子吗
what do you picture happening in the world
在接下来五年或十年里你可以预想一下
in the next five or 10 years that excites you?
这个世界会发生什么令你激动的事吗
I think the grand challenge in AI
我觉得人工智能所面临的的巨大挑战
is how do you generalize from a set of tasks
是如何从一系列你已经会做的任务
you already know how to do to new tasks, as easily and effortlessly as possible.
尽可能轻松自然地泛化到新的任务上
And the current approach of training separate models for everything
目前的方法是为所有任务训练不同的模型
means you need lots of data about that particular problem,
意味着你需要关于特定问题的大量数据
because you’re effectively trying to learn everything
因为实际上你是从一无所知开始去了解
about the world and that problem, from nothing.
关于这个世界 这个问题的一切
Right? But if you can build these systems
但如果你能研发出这种系统
that already are infused with how to do thousands and millions of tasks,
系统已被灌输了成千上万任务的处理方式
then you can effectively teach them to do a new thing
然后你就可以只提供少许例子
with relatively few examples.
高效地让它们学会新的事情
So I think that’s the real hope,
我觉得真正需要期待的是
is that you could then have a system where
你可以有一个这样的系统
you just give it five examples
你只需基于你关心的事
of something you care about,
给它五个例子
and it learns to do that new task.
它就可以学习处理新任务
You can do a form of self-supervised learning
你可以进行一种自监督学习
that is based on remarkably little seeding.
只需要极少的引导样例
Yeah, as opposed to needing 10,000 or 100,000 examples
是的 不需要一万或十万个样例就可以
to figure everything in the world out.
弄明白所有的事情
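One simple way to picture “learn a new task from five examples” is nearest-centroid classification on top of a frozen, pretrained representation; the representation here is a random placeholder, and the five-example classes are synthetic.
“只给五个例子就学会新任务”可以用一种简单的方式来想象:在一个冻结的预训练表示之上做最近类中心分类 这里的表示只是随机占位 每类五个样本也是人工合成的
```python
import numpy as np

rng = np.random.default_rng(0)
W_pretrained = rng.normal(size=(16, 32))   # 假设这是已经学好的通用表示(占位)

def embed(x):
    return np.tanh(x @ W_pretrained)

# 每个新类别只提供 5 个样本
support = {label: rng.normal(loc=3.0 * label, size=(5, 16)) for label in (0, 1)}
centroids = {label: embed(xs).mean(axis=0) for label, xs in support.items()}

def classify(x):
    z = embed(x)
    return min(centroids, key=lambda c: np.linalg.norm(z - centroids[c]))

print(classify(rng.normal(loc=3.0, size=16)))  # 用一个接近类别 1 的新样本试试
```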
Aren’t there kind of terrifying unintended consequences possible from that?
难道不会有可怕的意想不到的后果吗?
I think it depends on how you apply these systems.
我认为这取决于你如何运用这些系统
It’s very clear that AI can be a powerful system for good,
很明显 人工智能可以成为一个强大的向善力量
or if you apply it in ways that are not so great,
如果你把它用在那些不好的地方
it can be a negative consequence.
就会产生负面的后果
So I think that’s why it’s really important to have a set of principles
所以我认为这就是为什么有一套准则
by which you look at potential uses of AI
来看待人工智能的潜在用途非常重要
and really are careful and thoughtful about how you consider applications.
你需要认真地思考如何使用它们
One of the things people worry most about is that
人们最关心的事之一
if AI is so good at learning from the world as it is,
是如果人工智能如此善于向世界学习
it’s going to carry forward into the future aspects of the world
它就会把这个世界现状中的某些方面延续到未来
as it is that actually aren’t right, right now.
而这些方面在当下其实并不正确
And there’s obviously been a huge controversy about that recently at Google.
近期在谷歌 这显然引起了很大的争议
Some of those principles of AI development,
人工智能开发中的部分原则
you’ve been challenged that you’re not actually holding to them.
有人质疑你们并没有真正遵守这些准则
Not really interested to hear about comments on a specific case,
我不是很想听对某一个具体案例的评论
but … are you really committed?
而想问你真的坚持这些准则了吗?
How do we know that you are committed to these principles?
我们如何得知你们是否坚持了这些准则
Is that just PR, or is that real,
这到底只是公关 还是真的
at the heart of your day-to-day?
根植于你们日常工作的核心
No, that is absolutely real.
不 那绝对是真的
Like, we have literally hundreds of people
例如 我们有上百名员工
working on many of these related research issues,
致力于许多相关的问题研究
because many of those things are research topics in their own right.
因为其中很多事情本身就是研究课题
How do you take data from the real world,
你是如何从现实世界获得数据资料的
that is the world as it is,
也就是这个世界本来的样子
not the world as we like it to be,
而不是我们希望它成为的样子
and how do you then use that
然后你如何运用这些数据
to train a machine-learning model
来训练有自主学习能力的机器模型
and adapt the data a bit
并对数据稍作调整
or augment the data with additional data
或用额外的数据来扩充数据
so that it can better reflect the values we want the system to have,
这样它才能更好地反映我们希望这个体系拥有的价值观
not the values that it sees in the world?
而不是它在当今世界上观察到的价值观
But you work for Google,
但是你为谷歌工作
Google is funding the research.
谷歌资助了这项研究
How do we know that the main values that
我们如何知道AI所建立的主要价值观
this AI will build are for the world,
是为了让这个世界变得更好
and not, for example, to maximize the profitability of an ad model?
而不是 比如最大化广告模式的盈利能力
When you know everything there is to know about human attention,
当你了解了关于人类注意力的一切
you’re going to know so much about the little wriggly, weird, dark parts of us.
你就会非常了解我们内心那些蠢蠢欲动 怪异又阴暗的部分
In your group, are there rules about how you
在你们团队里 有没有这样的规则
hold off, church-state wall between
像政教分离那样设一道隔离墙
over a sort of commercial push,
来挡住某种商业上的压力
“You must do it for this purpose,”
比如“你必须为了这个目的去做”
so that you can inspire your engineers and so forth
这样就可以激励你们的工程师等等
to do this for the world, for all of us.
为了这个世界 为了我们所有人
Yeah, I mean our research group does collaborate
我们的研究团队确实
with a number of groups across Google,
和谷歌很多小组有合作
including the Ads group, the Search group, the Maps group,
包括广告组 搜索引擎组 地图组
so we do have some collaboration,
所以我们确实有一些合作
but we also have a lot of basic research
我们还有很多基础研究
that we publish openly, you know
而且是公开发表的
We’ve published more than 1,000 papers last year
去年我们已经发表了1000多篇论文
in all kinds of different topics, including the ones you discussed
涵盖各种不同的主题 包括你刚才提到的那些
about fairness, interpretability of the machine-learning models,
关于公平 机器学习模型的可解释性
all these kinds of things that are super important,
所有这些是非常重要的事
and we need to advance the state of the art in this
我们需要提高这方面的技术水平
in order to continue to make progress
才能继续取得进步
to make sure these models are developed safely and responsibly.
确保安全负责地开发这些模型
Cool, it feels like we’re at a time when people are concerned
好 感觉我们现在所处的这个时代
about the power of the big tech companies,
人们对大型科技公司的力量感到担忧
and it’s almost, if there was ever a moment to really show the world
几乎可以说如果有一个时刻可以真正向世界展示
that this is being done to make a better future,
这样做是在创造更美好的未来
you know, that is actually key to Google’s future,
而这也确实是谷歌未来的关键
as well as all of ours.
也是我们所有人未来的关键
-Indeed. -It’s very good to hear you come and say that, Jeff.
—的确 —很高兴听到你亲口这么说 杰夫
Thank you so much for coming here to TED.
非常感谢你来到TED
Thank you.
谢谢


译制信息
视频概述

人工智能是永恒的热门话题。视频中,杰夫为我们简述了人工智能的过去、发展与未来:既讲述它取得的进步,也谈到我们走过的弯路、犯过的错误,并展望了如何开发更可靠、更负责任的人工智能,为人类带来一个崭新的未来。快来看看吧!

听录译者

收集自网络

翻译译者

Polaristear

审核员

YUWI

视频来源

https://www.youtube.com/watch?v=J-FzHIQ7SOs
