
Machine intelligence makes human morals more important | Zeynep Tufekci
So, I started my first job as a computer programmer in my very first year of college — basically, as a teenager.
Soon after I started working, writing software in a company, a manager who worked at the company came down to where I was, and he whispered to me, “Can he tell if I’m lying?” There was nobody else in the room.
“Can who tell if you’re lying? And why are we whispering?” The manager pointed at the computer in the room. “Can he tell if I’m lying?” Well, that manager was having an affair with the receptionist. And I was still a teenager. So I whisper-shouted back to him, “Yes, the computer can tell if you’re lying.”
Well, I laughed, but actually, the laugh’s on me. Nowadays, there are computational systems that can suss out emotional states and even lying from processing human faces. Advertisers and even governments are very interested.
I had become a computer programmer because I was one of those kids crazy about math and science. But somewhere along the line I’d learned about nuclear weapons, and I’d gotten really concerned with the ethics of science. I was troubled. However, because of family circumstances, I also needed to start working as soon as possible. So I thought to myself, hey, let me pick a technical field where I can get a job easily and where I don’t have to deal with any troublesome questions of ethics. So I picked computers.
Well, ha, ha, ha! All the laughs are on me. Nowadays, computer scientists are building platforms that control what a billion people see every day. They’re developing cars that could decide who to run over. They’re even building machines, weapons, that might kill human beings in war. It’s ethics all the way down.
Machine intelligence is here. We’re now using computation to make all sorts of decisions, but also new kinds of decisions. We’re asking questions to computation that have no single right answers, that are subjective and open-ended and value-laden.
We’re asking questions like, “Who should the company hire?” “Which update from which friend should you be shown?” “Which convict is more likely to reoffend?” “Which news item or movie should be recommended to people?”
Look, yes, we’ve been using computers for a while, but this is different. This is a historical twist, because we cannot anchor computation for such subjective decisions the way we can anchor computation for flying airplanes, building bridges, going to the moon. Are airplanes safer? Did the bridge sway and fall? There, we have agreed-upon, fairly clear benchmarks, and we have laws of nature to guide us. We have no such anchors and benchmarks for decisions in messy human affairs.
To make things more complicated, our software is getting more powerful, but it’s also getting less transparent and more complex. Recently, in the past decade, complex algorithms have made great strides. They can recognize human faces. They can decipher handwriting. They can detect credit card fraud and block spam and they can translate between languages. They can detect tumors in medical imaging. They can beat humans in chess and Go.
Much of this progress comes from a method called “machine learning.” Machine learning is different than traditional programming, where you give the computer detailed, exact, painstaking instructions. It’s more like you take the system and you feed it lots of data, including unstructured data, like the kind we generate in our digital lives. And the system learns by churning through this data. And also, crucially, these systems don’t operate under a single-answer logic. They don’t produce a simple answer; it’s more probabilistic: “This one is probably more like what you’re looking for.”
Now, the upside is: this method is really powerful. The head of Google’s AI systems called it “the unreasonable effectiveness of data.” The downside is, we don’t really understand what the system learned. In fact, that’s its power. This is less like giving instructions to a computer; it’s more like training a puppy-machine-creature we don’t really understand or control.
So this is our problem. It’s a problem when this artificial intelligence system gets things wrong. It’s also a problem when it gets things right, because we don’t even know which is which when it’s a subjective problem. We don’t know what this thing is thinking.
So, consider a hiring algorithm — a system that uses machine learning to decide whom to hire. Such a system would have been trained on previous employees’ data and instructed to find and hire people like the existing high performers in the company. Sounds good.

I once attended a conference that brought together human resources managers and executives, high-level people, using such systems in hiring. They were super excited. They thought that this would make hiring more objective, less biased, and give women and minorities a better shot against biased human managers. And look — human hiring is biased. I know.
I mean, in one of my early jobs as a programmer, my immediate manager would sometimes come down to where I was really early in the morning or really late in the afternoon, and she’d say, “Zeynep, let’s go to lunch!” I’d be puzzled by the weird timing. It’s 4pm. Lunch? I was broke, so free lunch. I always went. I later realized what was happening. My immediate managers had not confessed to their higher-ups that the programmer they hired for a serious job was a teen girl who wore jeans and sneakers to work. I was doing a good job, I just looked wrong and was the wrong age and gender.
So hiring in a gender- and race-blind way certainly sounds good to me. But with these systems, it is more complicated, and here’s why: Currently, computational systems can infer all sorts of things about you from your digital crumbs, even if you have not disclosed those things. They can infer your sexual orientation, your personality traits, your political leanings. They have predictive power with high levels of accuracy. Remember — for things you haven’t even disclosed. This is inference.
I have a friend who developed such computational systems to predict the likelihood of clinical or postpartum depression from social media data. The results are impressive. Her system can predict the likelihood of depression months before the onset of any symptoms — months before. No symptoms, there’s prediction. She hopes it will be used for early intervention. Great! But now put this in the context of hiring.
So at this human resources managers conference, I approached a high-level manager in a very large company, and I said to her, “Look, what if, unbeknownst to you, your system is weeding out people with high future likelihood of depression? They’re not depressed now, just maybe in the future, more likely. What if it’s weeding out women more likely to be pregnant in the next year or two but aren’t pregnant now? What if it’s hiring aggressive people because that’s your workplace culture?”

You can’t tell this by looking at gender breakdowns. Those may be balanced. And since this is machine learning, not traditional coding, there is no variable there labeled “higher risk of depression,” “higher risk of pregnancy,” “aggressive guy scale.” Not only do you not know what your system is selecting on, you don’t even know where to begin to look. It’s a black box. It has predictive power, but you don’t understand it.
“What safeguards,” I asked, “do you have to make sure that your black box isn’t doing something shady?” She looked at me as if I had just stepped on 10 puppy tails. She stared at me and she said, “I don’t want to hear another word about this.” And she turned around and walked away. Mind you — she wasn’t rude. It was clearly: what I don’t know isn’t my problem, go away, death stare.
Look, such a system may even be less biased than human managers in some ways. And it could make monetary sense. But it could also lead to a steady but stealthy shutting out of the job market of people with higher risk of depression. Is this the kind of society we want to build, without even knowing we’ve done this, because we turned decision-making over to machines we don’t totally understand?
Another problem is this: these systems are often trained on data generated by our actions, human imprints. Well, they could just be reflecting our biases, and these systems could be picking up on our biases and amplifying them and showing them back to us, while we’re telling ourselves, “We’re just doing objective, neutral computation.”
Researchers found that on Google, women are less likely than men to be shown job ads for high-paying jobs. And searching for African-American names is more likely to bring up ads suggesting criminal history, even when there is none. Such hidden biases and black-box algorithms, which researchers sometimes uncover and sometimes don’t even know about, can have life-altering consequences.
In Wisconsin, a defendant was sentenced to six years in prison for evading the police. You may not know this, but algorithms are increasingly used in parole and sentencing decisions. He wanted to know: How is this score calculated? It’s a commercial black box. The company refused to have its algorithm be challenged in open court. But ProPublica, an investigative nonprofit, audited that very algorithm with what public data they could find, and found that its outcomes were biased and its predictive power was dismal, barely better than chance, and it was wrongly labeling black defendants as future criminals at twice the rate of white defendants.
So, consider this case: This woman was late picking up her godsister from a school in Broward County, Florida, running down the street with a friend of hers. They spotted an unlocked kid’s bike and a scooter on a porch and foolishly jumped on it. As they were speeding off, a woman came out and said, “Hey! That’s my kid’s bike!” They dropped it, they walked away, but they were arrested. She was wrong, she was foolish, but she was also just 18. She had a couple of juvenile misdemeanors. Meanwhile, that man had been arrested for shoplifting in Home Depot — 85 dollars’ worth of stuff, a similar petty crime. But he had two prior armed robbery convictions. But the algorithm scored her as high risk, and not him. Two years later, ProPublica found that she had not reoffended. It was just hard for her to get a job with her record. He, on the other hand, did reoffend and is now serving an eight-year prison term for a later crime.

Clearly, we need to audit our black boxes and not have them have this kind of unchecked power.
Audits are great and important, but they don’t solve all our problems. Take Facebook’s powerful news feed algorithm — you know, the one that ranks everything and decides what to show you from all the friends and pages you follow. Should you be shown another baby picture? A sullen note from an acquaintance? An important but difficult news item? There’s no right answer. Facebook optimizes for engagement on the site: likes, shares, comments.
In August of 2014, protests broke out in Ferguson, Missouri, after the killing of an African-American teenager by a white police officer, under murky circumstances. The news of the protests was all over my algorithmically unfiltered Twitter feed, but nowhere on my Facebook. Was it my Facebook friends? I disabled Facebook’s algorithm, which is hard because Facebook keeps wanting to make you come under the algorithm’s control, and saw that my friends were talking about it. It’s just that the algorithm wasn’t showing it to me. I researched this and found this was a widespread problem.

The story of Ferguson wasn’t algorithm-friendly. It’s not “likable.” Who’s going to click on “like”? It’s not even easy to comment on. Without likes and comments, the algorithm was likely showing it to even fewer people, so we didn’t get to see this. Instead, that week, Facebook’s algorithm highlighted this, which is the ALS Ice Bucket Challenge. Worthy cause; dump ice water, donate to charity, fine. But it was super algorithm-friendly. The machine made this decision for us. A very important but difficult conversation might have been smothered, had Facebook been the only channel.
Now, finally, these systems can also be wrong in ways that don’t resemble human systems. Do you guys remember Watson, IBM’s machine-intelligence system that wiped the floor with human contestants on Jeopardy? It was a great player. But then, for Final Jeopardy, Watson was asked this question: “Its largest airport is named for a World War II hero, its second-largest for a World War II battle.” Chicago. The two humans got it right. Watson, on the other hand, answered “Toronto” — for a US city category! The impressive system also made an error that a human would never make, a second-grader wouldn’t make.
Our machine intelligence can fail in ways that don’t fit error patterns of humans, in ways we won’t expect and be prepared for. It’d be lousy not to get a job one is qualified for, but it would triple suck if it was because of stack overflow in some subroutine. In May of 2010, a flash crash on Wall Street, fueled by a feedback loop in Wall Street’s “sell” algorithm, wiped a trillion dollars of value in 36 minutes. I don’t even want to think what “error” means in the context of lethal autonomous weapons.
So yes, humans have always had biases. Decision makers and gatekeepers, in courts, in news, in war … they make mistakes; but that’s exactly my point. We cannot escape these difficult questions. We cannot outsource our responsibilities to machines. Artificial intelligence does not give us a “Get out of ethics free” card.
Data scientist Fred Benenson calls this math-washing. We need the opposite. We need to cultivate algorithm suspicion, scrutiny and investigation. We need to make sure we have algorithmic accountability, auditing and meaningful transparency. We need to accept that bringing math and computation to messy, value-laden human affairs does not bring objectivity; rather, the complexity of human affairs invades the algorithms. Yes, we can and we should use computation to help us make better decisions. But we have to own up to our moral responsibility to judgment, and use algorithms within that framework, not as a means to abdicate and outsource our responsibilities to one another as human to human.
Machine intelligence is here. That means we must hold on ever tighter to human values and human ethics.

Thank you.

Translation information

Video overview

As technology advances, artificial intelligence plays an ever greater role in every area of society. But a machine is still not a human: everyone makes mistakes, and machines are no exception. The key is learning to view machine intelligence correctly and keep it in hand. Watch this video to learn more.

Transcription

Collected from the web

Translation

启点—飞雪群山

Review

Approved automatically

Video source

https://www.youtube.com/watch?v=hSSmmlridUM
