Sylvain Duranton: How humans and AI can work together to create better businesses

Transcript
Let me share a paradox.
For the last 10 years,
many companies have been trying to become less bureaucratic,
to have fewer central rules and procedures,
more autonomy for their local teams to be more agile.
And now they are pushing artificial intelligence, AI,
unaware that cool technology
might make them more bureaucratic than ever.
Why?
Because AI operates just like bureaucracies.
The essence of bureaucracy
is to favor rules and procedures over human judgment.
And AI decides solely based on rules.
Many rules inferred from past data
but only rules.
And if human judgment is not kept in the loop,
AI will bring a terrifying form of new bureaucracy --
I call it "algocracy" --
where AI will take more and more critical decisions by the rules
outside of any human control.
Is there a real risk?
Yes.
I'm leading a team of 800 AI specialists.
We have deployed over 100 customized AI solutions
for large companies around the world.
And I see too many corporate executives behaving like bureaucrats from the past.
They want to take costly, old-fashioned humans out of the loop
and rely only upon AI to take decisions.
I call this the "human-zero mindset."
And why is it so tempting?
Because the other route, "Human plus AI," is long,
costly and difficult.
Business teams, tech teams, data-science teams
have to iterate for months
to craft exactly how humans and AI can best work together.
Long, costly and difficult.
But the reward is huge.
A recent survey from BCG and MIT
shows that 18 percent of companies in the world
are pioneering AI,
making money with it.
Those companies focus 80 percent of their AI initiatives
on effectiveness and growth,
taking better decisions --
not replacing humans with AI to save costs.
Why is it important to keep humans in the loop?
Simply because, left alone, AI can do very dumb things.
Sometimes with no consequences, like in this tweet.
"Dear Amazon,
I bought a toilet seat.
Necessity, not desire.
I do not collect them,
I'm not a toilet-seat addict.
No matter how temptingly you email me,
I am not going to think, 'Oh, go on, then,
one more toilet seat, I'll treat myself.' "
(Laughter)
Sometimes, with more consequence, like in this other tweet.
"Had the same situation
with my mother's burial urn."
(Laughter)
"For months after her death,
I got messages from Amazon, saying, 'If you liked that ...' "
(Laughter)
Sometimes with worse consequences.
Take an AI engine rejecting a student application for university.
Why?
Because it has "learned," on past data,
characteristics of students that will pass and fail.
Some are obvious, like GPAs.
But if, in the past, all students from a given postal code have failed,
it is very likely that AI will make this a rule
and will reject every student with this postal code,
not giving anyone the opportunity to prove the rule wrong.
And no one can check all the rules,
because advanced AI is constantly learning.
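To make that concrete: here is a minimal sketch, on a hypothetical toy dataset, of how an off-the-shelf classifier infers exactly such a postal-code rule from past admissions data. Every name and number is illustrative, not from the talk.

```python
# Minimal sketch: a classifier inferring the postal-code rule
# from (hypothetical) past admissions data.
from sklearn.tree import DecisionTreeClassifier

# Each row: [GPA, postal_code]; in this toy history,
# every applicant from postal code 94 happened to fail.
X = [
    [3.9, 10], [3.7, 10], [2.1, 10],  # mixed outcomes elsewhere
    [3.8, 94], [3.6, 94], [3.9, 94],  # all failures in code 94
]
y = [1, 1, 0, 0, 0, 0]  # 1 = passed, 0 = failed

model = DecisionTreeClassifier(random_state=0).fit(X, y)

# A bright applicant with a high GPA but the "wrong" postal code:
print(model.predict([[4.0, 94]]))  # -> [0], rejected by the inferred rule
```

No human ever wrote that rule; it was inferred, and it silently generalizes to every future applicant from that code.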
And if humans are kept out of the room,
there comes the algocratic nightmare.
Who is accountable for rejecting the student?
No one, AI did.
Is it fair? Yes.
The same set of objective rules has been applied to everyone.
Could we reconsider for this bright kid with the wrong postal code?
No, algos don't change their mind.
We have a choice here.
Carry on with algocracy
or decide to go to "Human plus AI."
And to do this,
we need to stop thinking tech first,
and we need to start applying the secret formula.
To deploy "Human plus AI,"
10 percent of the effort is to code algos;
20 percent to build tech around the algos,
collecting data, building UI, integrating into legacy systems;
But 70 percent, the bulk of the effort,
is about weaving together AI with people and processes
to maximize real outcome.
AI fails when cutting short on the 70 percent.
The price tag for that can be small,
wasting many, many millions of dollars on useless technology.
Anyone cares?
Or real tragedies:
346 casualties in the recent crashes of two B-737 aircraft
when pilots could not interact properly
with a computerized command system.
For a successful 70 percent,
the first step is to make sure that algos are coded by data scientists
and domain experts together.
Take health care for example.
One of our teams worked on a new drug with a slight problem.
When taking their first dose,
some patients, very few, have heart attacks.
So, all patients, when taking their first dose,
have to spend one day in hospital,
for monitoring, just in case.
Our objective was to identify patients who were at zero risk of heart attacks,
who could skip the day in hospital.
We used AI to analyze data from clinical trials,
to correlate ECG signal, blood composition, biomarkers,
with the risk of heart attack.
In one month,
our model could flag 62 percent of patients at zero risk.
They could skip the day in hospital.
Would you be comfortable staying at home for your first dose
if the algo said so?
(Laughter)
Doctors were not.
What if we had false negatives,
meaning people who are told by AI they can stay at home, and die?
(Laughter)
There started our 70 percent.
We worked with a team of doctors
to check the medical logic of each variable in our model.
For instance, we were using the concentration of a liver enzyme
as a predictor,
for which the medical logic was not obvious.
The statistical signal was quite strong.
But what if it was a bias in our sample?
That predictor was taken out of the model.
We also took out predictors for which experts told us
they cannot be rigorously measured by doctors in real life.
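As an illustration of that vetting step: a minimal sketch, with hypothetical feature names and synthetic stand-in data, of dropping a statistically strong but medically unjustified predictor before retraining.

```python
# Minimal sketch of the expert-review loop: predictors that doctors
# cannot justify (or measure reliably in real life) are vetoed
# before the model is retrained. Features and data are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
features = ["ecg_qt_interval", "troponin_level", "liver_enzyme_alt"]

X = rng.normal(size=(500, len(features)))        # synthetic stand-in data
y = (X[:, 0] + 0.5 * X[:, 1] > 1.0).astype(int)  # synthetic outcome

vetoed = {"liver_enzyme_alt"}                    # flagged by the doctors
kept = [f for f in features if f not in vetoed]
cols = [features.index(f) for f in kept]

model = LogisticRegression().fit(X[:, cols], y)
print("model now uses:", kept)
```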
After four months,
we had a model and a medical protocol.
They both got approved
by medical authorities in the US last spring,
resulting in far less stress for half of the patients
and better quality of life.
And an expected upside on sales of over 100 million for that drug.
Seventy percent weaving AI with team and processes
also means building powerful interfaces
for humans and AI to solve the most difficult problems together.
Once, we got challenged by a fashion retailer.
"We have the best buyers in the world.
Could you build an AI engine that would beat them at forecasting sales?
At telling how many high-end, light-green, men XL shirts
we need to buy for next year?
At predicting better what will sell or not
than our designers."
Our team trained a model in a few weeks, on past sales data,
and the competition was organized with human buyers.
Result?
AI wins, reducing forecasting errors by 25 percent.
Human-zero champions could have tried to implement this initial model
and create a fight with all human buyers.
Have fun.
But we knew that human buyers had insights on fashion trends
that could not be found in past data.
There started our 70 percent.
We went for a second test,
where human buyers were reviewing quantities
suggested by AI
and could correct them if needed.
Result?
Humans using AI ...
lose.
Seventy-five percent of the corrections made by a human
were reducing accuracy.
Was it time to get rid of human buyers?
No.
It was time to recreate a model
where humans would not try to guess when AI is wrong,
but where AI would take real input from human buyers.
We fully rebuilt the model
and went away from our initial interface, which was, more or less,
"Hey, human! This is what I forecast,
correct whatever you want,"
and moved to a much richer one, more like,
"Hey, humans!
I don't know the trends for next year.
Could you share with me your top creative bets?"
"Hey, humans!
Could you help me quantify those few big items?
I cannot find any good comparables in the past for them."
Result?
"Human plus AI" wins,
reducing forecast errors by 50 percent.
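Sketched in code, that shift looks roughly like this: the model defers to buyers where it has no good comparables and treats their creative bets as input rather than after-the-fact corrections. The function name, comparables threshold and blend weights are all hypothetical, not the retailer's actual system.

```python
# Minimal sketch of the reworked "Human plus AI" interface:
# the model asks buyers for input where it lacks comparables,
# instead of letting them overwrite its output afterwards.

def forecast_units(item, history, buyer_bets):
    comparables = [h for h in history if h["category"] == item["category"]]
    if len(comparables) < 5:
        # "I cannot find any good comparables in the past" --
        # defer entirely to the buyer's creative bet.
        return buyer_bets[item["id"]]
    base = sum(h["units_sold"] for h in comparables) / len(comparables)
    # Blend the buyer's bet in as a prior, not a correction.
    return 0.7 * base + 0.3 * buyer_bets.get(item["id"], base)

history = [{"category": "shirts", "units_sold": n} for n in (80, 95, 110, 100, 90)]
print(forecast_units({"id": "xl-green", "category": "shirts"},
                     history, {"xl-green": 140}))  # -> 108.5
```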
It took one year to finalize the tool.
Long, costly and difficult.
But profits and benefits
were in excess of 100 million in savings per year for that retailer.
Seventy percent on very sensitive topics
also means humans have to decide what is right or wrong
and define rules for what AI can do or not,
like setting caps on prices to prevent pricing engines
from charging outrageously high prices to uneducated customers
who would accept them.
Only humans can define those boundaries --
there is no way AI can find them in past data.
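Such a guardrail is trivial to express in code; the point is that the cap itself comes from humans, not from the data. A minimal sketch, with hypothetical products and values:

```python
# Minimal sketch of a human-defined pricing guardrail.
# The caps are set by people; no model can learn them from past data.
PRICE_CAPS = {"standard_fare": 500.0, "premium_fare": 1200.0}

def guarded_price(product: str, model_price: float) -> float:
    # The engine proposes, the human-defined boundary disposes.
    return min(model_price, PRICE_CAPS[product])

print(guarded_price("standard_fare", 2300.0))  # -> 500.0
```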
Some situations are in the gray zone.
We worked with a health insurer.
He developed an AI engine to identify, among his clients,
people who are just about to go to hospital
to sell them premium services.
And the problem is,
some prospects were called by the commercial team
while they did not know yet
they would have to go to hospital very soon.
You are the CEO of this company.
Do you stop that program?
Not an easy question.
And to tackle this question, some companies are building teams,
defining ethical rules and standards to help business and tech teams set limits
between personalization and manipulation,
customization of offers and discrimination,
targeting and intrusion.
I am convinced that in every company,
applying AI where it really matters has massive payback.
Business leaders need to be bold
and select a few topics,
and for each of them, mobilize 10, 20, 30 people from their best teams --
tech, AI, data science, ethics --
and go through the full 10-, 20-, 70-percent cycle
of "Human plus AI,"
if they want to land AI effectively in their teams and processes.
There is no other way.
Citizens in developed economies already fear algocracy.
Seven thousand were interviewed in a recent survey.
More than 75 percent expressed real concerns
on the impact of AI on the workforce, on privacy,
on the risk of a dehumanized society.
Pushing algocracy creates a real risk of severe backlash against AI
within companies or in society at large.
"Human plus AI" is our only option
to bring the benefits of AI to the real world.
And in the end,
winning organizations will invest in human knowledge,
not just AI and data.
Recruiting, training, rewarding human experts.
Data is said to be the new oil,
but believe me, human knowledge will make the difference,
because it is the only derrick available
to pump the oil hidden in the data.
Thank you.
(Applause)