Let me share a paradox.
For the last 10 years,
many companies have been trying to become less bureaucratic,
to have fewer central rules and procedures,
more autonomy for their local teams to be more agile.
And now they are pushing artificial intelligence, AI,
unaware that cool technology
might make them more bureaucratic than ever.
Because AI operates just like bureaucracies.
The essence of bureaucracy
is to favor rules and procedures over human judgment.
And AI decides solely based on rules.
Many rules, inferred from past data.
And if human judgment is not kept in the loop,
AI will bring a terrifying form of new bureaucracy --
I call it "algocracy" --
where AI will take more and more critical decisions by the rules
outside of any human control.
Is there a real risk?
I'm leading a team of 800 AI specialists.
We have deployed over 100 customized AI solutions
for large companies around the world.
And I see too many corporate executives behaving like bureaucrats from the past.
They want to take costly, old-fashioned humans out of the loop
and rely only upon AI to take decisions.
I call this the "human-zero mindset."
And why is it so tempting?
Because the other route, "Human plus AI," is long,
costly and difficult.
Business teams, tech teams, data-science teams
have to iterate for months
to craft exactly how humans and AI can best work together.
Long, costly and difficult.
But the reward is huge.
A recent survey from BCG and MIT
shows that 18 percent of companies in the world
are pioneering AI,
making money with it.
Those companies focus 80 percent of their AI initiatives
on effectiveness and growth,
taking better decisions --
not replacing humans with AI to save costs.
Why is it important to keep humans in the loop?
Simply because, left alone, AI can do very dumb things.
Sometimes with no consequences, like in this tweet.
"I bought a toilet seat.
Necessity, not desire.
I do not collect them,
I'm not a toilet-seat addict.
No matter how temptingly you email me,
I am not going to think, 'Oh, go on, then,
one more toilet seat, I'll treat myself.' "
Sometimes, with more consequence, like in this other tweet.
"Had the same situation
with my mother's burial urn."
"For months after her death,
I got messages from Amazon, saying, 'If you liked that ...' "
Sometimes with worse consequences.
Take an AI engine rejecting a student application for university.
Because it has "learned," on past data,
characteristics of students that will pass and fail.
Some are obvious, like GPAs.
But if, in the past, all students from a given postal code have failed,
it is very likely that AI will make this a rule
and will reject every student with this postal code,
not giving anyone the opportunity to prove the rule wrong.
And no one can check all the rules,
because advanced AI is constantly learning.
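To make the postal-code failure concrete, here is a minimal sketch of a deliberately naive rule learner; the data and the admissions setting are invented for illustration, not any real system:

```python
# A naive "learner" that turns historical outcomes into hard rules,
# as the talk describes. All applicant data below is invented.
from collections import defaultdict

def learn_rules(history):
    """Infer a pass/fail rule per postal code from past admissions."""
    outcomes = defaultdict(list)
    for applicant in history:
        outcomes[applicant["postal_code"]].append(applicant["passed"])
    # If every past student from a postal code failed,
    # the rule becomes: reject that postal code outright.
    return {code: any(results) for code, results in outcomes.items()}

def decide(rules, applicant):
    # The rule overrides individual merit (e.g. GPA):
    # applicants from an all-fail postal code are rejected.
    return rules.get(applicant["postal_code"], True)

history = [
    {"postal_code": "75001", "passed": True},
    {"postal_code": "75001", "passed": False},
    {"postal_code": "93200", "passed": False},  # every past 93200 student failed
    {"postal_code": "93200", "passed": False},
]
rules = learn_rules(history)
print(decide(rules, {"postal_code": "93200", "gpa": 4.0}))  # False: rejected by rule
```

A bright student with a 4.0 GPA never gets the chance to prove the rule wrong, because the postal code alone decides.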
And if humans are kept out of the room,
there comes the algocratic nightmare.
Who is accountable for rejecting the student?
No one, AI did.
Is it fair? Yes.
The same set of objective rules has been applied to everyone.
Could we reconsider for this bright kid with the wrong postal code?
No, algos don't change their mind.
We have a choice here.
Carry on with algocracy
or decide to go to "Human plus AI."
And to do this, we need to stop thinking tech first,
and we need to start applying the secret formula.
To deploy "Human plus AI,"
10 percent of the effort is to code algos;
20 percent to build tech around the algos,
collecting data, building UI, integrating into legacy systems;
But 70 percent, the bulk of the effort,
is about weaving together AI with people and processes
to maximize real outcome.
AI fails when cutting short on the 70 percent.
The price tag for that can be small,
wasting many, many millions of dollars on useless technology.
Or real tragedies:
346 casualties in the recent crashes of two B-737 aircraft
when pilots could not interact properly
with a computerized command system.
For a successful 70 percent,
the first step is to make sure that algos are coded by data scientists
and domain experts together.
Take health care for example.
One of our teams worked on a new drug with a slight problem.
When taking their first dose,
some patients, very few, have heart attacks.
So, all patients, when taking their first dose,
have to spend one day in hospital,
for monitoring, just in case.
Our objective was to identify patients who were at zero risk of heart attacks,
who could skip the day in hospital.
We used AI to analyze data from clinical trials,
to correlate ECG signal, blood composition, biomarkers,
with the risk of heart attack.
Our model could flag 62 percent of patients at zero risk.
They could skip the day in hospital.
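The flagging step can be pictured as a simple screen over patient markers. The thresholds, variable names, and records below are invented for illustration, not the team's actual model:

```python
# Hypothetical sketch: flag "zero-risk" patients whose markers all sit
# well inside safe ranges. Thresholds and patient records are invented.

def flag_zero_risk(patients, max_enzyme=40.0, max_qt_ms=450):
    """Return IDs of patients whose markers are all in safe ranges,
    i.e. patients who could skip the monitoring day in hospital."""
    safe = []
    for p in patients:
        if p["liver_enzyme"] <= max_enzyme and p["qt_interval_ms"] <= max_qt_ms:
            safe.append(p["id"])
    return safe

patients = [
    {"id": "A", "liver_enzyme": 25.0, "qt_interval_ms": 410},
    {"id": "B", "liver_enzyme": 70.0, "qt_interval_ms": 430},  # enzyme out of range
    {"id": "C", "liver_enzyme": 30.0, "qt_interval_ms": 480},  # QT out of range
]
print(flag_zero_risk(patients))  # ['A']
```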
Would you be comfortable staying at home for your first dose
if the algo said so?
Doctors were not.
What if we had false negatives,
meaning people who are told by AI they can stay at home, and die?
There started our 70 percent.
We worked with a team of doctors
to check the medical logic of each variable in our model.
For instance, we were using the concentration of a liver enzyme
for which the medical logic was not obvious.
The statistical signal was quite strong.
But what if it was a bias in our sample?
That predictor was taken out of the model.
We also took out predictors for which experts told us
they cannot be rigorously measured by doctors in real life.
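That expert review amounts to pruning predictors the doctors veto. A minimal sketch, with invented feature names:

```python
# Hypothetical sketch: drop model inputs that domain experts reject,
# either as possible sample bias or as unmeasurable in real practice.

def prune_features(features, expert_vetoes):
    """Keep only the predictors the medical team approved."""
    return [f for f in features if f not in expert_vetoes]

candidate_features = ["ecg_qt", "biomarker_x", "liver_enzyme", "self_reported_pain"]
vetoes = {
    "liver_enzyme",        # strong signal, but possibly a bias in the sample
    "self_reported_pain",  # cannot be rigorously measured in real life
}
print(prune_features(candidate_features, vetoes))  # ['ecg_qt', 'biomarker_x']
```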
In the end, we had a model and a medical protocol.
They both got approved
by medical authorities in the US last spring,
resulting in far less stress for half of the patients
and better quality of life.
And an expected upside on sales over 100 million for that drug.
Seventy percent weaving AI with team and processes
also means building powerful interfaces
for humans and AI to solve the most difficult problems together.
Once, we got challenged by a fashion retailer.
"We have the best buyers in the world.
Could you build an AI engine that would beat them at forecasting sales?
At telling how many high-end, light-green, men's XL shirts
we need to buy for next year?
At predicting better what will sell or not
than our designers."
Our team trained a model in a few weeks, on past sales data,
and the competition was organized with human buyers.
AI wins, reducing forecasting errors by 25 percent.
Human-zero champions could have tried to implement this initial model
and create a fight with all human buyers.
But we knew that human buyers had insights on fashion trends
that could not be found in past data.
There started our 70 percent.
We went for a second test,
where human buyers were reviewing quantities
suggested by AI
and could correct them if needed.
Humans using AI ...
Seventy-five percent of the corrections made by a human
were reducing accuracy.
Was it time to get rid of human buyers?
It was time to recreate a model
where humans would not try to guess when AI is wrong,
but where AI would take real input from human buyers.
We fully rebuilt the model
and went away from our initial interface, which was, more or less,
"Hey, human! This is what I forecast,
correct whatever you want,"
and moved to a much richer one, more like,
"I don't know the trends for next year.
Could you share with me your top creative bets?"
"Could you help me quantify those few big items?
I cannot find any good comparables in the past for them."
"Human plus AI" wins,
reducing forecast errors by 50 percent.
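The shift from "correct my forecast" to "give me your input" can be sketched as follows; the item names, quantities, and the simple blending rule are invented assumptions, not the retailer's actual tool:

```python
# Hypothetical sketch of the richer interface: instead of letting humans
# overwrite every forecast, the model asks for explicit human bets on the
# few items that have no good comparables in past data.

def forecast(item, baseline_from_history, human_bets):
    """Use the historical baseline, unless buyers placed an explicit bet."""
    bet = human_bets.get(item)
    if bet is None:
        return baseline_from_history  # ordinary item: trust past data
    return bet                        # no comparable in the past: trust the buyer

human_bets = {"light-green XL shirt": 1200}  # a buyer's call on a new trend
print(forecast("light-green XL shirt", 300, human_bets))  # 1200
print(forecast("classic white shirt", 500, human_bets))   # 500
```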
It took one year to finalize the tool.
Long, costly and difficult.
But profits and benefits
were in excess of 100 million in savings per year for that retailer.
Seventy percent on very sensitive topics
also means humans have to decide what is right or wrong
and define rules for what AI can do or not,
like setting caps on prices to prevent pricing engines
from charging outrageously high prices to uneducated customers
who would accept them.
Only humans can define those boundaries --
there is no way AI can find them in past data.
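A human-defined price cap is the simplest such boundary. A minimal sketch, with invented bounds:

```python
# Hypothetical sketch of a human-defined guardrail on a pricing engine;
# the floor and cap values are invented.

def capped_price(ai_suggested, floor, cap):
    """Clamp an AI-suggested price into human-approved bounds."""
    return max(floor, min(ai_suggested, cap))

print(capped_price(ai_suggested=9.99, floor=5.0, cap=15.0))   # 9.99
print(capped_price(ai_suggested=299.0, floor=5.0, cap=15.0))  # 15.0, cap applied
```

The boundary itself comes from humans; no amount of past data would teach the engine where it lies.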
Some situations are in the gray zone.
We worked with a health insurer.
They developed an AI engine to identify, among their clients,
people who are just about to go to hospital
to sell them premium services.
Some prospects were called by the commercial team
while they did not know yet
they would have to go to hospital very soon.
You are the CEO of this company.
Do you stop that program?
Not an easy question.
And to tackle this question, some companies are building teams,
defining ethical rules and standards to help business and tech teams set limits
between personalization and manipulation,
customization of offers and discrimination,
targeting and intrusion.
I am convinced that in every company,
applying AI where it really matters has massive payback.
Business leaders need to be bold
and select a few topics,
and for each of them, mobilize 10, 20, 30 people from their best teams --
tech, AI, data science, ethics --
and go through the full 10-, 20-, 70-percent cycle
of "Human plus AI,"
if they want to land AI effectively in their teams and processes.
There is no other way.
Citizens in developed economies already fear algocracy.
Seven thousand were interviewed in a recent survey.
More than 75 percent expressed real concerns
on the impact of AI on the workforce, on privacy,
on the risk of a dehumanized society.
Pushing algocracy creates a real risk of severe backlash against AI
within companies or in society at large.
"Human plus AI" is our only option
to bring the benefits of AI to the real world.
Winning organizations will invest in human knowledge,
not just AI and data.
Recruiting, training, rewarding human experts.
Data is said to be the new oil,
but believe me, human knowledge will make the difference,
because it is the only derrick available
to pump the oil hidden in the data.