Will AI Enhance or Hack Humanity? – Fei-Fei Li & Yuval Noah Harari in Conversation with Nicholas Thompson

00:00

My name is Rob Reich,

00:02

I’m delighted to welcome you here to Stanford University

00:05

for an evening of conversation

00:06

with Yuval Harari, Fei-Fei Li, and Nick Thompson.

00:11

I’m a professor of political science here

00:13

and the Faculty Director of

00:14

the Stanford Center for Ethics and Society,

00:17

which is a co-sponsor of tonight’s event,

00:19

along with the Stanford Institute

00:21

for Human Centered Artificial Intelligence

00:23

and the Stanford Humanities Center.

00:26

Our topic tonight is a big one.

00:29

We’re going to be thinking together

00:30

about the promises and perils of artificial intelligence.

00:34

The technology quickly reshaping our economic,

00:37

social, and political worlds, for better or for worse.

00:41

The questions raised by the emergence of AI

00:43

are by now familiar, at least to many people

00:46

here in Silicon Valley but, I think it’s fair

00:48

to say that their importance is only growing.

00:51

What will the future of work look like

00:53

when millions of jobs can be automated?

00:55

Are we doomed or perhaps blessed to live in a world

00:58

where algorithms make decisions instead of humans?

01:02

And these are smaller questions in the big scheme of things.

01:07

What, you might ask, are the large ones?

01:09

Well, here are three.

01:11

What will become of the human species

01:13

if machine intelligence approaches

01:15

or exceeds that of an ordinary human being?

01:19

As a technology that currently relies

01:22

on massive centralized pools of data,

01:25

does AI favor authoritarian centralized governments

01:29

over more decentralized democratic governance?

01:33

And are we at the start now of an AI arms race?

01:37

And what will happen if powerful systems of AI,

01:40

especially when deployed for purposes

01:42

like facial recognition, are in the hands

01:44

of authoritarian rulers?

01:47

These challenges only scratch the surface when it comes

01:49

to fully wrestling with the implications of AI,

01:52

as the technology continues to improve

01:54

and its use cases continue to multiply.

01:58

I want to mention the format of the evening event.

02:02

First, given the vast areas of expertise

02:04

that Yuval and Fei-Fei have,

02:07

when you ask questions via Slido,

02:10

those questions should pertain

02:11

or be limited to the topics under discussion tonight.

02:14

So, this web interface that we’re using,

02:16

Slido allows people to upvote and downvote questions.

02:20

So, you can see them now if you have

02:21

an internet communication device.

02:24

If you don’t have one, you can take one of these postcards,

02:28

which hopefully you got outside

02:29

and on the back you can fill in a question you might have

02:31

about the evening event and collect it at the end,

02:34

and the Stanford Humanities Center

02:35

will try to foster some type of conversation

02:38

on the basis of those questions.

02:41

Couple housekeeping things,

02:42

if you didn’t purchase one already,

02:44

Yuval’s books are available for sale

02:46

outside in the lobby after the event.

02:49

A reminder to please turn your cell phone ringers off.

02:53

And we will have 90 minutes

02:55

for our moderated conversation here

02:57

and will end sharp after 90 minutes.

03:01

Now, I’m going to leave the stage in just a minute

03:03

and allow a really amazing undergraduate student

03:07

here at Stanford to introduce our guests.

03:10

Her name is Anna-Sofia Lesiv,

03:12

let me just tell you a bit about her.

03:14

She’s a junior here at Stanford majoring in Economics

03:17

with a minor in Computer Science

03:19

and outside the classroom, Anna-Sofia is a journalist

03:22

whose work has been featured in The Globe and Mail,

03:24

Al Jazeera, The Mercury News, The Seattle Times,

03:28

and this campus’s paper of record, The Stanford Daily.

03:32

She’s currently the Executive Editor of The Daily

03:35

and her Daily magazine article

03:37

from earlier in the year called CS Plus Ethics,

03:42

examined the history of computer science

03:44

and ethics education at Stanford

03:46

and it won the student prize for best journalism of 2018.

03:51

She continues to publish probing examinations

03:53

of the ethical challenges faced by technologists here

03:56

and elsewhere so, ladies and gentlemen

03:58

I invite you to remember this name

04:00

for you’ll be reading about her

04:02

or reading her articles, or likely both,

04:06

please welcome Stanford junior, Anna-Sofia Lesiv.

04:09

[audience clapping]

04:17

Thank you very much for the introduction, Rob.

04:19

Well it’s my great honor now,

04:21

to introduce our three guests tonight,

04:23

Yuval Noah Harari, Fei-Fei Li, and Nicholas Thompson.

04:27

Professor Yuval Noah Harari is a historian,

04:30

futurist, philosopher, and professor at Hebrew University.

04:34

The world also knows him for authoring some of

04:37

the most ambitious and influential books of our decade.

04:40

Professor Harari’s internationally best-selling books,

04:42

which have sold millions of copies worldwide,

04:45

have covered a dizzying array of subject matter

04:47

from narrativizing the entire history

04:49

of the human race in Sapiens,

04:51

to predicting the future awaiting humanity,

04:53

and even coining a new faith called Dataism, in Homo Deus.

04:57

Professor Harari has become a beloved figure

04:59

in Silicon Valley, whose readings are assigned

05:01

in Stanford’s classrooms and whose name

05:03

is whispered through the hallways

05:05

of the comparative literature

05:07

and computer science departments, alike.

05:09

His most recent book is 21 Lessons for the 21st Century,

05:13

which focuses on the technological,

05:15

social, political, and ecological challenges

05:18

of the present moment.

05:20

In this work, Harari cautions

05:21

that as technological breakthroughs

05:23

continue to accelerate, we will have less

05:25

and less time to reflect upon the meaning

05:27

and consequences of the changes they bring.

05:30

And this urgency, is what charges

05:31

Professor Fei-Fei Li’s work everyday,

05:33

in her role as the Co-Director of Stanford’s

05:35

Human-Centered AI Institute.

05:38

This institute is one of the first

05:40

to insist that AI is not merely the domain of technologists

05:43

but a fundamentally interdisciplinary

05:47

and ultimately human issue.

05:49

Her fascination with the fundamental questions

05:51

of human intelligence is what piqued her interest

05:53

in neuroscience, as she eventually became

05:56

one of the world’s greatest experts

05:58

in the fields of computer vision, machine learning,

06:00

and cognitive and computational neuroscience.

06:03

She’s published over a hundred scientific articles

06:05

in leading journals and has had research supported

06:08

by the National Science Foundation, Microsoft,

06:11

and the Sloan Foundation.

06:13

From 2013 to 2018, Professor Fei-Fei Li served as

06:16

the Director of Stanford’s AI lab

06:18

and between January, 2017 and September, 2018,

06:22

Professor Fei-Fei Li served as Vice President at Google

06:24

and Chief Scientist of AI and Machine Learning.

06:28

Nicholas Thompson is the Editor-In-Chief of Wired magazine,

06:32

a position he’s held since January, 2017.

06:35

Under Mr. Thompson’s leadership,

06:36

the topic of artificial intelligence

06:39

has come to hold a special place at the magazine.

06:42

Not only has Wired assigned more feature stories

06:45

on AI than on any other subject,

06:47

but it is the only specific topic

06:49

with a full-time reporter assigned to it.

06:51

It’s no wonder then, that Professors Harari

06:53

and Li are no strangers to its pages.

06:56

Mr. Thompson has led discussions

06:58

with the world’s leaders in technology and AI,

07:01

including Mark Zuckerberg on Facebook and Privacy,

07:04

French President, Emmanuel Macron on France’s AI strategy,

07:07

and Ray Kurzweil on the ethics and limits of AI.

07:11

Mr. Thompson is a Stanford University graduate

07:13

who earned his BA, double majoring

07:15

in earth systems and political science

07:17

and impressively even completed a third degree in economics.

07:21

Of course, I would be remiss if I did not mention

07:24

that Mr. Thompson cut his journalistic teeth

07:26

in the opinions section of the Stanford Daily so,

07:29

Nick, that makes both of us.

07:32

Like all our guests today, I’m at once fascinated

07:35

and worried by the challenges

07:36

that artificial intelligence poses for our society.

07:39

One of my goals at Stanford has been

07:41

to write about and document the challenge

07:43

of educating a generation of students whose lives

07:46

and workplaces, will eventually be transformed by AI.

07:50

Most recently, I published an article

07:52

called Complacent Valley, with the Stanford Daily.

07:55

In it I critiqued our propensity

07:57

to become overly comfortable with the technological

08:00

and financial achievements that Silicon Valley

08:02

has already reached, lest we become complacent

08:05

and lose our ambition and momentum

08:07

to tackle the greater challenges the world has in store.

08:10

Answering the fundamental questions

08:12

of what we should spend our time on,

08:14

how we should live our lives,

08:16

has become much more difficult,

08:17

particularly on the doorstep of the AI revolution.

08:21

I believe that the kind of crisis of agency

08:23

that Author JD Vance wrote of in Hillbilly Elegy,

08:26

for example, is not confined to Appalachia

08:29

or the de-industrialized Midwest

08:31

but is emerging even at elite institutions like Stanford.

08:34

So conversations like ours this evening,

08:36

hosting speakers that aim to re-center

08:38

the individual at the heart of AI,

08:40

will show us how to take responsibility

08:42

in a moment when most decisions

08:44

can seemingly be made for us, by algorithms.

08:47

There are no narratives to guide us through a future

08:50

with AI, no ancient myths or stories

08:52

that we may rely on to tell us what to do.

08:55

At a time when Humanity is facing

08:57

its greatest challenge yet,

08:58

somehow we could not be more at a loss for ideas or direction.

09:02

It’s this momentous crossroads in human history

09:05

that pulls me towards journalism and writing in the future.

09:08

And it’s why I’m so eager to hear

09:10

our three guests discuss exactly such a future, tonight.

09:13

So, please join me in giving them

09:18

a very warm welcome this evening.

09:20

[audience applause]

09:29

Wow, thank you so much Anna-Sofia, thank you, Rob.

09:33

Thank you, Stanford for inviting us all here.

09:35

I’m having a flashback to the last time

09:38

I was on a stage at Stanford,

09:39

which was playing guitar at the CoHo

09:41

and I didn’t have either Yuval or Fei-Fei with me

09:43

so, there were about six people in the audience,

09:44

one of whom had her headphones on but, I did meet my wife.

09:49

[audience croons] Isn’t that sweet?

09:52

All right so, a reminder, housekeeping,

09:54

questions are going to come in, in Slido.

09:56

You can put them in, you can vote up questions,

09:58

we’ve already got several thousand

10:00

so please vote up the ones you really like.

10:02

If someone can program an AI that can get

10:04

a really devastating question in

10:06

and stump Yuval, I will get you

10:07

a free subscription to Wired.

10:10

I want this conversation to kind of have three parts.

10:13

First, lay out where we are,

10:15

then talk about some of the choices

10:18

we have to make now, and last talk about some advice

10:21

for all the wonderful people in the halls.

10:23

So, those are the three general areas,

10:25

I’ll feed in questions as we go.

10:27

We may have a specific period for questions

10:29

at the end but, let’s get cracking.

10:34

So, the last time we talked you said many,

10:36

many brilliant things but one that stuck out,

10:38

it was a line where you said,

10:40

We are not just in a technological crisis,

10:43

we are in a philosophical crisis.

10:46

So, explain what you meant, explain how it ties to AI,

10:49

and let’s get going with a note of existential angst.

10:55

Yes, I think what’s happening now

10:57

is that the philosophical framework of the modern world

11:01

that was established in the 17th and 18th centuries,

11:04

around ideas like human agency and individual free will,

11:10

is being challenged like never before.

11:13

Not by philosophical ideas but by practical technologies.

11:18

And we see more and more questions,

11:21

which used to be, you know, the bread and butter

11:25

of the philosophy department, being moved

11:28

to the engineering department.

11:30

And that’s scary, partly because, unlike philosophers,

11:35

who are extremely patient people,

11:37

they can discuss something for thousands of years

11:40

without reaching any agreement and they are fine with that,

11:44

[light audience laughter] the engineers won’t wait

11:46

and even if the engineers are willing to wait,

11:49

the investors behind the engineers, won’t wait.

11:54

So, it means that we don’t have a lot of time

11:57

and in order to encapsulate what the crisis is,

12:02

I know that, you know engineers,

12:03

especially in a place like Silicon Valley,

12:05

they like equations so, maybe I

12:07

can try to formulate an equation [laughing]

12:11

to explain what’s happening.

12:13

And the equation is B times C times D equals AHH.

12:19

Which means, biological knowledge

12:23

multiplied by computing power multiplied by data

12:27

equals the ability to hack humans.
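
To render Harari’s formula compactly (writing AHH for the ability to hack humans, as he spells out above; the abbreviation is our labeling of what the transcript explains, with B, C, and D standing for biological knowledge, computing power, and data):

\[
B \times C \times D = \mathrm{AHH}
\]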

12:31

And the AI revolution, or crisis, is not just AI,

12:35

it’s also biology, it’s biotech.

12:38

We haven’t seen anything yet

12:40

because the link is not complete.

12:44

There is a lot of hype now around AI in computers

12:47

but that is just half the story.

12:51

The other half is the abilities,

12:53

the biological knowledge coming from brain science

12:57

and biology and once you link that to AI,

13:02

what you get is the ability to hack humans.

13:06

And maybe I’ll explain what it means,

13:08

the ability to hack humans: to create an algorithm

13:11

that understands me better than I understand myself

13:15

and can therefore manipulate me, enhance me, or replace me.

13:21

And this is something that our philosophical baggage

13:25

and all our belief in, you know, human agency,

13:28

and free will, and the customer is always right,

13:31

and the voter knows best, this just falls apart

13:35

once you have this kind of ability.

13:39

Once you have this kind of ability

13:40

and it’s used to manipulate or replace you,

13:44

not if it’s used to enhance you?

13:45

Also when it’s used to enhance you,

13:47

the question is, who decides what is a good enhancement

13:52

and what is a bad enhancement.

13:54

So, our immediate fallback position

13:58

is to fall back on the traditional humanist ideas

14:03

that the customer is always right,

14:06

the customers will choose the enhancement,

14:08

or the voter is always right.

14:10

The voters will vote.

14:11

There will be a political decision about enhancement,

14:15

or if it feels good, do it.

14:18

We’ll just follow our heart, we’ll just listen to ourselves.

14:22

None of this works when there is a technology

14:26

to hack humans on a large scale.

14:28

You can’t trust your feelings,

14:31

or the voters, or the customers on that.

14:33

The easiest people to manipulate

14:35

are the people who believe in free will

14:37

because they think they cannot be manipulated.

14:40

So, how do you decide what to enhance if,

14:46

and this is a very deep ethical and philosophical question.

14:50

Again, it’s something philosophers have been debating

14:52

for thousands of years.

14:55

What are the good qualities we need to enhance?

14:58

So, if you can’t trust the customer,

15:00

if you can’t trust the voter,

15:02

if you can’t trust your feelings, who do you trust?

15:07

All right Fei-Fei, you have a PhD,

15:10

you have a CS degree, you’re Professor at Stanford.

15:12

Does A times B times C equal H? [laughing]

15:16

Is Yuval’s theory the right way

15:18

to look at where we’re headed?

15:21

Wow, what a beginning, thank you, Yuval.

15:24

Well, one of the things, I’ve been reading Yuval’s book

15:28

for the past couple of years, and talking to you,

15:32

and I’m very envious of philosophers now,

15:35

because they can propose questions

15:37

and crises but they don’t have to answer them.

15:47

Now, as an engineer and scientist,

15:49

I feel like we have to now solve the crisis.

15:53

So, honestly I think I’m very thankful.

15:57

I mean, personally I’ve been reading your book

15:59

for two years and I’m very thankful

16:03

that Yuval, among other people,

16:05

have opened up this really important question

16:09

for us and it’s also quite a…

16:13

When you said the AI crisis

16:16

and I was sitting there thinking,

16:17

this is a field I loved, and felt passionate about,

16:21

and researched for 20 years,

16:24

and that was just a scientific curiosity

16:27

of a young scientist entering a PhD in AI.

16:32

What happened, that 20 years later, it has become a crisis?

16:37

And it actually speaks to the evolution of AI

16:42

that got me where I am today

16:45

and got my colleagues at Stanford where we are today

16:48

with Human-Centered AI,

16:50

is that this is a transformative technology.

16:54

It’s a nascent technology, it’s still a budding science

16:58

compared to physics, chemistry, biology but,

17:02

with the power of data, computing,

17:05

and the kind of diverse impact AI is making,

17:09

it is like you said, is touching human lives

17:12

and business in broad and deep ways.

17:16

And responding to those kinds of questions

17:21

and crises facing humanity,

17:24

I think one of the proposed solutions,

17:29

or if not a solution, at least an attempt

17:31

that Stanford is making an effort about,

17:34

is can we reframe the education,

17:39

the research, and the dialogue of AI

17:42

and technology in general, in a human centered way.

17:46

We’re not necessarily gonna find a solution today but,

17:50

can we involve the humanists, the philosophers,

17:53

the historians, the political scientists,

17:56

the economists, the ethicists, the legal scholars,

18:00

the neuroscientists, the psychologists,

18:03

and many more other disciplines,

18:06

into the study and development of AI

18:10

in the next chapter, in the next phase.

18:14

Don’t be so certain we’re not gonna get an answer today.

18:15

I’ve got two of the smartest people in the world

18:17

glued to their chairs and I’ve got Slido

18:19

for 72 minutes so, let’s give it a shot.

18:21

But he said we have thousands of years.

18:26

But let me go a little bit further in Yuval’s questions.

18:30

So, to go back to Yuval’s opening statement,

18:33

there are a lot of crises about AI

18:35

that people talk about, right?

18:36

They talk about AI becoming conscious

18:38

and what will that mean,

18:39

they talk about job displacement,

18:40

They talk about biases, but Yuval has very clearly laid out

18:43

what he thinks is the most important one,

18:45

which is the combination of biology plus

18:48

computing plus data leading to hacking.

18:50

He’s laid out a very specific concern.

18:52

Is that specific concern, what people

18:55

who are thinking about AI should be focused on?

19:00

So, any technology humanity has created,

19:03

starting from fire, is a double-edged sword.

19:07

So, it can bring improvements to life and to work

19:12

and to society but it can bring the perils

19:15

and AI has its perils, you know?

19:17

I wake up every day worried

19:19

about the diversity and inclusion issue in AI.

19:23

We worry about fairness or the lack of fairness,

19:26

privacy, the labor market so,

19:30

absolutely we need to be concerned

19:33

and because of that we need to expand the study,

19:38

the research, and the development of policies,

19:43

and the dialogue of AI beyond just the codes

19:47

and the products into these human realms,

19:50

into these societal issues.

19:52

So, I absolutely agree with you on that,

19:54

that this is the moment to open the dialogue,

19:58

to open the research in those issues.

20:02

Okay. I would just say that again,

20:04

part of my fear is that the dialogue,

20:08

I don’t fear AI experts talking with philosophers,

20:11

I’m fine with that, historians good,

20:13

literary critics wonderful, I fear the moment

20:16

you start talking with biologists.

20:19

That’s my biggest fear.

20:21

When you and the biologists say,

20:23

Hey, we actually have a common language

20:26

and we can do things together.

20:28

And that’s when the really scary things happen, I think.

20:31

Can you elaborate on what is scaring you

20:34

that we talk to biologists?

20:36

That’s the moment when you can really hack human beings,

20:39

not by collecting data about our search words,

20:45

or our purchasing habits, or where do we go about town,

20:49

but you can actually start peering inside

20:52

and collect data directly

20:54

from our hearts and from our brains.

20:57

Okay, can I be specific?

20:59

First of all, the birth of AI was AI scientists

21:04

talking to biologists, specifically neuroscientists.

21:07

Right, the birth of AI is very much inspired

21:09

by what the brain does.

21:12

Fast-forward to sixty years later,

21:15

today’s AI is making great improvements in healthcare.

21:19

There’s a lot of data from our physiology

21:24

and pathology being collected

21:26

and using machine learning to help us but,

21:29

I feel like you’re talking about something else.

21:31

That’s part of it, I mean,

21:32

if there wasn’t a great promise in the technology,

21:37

there would also be no danger

21:38

because nobody would go along that path.

21:40

I mean, obviously, there are enormously beneficial things

21:46

that AI can do for us, especially

21:48

when it is linked with our biology.

21:51

We are about to get the best health care in the world,

21:55

in history, and the cheapest,

21:57

and available for billions of people via smartphones,

22:00

who today have almost nothing.

22:02

And this is why it is almost impossible to resist

22:06

the temptation, even with all the issues now of privacy.

22:11

If you have a big battle between privacy and health,

22:14

health is likely to win hands down.

22:17

So, I fully agree with that and, you know,

22:21

my job as a historian, as a philosopher,

22:24

as a social critic, is to point out the dangers in that

22:29

because especially in Silicon Valley,

22:31

people are very much familiar with the advantages

22:35

but they don’t like to think so much

22:37

about the dangers and the big danger

22:40

is what happens when you can hack the brain

22:44

and that can serve not just your healthcare provider,

22:47

that can serve so many things from a crazy dictator, to–

22:53

Let’s focus on that, what it means to hack the brain.

22:55

Like what, right now in some ways,

22:56

my brain is hacked, right?

22:57

There’s an allure of this device,

22:59

it wants me to check it constantly.

23:00

Like, my brain has been a little bit hacked.

23:02

Yours hasn’t because you meditate two hours a day

23:04

but mine has and probably [laughter]

23:05

most of these people have.

23:07

But what exactly is the future brain hacking

23:10

going to be, that it isn’t today?

23:14

Much more of the same, but on a much larger scale.

23:18

I mean, the point when for example,

23:21

more and more of your personal decisions in life

23:25

are being outsourced to an algorithm

23:28

that is just so much better than you.

23:30

So, you know we have two distinct dystopias

23:35

that kind of mesh together.

23:37

We have the dystopia of surveillance capitalism

23:42

in which there is no like, Big Brother dictator

23:47

but more and more of your decisions

23:50

are being made by an algorithm

23:52

and it’s not just decisions about what to eat,

23:55

or what to shop, but decisions like,

23:57

where to work, and where to study, and whom to date,

24:01

and whom to marry, and whom to vote for.

24:03

It’s the same logic and I would be curious to hear

24:07

if you think that there is anything in humans,

24:10

which is by definition un-hackable,

24:13

that we can’t reach a point when the algorithm

24:16

can make that decision better than me.

24:19

So, that’s one line of dystopia

24:21

which is a bit more familiar in this part of the world

24:26

and then you have the full-fledged dystopia

24:29

of a totalitarian regime

24:31

based on a total surveillance system.

24:35

Something like the totalitarian regimes

24:37

that we have seen in the twentieth century

24:39

but augmented with biometric sensors

24:43

and the ability to basically track

24:46

each and every individual, 24 hours a day.

24:51

And you know, which in the days of,

24:52

I don’t know, Stalin or Hitler, was absolutely impossible

24:55

because they didn’t have the technology

24:57

but maybe, it might be possible in 20 years or 30 years.

25:02

So, we can choose which dystopia to discuss

25:05

but they are very close in–

25:07

Let’s choose the liberal democracy dystopia.

25:10

Fei-Fei, do you want to answer Yuval’s specific question,

25:12

which is, in that dystopia,

25:14

a liberal democracy dystopia, is there something endemic

25:18

to humans that cannot be hacked?

25:19

So, when you asked me that question just two minutes ago,

25:23

the first word that came to my mind is love.

25:29

Ask Tinder, I don’t know.

25:31

[crowd and panel laughing]

25:37

Dating is not the entirety of love, I hope.

25:43

The question is which kind of love are you referring to?

25:47

If you are referring to this, you know I don’t know,

25:51

Greek philosophical love or the loving kindness of Buddhism,

25:56

that’s one question,

25:57

which I think is much more complicated.

25:59

If you are referring to the

26:02

biological mammalian courtship rituals,

26:11

But humans– Why is it different

26:12

from anything else that is happening in the body?

26:14

But humans are humans because there are some parts of us

26:18

that are beyond the mammalian courtship, right?

26:22

So, is that part hackable?

26:24

That’s the question?

26:25

I mean, you know, in most science fiction books

26:29

and movies, they give your answer.

26:31

When the extra-terrestrial evil robots

26:34

are about to conquer planet Earth

26:37

and nothing can resist them, resistance is futile,

26:40

at the very last moment,

26:43

humans win. It’s just one thing,

26:44

Because the robots don’t understand love.

26:46

Last moment there’s one heroic white dude that saves us.

26:50

[audience cheering and applause] [laughter]

26:57

No, no, it was a joke, don’t worry.

27:00

[audience and panel laughter]

27:02

But, okay so, the two dystopias,

27:06

I do not have answers to the two dystopias

27:09

but what I want to keep saying is

27:11

this is precisely why this is the moment

27:14

that we need to seek for solutions.

27:17

This is precisely why this is the moment

27:20

that we believe the new chapter of AI needs to be written

27:24

by cross-pollinating efforts from humanists,

27:32

social scientists, to business leaders,

27:36

to civil society, to governments, coming to the same table

27:41

to have that multilateral and cooperative conversation.

27:46

I think you really bring out the urgency

27:50

and the importance and the scale of this potential crisis

27:54

but I think in the face of that, we need to act.

27:58

Yeah, and I agree that we need cooperation,

28:01

that we need much closer cooperation

28:02

between engineers and philosophers

28:05

or engineers and historians

28:07

and also from a philosophical perspective,

28:09

I think there is something wonderful

28:11

about engineers, philosophically.

28:14

Thank you. [laughing]

28:15

That you really cut the bullshit.

28:18

I mean, philosophers can talk and talk you know,

28:21

in cloudy and flowery metaphors

28:24

and then the engineers can really focus the question.

28:28

Like, I just had a discussion the other day

28:29

with an engineer from Google about this

28:32

and he said, Okay, I know how to maximize

28:37

people’s time on the website.

28:39

If somebody comes to me and tells me,

28:41

Look, your job is to maximize time on this application.

28:46

I know how to do it because I know how to measure it.

28:49

But if somebody comes along and tells me,

28:51

Well you need to maximize human flourishing

28:55

or You need to maximize universal love,

28:58

I don’t know what it means.

29:01

So, the engineers go back to the philosophers

29:03

and ask them, what do you actually mean.

29:06

Which, you know, a lot of philosophical theories

29:09

collapse around that because they can’t really explain

29:13

what they mean, and we need this kind of collaboration.

29:18

We need an equation for that. In order to move forward.

29:20

But then Yuval, is Fei-Fei right?

29:22

If we can’t explain and we can’t code love,

29:26

can artificial intelligence ever recreate it

29:27

or is it something intrinsic to humans

29:29

that the machines will never emulate?

29:32

I don’t think that machines will feel love

29:36

but you don’t necessarily need to feel it

29:39

in order to be able to hack it,

29:41

to monitor it, to predict it, to manipulate it.

29:44

I mean, machines don’t like to play Candy Crush.

29:47

But you think they can– But they can still–

29:49

This device, in some future

29:51

where it’s infinitely more powerful

29:53

than it is right now, could make me fall in love

29:54

with somebody in the audience?

29:56

Hmm, that goes to the question of consciousness

30:00

Let’s go there. I don’t think that we have

30:03

the understanding of what consciousness is

30:06

to answer the question, whether a non-organic consciousness

30:11

is possible or is not possible.

30:13

I think we just don’t know but again

30:17

the bar for hacking humans is much lower.

30:21

The machines don’t need to have consciousness of their own

30:24

in order to predict our choices

30:27

and manipulate our choices, they just need…

30:31

If you accept that something like love is,

30:35

in the end, a biological process in the body,

30:39

If you think that AI can provide us

30:42

with wonderful health care

30:43

by being able to monitor and predict

30:48

something like the flu or something like cancer,

30:51

what’s the essential difference between flu and love?

30:54

[audience applause]

30:55

In the sense of, is this biological

30:59

or is this something else, which is so separated

31:05

from the biological reality of the body,

31:08

that even if we have a machine

31:11

that is capable of monitoring and predicting flu,

31:14

it still lacks something essential

31:17

in order to do the same thing with love.

31:21

So, I want to make two comments

31:22

and this is where my engineering,

31:25

you know, personality is speaking.

31:27

We’re making two very important assumptions

31:30

in this part of the conversation.

31:32

One is that AI is so omnipotent

31:36

that it has achieved a state

31:38

that it’s beyond predicting anything physical,

31:42

it’s getting to the consciousness level

31:44

and getting to the, even the ultimate,

31:47

the love level of capability

31:50

and I do want to make sure that we recognize

31:54

that we’re very, very, very far from that.

31:56

This technology is still very nascent.

31:59

Part of the concern I have about today’s AI

32:02

is the super-hyping of its capability so,

32:07

I’m not saying that, that’s not a valid question

32:10

but I think that part of this conversation

32:13

is built upon that assumption that this technology

32:15

has become that powerful and there’s,

32:18

I don’t even know how many decades we are from that.

32:21

The second related assumption, I feel, is that

32:25

our conversation is being based on the premise

32:27

that we’re talking about a world or state of the world

32:32

where only that powerful AI exists,

32:36

or that small group of people

32:38

who have produced the powerful AI

32:40

and who intend to hack humans, exist.

32:44

But in fact our human society is so complex

32:48

there’s so many of us, right?

32:50

I mean, humanity in its history,

32:53

has faced so many technologies,

32:55

if we left it in the hands of a bad player,

32:59

alone without any regulation, multinational collaboration,

33:03

rules, laws, moral codes, that technology could have,

33:07

maybe not hacked humans but destroyed humans

33:10

or hurt humans in massive ways.

33:13

It has happened but by and large,

33:16

our society in a historical view

33:20

is moving to a more civilized and controlled state.

33:24

So, I think it’s important to look at that greater society

33:29

and bringing other players and people into this dialogue

33:33

so we don’t talk like there is only this omnipotent AI,

33:38

you know, deciding it’s gonna hack everything to the end.

33:42

And that brings me to your topic: in addition

33:47

to hacking humans at that level that you’re talking about,

33:51

there are some very immediate concerns already.

33:55

Diversity, privacy, labor, legal changes,

34:01

you know, international geopolitics

34:04

and I think it’s critical to tackle those now.

34:09

I love talking to AI researchers

34:10

because five years ago, all the AI researchers were like,

34:12

It’s much more powerful than you think and now

34:14

they’re all like, It’s not as powerful as you think.

34:16

[audience and panel laughter]

34:22

Let me ask, It’s because five years ago

34:25

you had no idea what AI was,

34:26

I’m not saying it’s wrong. Now, you’re extrapolating

34:30

I didn’t say it was wrong, I just said it was a thing.

34:33

I want to go into what you just said

34:35

but before you do that I want to take one question here

34:37

from the audience because once we move

34:40

into the second section, we won’t be able to answer it.

34:42

So, the question is, it’s for you Yuval,

34:44

this is from Mara and Lucini: How can we avoid

34:47

the formation of AI-powered digital dictatorships?

34:49

So, how do we avoid dystopia number two?

34:51

Let’s answer that and then let’s go Fei-Fei,

34:53

into what we can do right now,

34:55

not what we can do in the future.

34:58

The key issue is how to regulate the ownership of data

35:03

because we won’t stop research in biology

35:07

and we won’t stop research in computer science and AI.

35:10

So, for the three components of biological knowledge,

35:13

computing power, and data, I think data is the easiest

35:17

and it’s also very difficult but still the easiest,

35:19

kind of, to regulate or to protect.

35:22

Place some protections there and there are efforts

35:24

now being made and they are not just political efforts but,

35:29

also philosophical efforts to really conceptualize,

35:33

what does it mean to own data

35:35

or to regulate the ownership of data

35:38

because we have a fairly good understanding

35:40

what it means to own land,

35:42

we had thousands of years of experience with that,

35:45

we have a very poor understanding

35:47

of what it actually means to own data

35:50

and how to regulate it.

35:51

But this is the very important front

35:54

that we need to focus on in order to prevent

35:59

the worst dystopian outcomes

36:02

and I agree that AI is not nearly as powerful

36:06

as some people imagine but this is why,

36:09

and again, I think we need to place the bar low

36:12

to reach a critical threshold,

36:16

we don’t need the AI to know us perfectly,

36:19

which will never happen, we just need the AI

36:23

to know us better than we know ourselves.

36:26

Which is not so difficult because most people

36:28

don’t know themselves very well

36:31

and often make [laughter and audience applause]

36:33

huge mistakes in critical decisions.

36:37

So, whether it’s finance, or career, or love life,

36:40

to have this shift in authority

36:44

from humans to algorithms, they can still be terrible

36:48

but as long as they are a bit less terrible

36:50

than us, the authority will shift to them.

36:53

Yuval, in your book you tell a very illuminating story

36:58

about yourself and your own coming to terms

36:59

with who you are and how you could be manipulated.

37:02

Will you tell that story here,

37:03

about coming to terms with your sexuality

37:05

and the story you told about Coca-Cola

37:06

in your book, because I think that will make it clear

37:08

what you mean here, very well.

37:09

Yes so, I said that I only realized

37:14

that I was gay when I was 21.

37:17

And I look back at the time when I was,

37:19

I don’t know, 15, 17 and it should’ve been so obvious.

37:25

And it’s not like a stranger like,

37:27

I’m with myself 24 hours a day [laughter]

37:30

and I just didn’t notice any of, like,

37:34

the screaming signs that are saying,

37:35

There, you are gay, and I don’t know how

37:38

but the fact is, I missed it.

37:41

Now, an AI, even a very stupid AI,

37:45

today, will not miss it.

37:46

[audience and panel laughing] I’m not so sure.

37:50

So imagine, this is not like, you know, like,

37:52

a science fiction scenario of a century from now,

37:55

this can happen today, that you can write

37:58

all kinds of algorithms that, you know,

38:00

they are not perfect but they are still better,

38:03

say than the average teenager

38:05

and what does it mean to live in a world

38:07

in which you learn about something so important

38:10

about yourself, from an algorithm.

38:14

What happens if the algorithm doesn’t

38:16

share the information with you

38:18

but it shares the information

38:20

with advertisers or with governments?

38:24

So, if you want to, and I think we should,

38:28

go down from the cloudy heights of,

38:30

you know, the extreme scenarios

38:33

to the practicalities of day-to-day life,

38:36

this is a good example because this is already happening.

38:39

Yeah, all right well let’s take the elevator

38:40

down to the more conceptual level

38:44

of this particular shopping mall

38:45

that we’re shopping in today

38:46

and Fei-Fei, let’s talk about what we can do today

38:50

as we think about the risks of AI, the benefits of AI,

38:53

and tell us you know, sort of your punch list,

38:56

of what you think the most important things

38:58

we should be thinking about with AI are.

38:59

Wow, boy there are so many things we could do today

39:03

and I cannot agree more with Yuval,

39:07

that this is such an important topic.

39:09

Again I’m gonna try to speak about all the efforts

39:13

that are being made at Stanford

39:14

because I think this is a good representation

39:18

of what we believe, there are so many efforts we can do.

39:21

So, in human-centered AI,

39:24

which is the overall theme, we believe

39:27

that the next chapter of AI should be human-centered.

39:31

We believe in three major principles.

39:34

One principle is to invest in the next generation

39:38

of AI technology that reflects more

39:43

of the kind of human intelligence we would like.

39:46

I was just thinking about your comment

39:48

about AI’s dependence on data and how the policy

39:52

and governance of data should emerge

39:54

in order to regulate and govern the AI impact.

40:01

Well, we should be developing technology

40:05

that can explain AI; in the technical field

40:08

we call it explainable AI or AI interpretability studies.

40:13

We should be focusing on technology that has

40:16

a more nuanced understanding of human intelligence.

40:19

We should be investing in the development

40:24

of less data dependent AI technology

40:29

that would take into consideration intuition, knowledge,

40:33

creativity, and other forms of human intelligence.

40:38

So, that kind of human intelligence inspired AI

40:41

is one of our principles.

40:43

The second principle is to, again welcome in

40:47

the kind of multidisciplinary study

40:50

of AI cross-pollinating with economics,

40:54

with ethics, with law, with philosophy,

40:57

with history, cognitive science, and so on

41:01

because there is so much more we need to understand

41:06

in terms of AI’s social, human,

41:08

anthropological, ethical impact

41:11

and we cannot possibly do this alone as technologists.

41:16

Some of us shouldn’t even be doing this,

41:18

it’s the ethicists and philosophers who should participate

41:22

and work with us on these issues.

41:24

So, that’s the second principle and the third principle…

41:28

Oh, and within this we work with policymakers,

41:32

we convene the kind of dialogues

41:36

of multilateral stakeholders.

41:39

Then the third, the last but not the least,

41:42

I think Nick, you said that at the very beginning

41:44

of this conversation that we need to promote

41:47

the human-enhancing and collaborative

41:50

and augmentative aspects of this technology.

41:54

You have a point, even there it can become manipulative

41:58

but we need to start with that sense of alertness,

42:03

understanding, but still promote

42:04

that kind of benevolent applications

42:07

and design of this technology.

42:10

At least these are the three principles

42:13

Stanford’s Human-Centered AI Institute is based on

42:17

and I just feel very proud, within a short few months

42:21

of the birth of this Institute,

42:23

there are more than 200 faculty involved on this campus

42:28

in this kind of research, dialogue, you know,

42:32

study, and education, and that number is still growing.

42:38

Of those three principles let’s start digging into them.

42:42

So, let’s go to number one, explainability,

42:44

’cause this is a really interesting debate

42:45

in artificial intelligence so,

42:47

there are some practitioners who say

42:49

you should have algorithms that can explain

42:51

what they did and the choices they made.

42:52

It sounds eminently sensible but how do you do that?

42:56

I make all kinds of decisions that I can’t entirely explain

42:59

like, why did I hire this person over that person?

43:01

I can tell a story about why I did it

43:04

but I don’t know for sure.

43:05

Like, we don’t know ourselves well enough

43:07

to always be able to truthfully

43:08

and fully explain what we did.

43:09

How can we expect a computer using AI to do that?

43:13

And, if we demand that here in the West

43:17

then there are other parts of the world

43:18

that don’t demand that, who may be able to move faster.

43:20

So, why don’t we start, why don’t I ask you

43:22

the first part of that question,

43:23

Yuval, the second part of that question.

43:24

So, the first part is, can we actually get explainability

43:27

if it’s super hard even within ourselves?

43:30

Well, it’s pretty hard for me to multiply two digits

43:33

but you know, computers can do that.

43:37

So, the fact that something is hard for humans

43:39

doesn’t mean we shouldn’t try to get the machines to do it,

43:42

especially, after all, these algorithms

43:45

are based on very simple mathematical logic.

43:48

Granted, we’re dealing with neural networks these days

43:52

of millions of nodes and billions of connections so,

43:55

explainability is actually tough, it’s an ongoing research topic.

44:00

But I think this is such a fertile ground

44:04

and it’s so critical when it comes to health care decisions,

44:08

financial decisions, legal decisions,

44:11

there’s so many scenarios where this technology

44:16

can be potentially, positively useful

44:19

but with that kind of explainable capabilities so,

44:23

we’ve gotta try and I’m pretty confident

44:25

with a lot of smart minds out there,

44:27

this is a crackable thing

44:29

and on top of that– Got 200 professors on it.

44:32

Right, not all of them doing AI algorithms.

44:35

On top of that, I think you have a point that

44:39

if we have technology that can explain

44:44

the decision making process of algorithms,

44:48

it makes it harder for it to manipulate and cheat, right?

44:52

It’s a technical solution, not the entirety of the solution,

44:57

that will contribute to the clarification

45:01

of what this technology is doing.

45:05

But because the, presumably the AI,

45:08

makes decisions in a radically different way than humans

45:12

then even if the AI explains its logic

45:16

the fear is it will make absolutely no sense to most humans.

45:21

Most humans, when they are asked to explain a decision

45:23

they tell a story in a narrative form,

45:27

which may or may not reflect

45:29

what is actually happening within them,

45:31

in many cases it doesn’t reflect.

45:33

It’s just a made-up rationalization and not the real thing.

45:38

Now, an AI could be much better than a human

45:41

in telling me like, I applied to the bank for a loan

45:46

and the bank says no and I ask why not

45:50

and the bank says, Okay, we’ll ask our AI

45:53

and the AI gives this extremely long,

45:56

statistical analysis based,

46:00

not on one or two salient features of my life

46:05

but on 2,517 different data points

46:10

which it took into account and gave different weights

46:14

and why did you give this, this weight

46:16

and why did you give that, oh, there is another book about that

46:19

and most of the data points would seem,

46:23

to a human, completely irrelevant.

46:26

You applied for a loan on Monday

46:29

and not on Wednesday and the AI discovered that

46:33

for whatever reason, it’s after the weekend, whatever,

46:36

people who apply for loans on a Monday

46:39

are 0.075 percent less likely to repay the loan.

46:45

So, it goes into the equation

46:48

and I get this book of the real explanation,

46:51

finally I get a real explanation.
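
As a purely illustrative sketch of what such a book of an explanation looks like, here is a toy linear credit-scoring model in Python; the feature names, weights, and threshold are hypothetical, chosen only to echo the day-of-week example above, not any real bank’s system:

```python
# Purely illustrative sketch: a toy linear credit-scoring model whose only honest
# "explanation" is the full list of weighted feature contributions. All feature
# names, weights, and the threshold are hypothetical, echoing the example above.

weights = {
    "applied_on_monday": -0.00075,      # tiny day-of-week effect, like the 0.075% example
    "years_at_current_job": 0.02,
    "missed_payments_last_year": -0.15,
    # ... imagine roughly 2,500 more data points like these
}

applicant = {
    "applied_on_monday": 1.0,
    "years_at_current_job": 3.0,
    "missed_payments_last_year": 1.0,
}

APPROVAL_THRESHOLD = 0.0  # hypothetical cutoff

def score(applicant, weights):
    """Sum the weighted contributions of every data point about the applicant."""
    return sum(weights[name] * value for name, value in applicant.items())

def explain(applicant, weights):
    """The 'real' explanation: every feature's contribution, however small."""
    contributions = {name: weights[name] * value for name, value in applicant.items()}
    # Sorted by absolute size, this is still just a long list of numbers:
    # accurate, but not a narrative a person can easily act on.
    return sorted(contributions.items(), key=lambda kv: abs(kv[1]), reverse=True)

if __name__ == "__main__":
    total = score(applicant, weights)
    print("approved" if total >= APPROVAL_THRESHOLD else "declined", round(total, 5))
    for name, contribution in explain(applicant, weights):
        print(f"  {name}: {contribution:+.5f}")
```

The honest output is the full sorted list of weighted contributions, which is accurate but, as Harari says next, not something most people know what to do with.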

46:54

It’s not like sitting with a human banker

46:56

that just bullshits me [audience laughing]

46:59

What do I do with a book? Are you rooting for AI?

47:02

Are you saying AI’s good in this case?

47:04

In many cases, yes I mean, I think in many cas…

47:07

I mean, it’s two sides of the coin.

47:09

I think that in many ways the AI in this scenario

47:13

will be an improvement over the human banker

47:16

because for example, you can really know

47:19

what the decision is based on presumably,

47:22

but it’s based on something that I,

47:25

as a human being, just cannot grasp.

47:27

I know how to deal with simple narrative stories.

47:31

I didn’t give you a loan because you’re gay,

47:34

that’s not good or because you didn’t repay

47:37

any of your previous loans.

47:39

Okay, I can understand that.

47:40

But my mind doesn’t know what to do

47:45

with the real explanation that the AI will give,

47:48

which is just this crazy statistical thing, which–

47:51

Okay so, there are two layers to your comment.

47:54

One, is how do you trust

47:56

and be able to comprehend AI’s explanation?

48:00

Second, is actually, can AI be used

48:03

to make humans more trustable

48:05

or be more trustable than the humans?

48:09

On the first point, I agree with you.

48:11

If AI gives you two thousand dimensions

48:13

of potential features with probability,

48:16

it’s not human understandable

48:18

but the entire history of science in human civilization

48:22

is to be able to communicate the result of science

48:27

in better and better ways, right?

48:29

Like, I just had my annual physical

48:30

and a whole bunch of numbers came to my cell phone

48:34

and well, first of all, my doctors can,

48:38

the expert can help me to explain these numbers.

48:41

Now, even Wikipedia can help me

48:43

to explain some of these numbers.

48:45

But the technological means

48:49

of explaining these will improve.

48:52

It’s our failure as AI technologists

48:56

if we just throw two hundred or two thousand dimensions

49:00

of probability numbers at you.

49:01

But I mean, this is the explanation and I think

49:06

that the point you raise

49:07

is very important but I see it differently.

49:10

I think science is getting worse and worse

49:13

in explaining its theories and findings to the general public.

49:16

Which is the reason for things like,

49:19

doubting climate change and so forth

49:21

and it’s not really even the fault of the scientists

49:25

because the science is just getting more

49:27

and more complicated and reality is extremely complicated

49:31

and the human mind wasn’t adapted

49:34

to understanding the dynamics of climate change

49:38

or the real reasons for refusing to give somebody a loan.

49:43

That’s the point when you have…

49:47

Again, let’s put aside the whole question of manipulation

49:50

and how can I trust.

49:51

Let’s assume the AI is benign

49:54

and let’s assume that there are no hidden biases,

49:57

everything is okay but, still I can’t understand,

50:02

the decision of the AI. That’s why Nick,

50:03

people like Nick, the storyteller, says to expla…

50:07

What I’m saying is, you’re right, it’s very complex

50:10

but there are people like–

50:12

I’m gonna lose my job to a computer like, next week

50:14

but I’m happy to have your confidence in me.

50:16

But that’s the job of the society collectively

50:19

to explain the complex science.

50:21

I’m not saying we’re doing a great job, at all but,

50:25

I’m saying there is hope if we try.

50:27

But my fear is that we just really can’t do it

50:31

because the human mind is not built

50:37

for dealing with these kinds of explanations

50:41

and technologies and it’s true for,

50:43

I mean, it’s true for the individual customer

50:45

who goes to the bank

50:47

and the bank refused to give them a loan

50:50

and it can even be on the level, I mean,

50:53

how many people today on earth

50:55

understand the financial system?

50:57

[silence followed by light laughter]

50:59

How many presidents and prime ministers

51:02

understand the financial system?

51:04

In this country, zero. [audience laughter and applause]

51:10

So, what does it mean to live in a society

51:15

where the people who are supposed

51:17

to be running the business, and again,

51:19

it’s not the fault of a particular politician

51:23

it’s just the financial system has become so complicated

51:27

and I don’t think that economists

51:29

are trying on purpose to hide something from the general public,

51:33

it’s just extremely complicated.

51:35

You had some of the wisest people in the world

51:39

go into the finance industry

51:41

and create these enormously complex models

51:45

and tools which, objectively, you just can’t explain

51:52

to most people unless first of all,

51:55

they study economics and mathematics

51:57

for 10 years or whatever so, I think this is a real crisis.

52:02

And this again, this is part of

52:04

the philosophical crisis we started with

52:07

and the undermining of human agency.

52:12

That’s part of what’s happening,

52:16

that we have these extremely intelligent tools

52:21

that are able to make, perhaps better decisions

52:24

about our health care, about our financial system,

52:27

but we can’t understand what they are doing

52:31

and why they are doing it and this undermines our autonomy

52:36

and our authority and we don’t know

52:39

as a society, how to deal with that.

52:41

Well, ideally, Fei-Fei’s Institute will help that.

52:45

Before we leave this topic though,

52:46

I want to move to a very closely related question,

52:48

which I think is one of the most interesting,

52:50

which is the question of bias in algorithms,

52:52

which is something you’ve spoken eloquently about

52:54

and let’s stay with the financial systems.

52:55

So, you can imagine an algorithm used by a bank

52:58

to determine whether somebody should get a loan

53:00

and you can imagine training it on historical data

53:02

and historical data is racist and we don’t want that,

53:05

so let’s figure out how to make sure the data isn’t racist

53:09

and that it gives loans to people regardless of race.

53:11

And we probably all, everybody in this room agrees that,

53:13

that is a good outcome but let’s say that

53:16

analyzing the historical data suggests

53:17

that women are more likely to repay their loans than men.

53:20

Do we strip that out or do we allow that to stay in?

53:23

If you allow it to stay in,

53:24

you get a slightly more efficient financial system.

53:26

If you strip it out,

53:27

you have a little more equality between men and women.

53:30

How do you make decisions about

53:32

what biases you want to strip

53:34

and which ones are okay to keep?

53:36

That’s an excellent question, Nick, I mean,

53:38

I’m not gonna have the answers personally

53:40

but I think you touched on the really important question.

53:43

First of all, machine learning system bias

53:47

is a real thing you know, like you said.

53:50

It starts with data, it probably starts

53:52

with the very moment we’re collecting data

53:55

and the type of data we’re collecting

53:57

all the way through the whole pipeline

53:58

and then all the way to the application

54:01

but biases come in very complex ways.

54:07

At Stanford, we have machine learning scientists

54:09

studying the technical solutions of bias like,

54:13

you know de-biasing data

54:15

and normalizing certain decision-making

54:19

but we also have humanists debating about what is biased,

54:24

what is fairness, when is bias good,

54:27

when is bias bad so, I think you

54:30

just opened up a perfect topic for research

54:34

and debate and conversation in this topic

54:39

and I also want to point out that Yuval,

54:44

you already used a very closely related example,

54:47

a machine learning algorithm has the potential

54:50

to actually expose bias, right?

54:55

Like, one of my favorite studies was a paper

54:59

a couple of years ago analyzing Hollywood movies

55:03

and using a machine learning face recognition algorithm,

55:06

which is a very controversial technology these days,

55:09

to recognize that Hollywood systematically gives more screen time

55:13

to male actors than female actors.

55:17

No human being can sit there

55:19

and count all the frames of faces

55:21

and gender, and this is a perfect example

55:24

of using machine learning to expose bias.
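
As a rough sketch of the kind of counting involved, here is what tallying screen time by detected gender could look like in Python; detect_faces is a hypothetical stand-in for a real face detection model, not the actual system used in the study Fei-Fei cites:

```python
# Rough sketch of counting screen time by detected gender across a film's frames.
# `detect_faces` is a hypothetical stand-in for a real face detection model; it is
# NOT the system used in the study mentioned in the conversation.

from collections import Counter

def detect_faces(frame):
    """Hypothetical detector: returns labels such as 'male' / 'female' for each face in a frame."""
    raise NotImplementedError("plug in a real face detection/recognition model here")

def screen_time_by_gender(frames, fps=24.0):
    """Count, per label, the frames in which at least one such face appears; convert to seconds."""
    frame_counts = Counter()
    for frame in frames:
        labels = set(detect_faces(frame))  # count each label at most once per frame
        frame_counts.update(labels)
    return {label: count / fps for label, count in frame_counts.items()}

# Usage, assuming `frames` is an iterable of decoded video frames:
#   print(screen_time_by_gender(frames))
```

The point of the sketch is simply that a machine can do, across every frame of every film, the counting no human being could sit through.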

55:27

So, in general there’s a rich set of issues

55:33

we should study and again, bring the humanists,

55:36

bring the ethicists, bring the legal scholars,

55:39

bring the gender study experts.

55:41

Agreed though, standing up for humans,

55:43

I knew Hollywood was sexist

55:44

even before that paper but yes, agreed.

55:46

You are a smart human. [light laughter]

55:49

Yuval, on that question of the loans,

55:51

do you strip out the racist data,

55:53

do you strip out the gender data,

55:54

what biases do you get rid of,

55:55

what biases do you not?

55:59

I don’t think there is a one-size-fits-all.

56:01

I mean, it’s a question…

56:02

we need this day-to-day collaboration

56:07

between engineers, and ethicists,

56:10

and psychologists, and political scientists–

56:12

But not biologists, right?

56:14

[laughter] But not biologists? and increasing– [laughter]

56:17

And increasingly, also biologists.

56:22

It goes back to the question, what should we do?

56:25

So, we should teach ethics

56:28

to coders as part of their curriculum.

56:32

The people in the world today

56:34

who most need a background in ethics

56:37

are the people in the computer science departments,

56:40

so it should be an integral part of the curriculum

56:44

and it’s the same in the big corporations,

56:48

which are designing these tools,

56:51

they should be embedded within the teams,

56:55

people with a background in things like ethics,

56:58

like politics, so that they always think

57:02

in terms of what biases we might inadvertently

57:07

be building into our system.

57:09

What could be the cultural or political implications

57:13

of what we are building?

57:15

It shouldn’t be a kind of afterthought

57:17

that you create this neat technical gadget,

57:21

it goes into the world, something bad happens,

57:23

and then you start thinking,

57:24

Oh, we didn’t see this one coming. What do we do now?

57:27

From the very beginning, it should be clear

57:31

that this is part of the process.

57:33

Yep, I do want to give a shout out to Rob Reich

57:36

who just introduced this whole event.

57:39

He and my colleagues, Mehran Sahami

57:42

and a few other Stanford professors have opened this course

57:46

called Ethics and Computation and, sorry Rob,

57:50

I’m abusing the title of your course

57:52

but this is exactly the kind of class it is…

57:55

I think this quarter, the offering

57:56

has more than 300 students signed up for it.

58:03

I wish the course had existed when I was a student here.

58:05

Let me ask an excellent question

58:06

from the audience, it ties into this.

58:07

This is from Yu Jin Lee:

58:08

how do you reconcile the inherent trade-offs

58:11

between explainability and efficacy

58:13

and accuracy of algorithms?

58:18

This question seems to be assuming that if you can explain it,

58:22

you’re less good or less accurate.

58:25

Well, you can imagine that if you require explainability

58:29

you lose some level of efficiency,

58:31

you’re adding a little bit of complexity to the algorithm.

58:34

So okay, first of all,

58:35

I don’t necessarily believe in that,

58:37

there’s no mathematical logic to this assumption.

58:42

Second let’s assume there is a possibility

58:45

that an explainable algorithm suffers in efficiency.

58:50

I think this is a societal decision we have to make.

58:53

You know, when we put the seatbelt in our car,

58:59

driving that’s a little bit of an efficiency loss

59:02

’cause I have to do that seatbelt movement

59:04

instead of just hopping in and driving

59:06

but as a society we decided

59:08

we can afford that loss of efficiency

59:11

because we care more about human safety.

59:13

So, I think AI is the same kind of technology

59:16

as we make these kinds of decisions going forward

59:20

in our solutions, in our products,

59:22

we have to balance human wellbeing

59:24

and societal well-being with efficiency.
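(One way to ground this point is to measure the supposed trade-off rather than assume it: compare an interpretable model with a black-box model on the same task and see how much accuracy, if any, is given up. The sketch below does that with scikit-learn on a bundled dataset; it is an illustration only, not something shown in the talk.)

```python
# Illustrative sketch: measure, rather than assume, the explainability /
# accuracy trade-off on one dataset.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

X, y = load_breast_cancer(return_X_y=True)

models = [
    ("logistic regression (interpretable)", LogisticRegression(max_iter=5000)),
    ("gradient boosting (black box)", GradientBoostingClassifier()),
]
for name, model in models:
    acc = cross_val_score(model, X, y, cv=5).mean()
    print(f"{name}: mean cross-validated accuracy = {acc:.3f}")
# On many tabular problems the gap is small or zero; whether any remaining
# cost is worth paying is the societal, seatbelt-style decision described above.
```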

59:27

So Yuval, let me ask you,

59:29

the global consequences of this are something

59:31

that a number of people have asked about

59:32

in different ways and we’ve touched on

59:33

but we haven’t hit head-on.

59:34

There are two countries: imagine a country A,

59:36

and you have country B.

59:37

Country A says all of you AI engineers,

59:39

you have to make it explainable,

59:40

you have to take ethics classes,

59:42

you have to really think about

59:44

the consequences of what you’re doing,

59:45

you got to have dinner with biologists,

59:46

you have to think about love,

59:47

and you have to like, read you know, John Locke.

59:52

Country B says just go build some stuff, right?

59:56

These two countries, at some point,

59:57

are gonna come into conflict and I’m gonna guess

00:00

that country B’s technology might be ahead of country A’s.

00:05

Yeah, that’s always the concern with arms races,

00:08

which become a race to the bottom

00:11

in the name of efficiency and domination

00:14

and we are in, I mean…

00:16

What is extremely problematic or dangerous

00:19

about the situation now with AI

00:23

is that more and more countries are waking up

00:25

to the realization that this could be

00:28

the technology of domination in the 21st century.

00:32

So, you’re not talking about just any economic competition

00:37

between the different textile industries

00:40

or even between different oil industries,

00:42

like one country decides, we don’t care

00:45

about the environment at all, we’ll just go full gas ahead

00:49

and the other country is much more environmentally aware.

00:52

The situation with AI is potentially much worse

00:56

because it could be really, the technology of domination

01:01

in the 21st century and those left behind

01:06

could be dominated, exploited,

01:10

conquered by those who forge ahead.

01:14

So, nobody wants to stay behind

01:16

and I think the only way to prevent

01:21

this kind of catastrophic arms race to the bottom

01:24

is greater global cooperation around AI.

01:28

Now this sounds utopian because we are now moving

01:32

in exactly the opposite direction,

01:34

of more and more rivalry and competition

01:38

but this is, I think, part of our job

01:41

like with the nuclear arms race,

01:44

to make people in different countries realize that

01:50

this is an arms race, that whoever wins, humanity loses.

01:55

And it’s the same with AI, if AI becomes an arms race

01:59

then this is extremely bad news for all the humans

02:06

and it’s easy for say, people in the US,

02:09

to say we are the good guys in this race,

02:12

you should be cheering for us

02:14

but this is becoming more and more difficult

02:17

in a situation when the motto of the day is, America first.

02:22

I mean, how can we trust the USA

02:25

to be the leader in AI technology

02:28

if ultimately it will serve only American interests

02:31

and American economic and political domination.

02:34

So it’s really, I think most people

02:37

when they think arms race in AI,

02:40

they think USA versus China

02:42

but there are almost 200 other countries in the world

02:47

and most of them are far, far behind

02:51

and when they look at what is happening

02:53

they are increasingly terrified and for a very good reason.

02:59

The historical example you’ve given is a little unsettling.

03:02

If I heard your answer correctly,

03:03

it’s that we need global cooperation

03:06

and if we don’t we’re gonna lead to an arms race.

03:07

In the actual nuclear arms race

03:09

we tried for global cooperation from,

03:11

I don’t know, roughly 1945 to 1950

03:13

and then we gave up and then we said

03:15

we’re going full-throttle in the United States

03:17

and then why did the Cold War end the way it did?

03:19

Who knows, but one argument would be that the United States,

03:22

you know, built up, and its relentless buildup

03:24

of nuclear weapons helped to keep the peace

03:27

until the Soviet Union collapsed.

03:29

So, if that is the parallel, then what might happen here

03:32

is we’ll try for global cooperation in 2019,

03:33

2020, 2021, and then we’ll be off in an arms race.

03:37

A, is that likely and, B if it is,

03:40

would you say, well then the US,

03:42

it needs to really move full-throttle in AI

03:44

because it would be better for the liberal democracies

03:46

to have artificial intelligence than totalitarian states.

03:49

Well, I’m afraid it is very likely

03:51

that cooperation will break down

03:54

and we will find ourselves in an extreme version

03:58

of an arms race and in a way,

04:05

it’s worse than the nuclear arms race

04:08

because with nukes, at least until today,

04:11

countries develop them but never use them.

04:14

AI will be used all the time.

04:17

It’s not something you have on the shelf

04:20

for some doomsday war.

04:22

It will be used all the time to create

04:27

potentially, total surveillance regimes

04:30

in extreme totalitarian systems,

04:32

in one way or the other.

04:37

From this perspective, I think the danger is far greater.

04:42

You could say that the nuclear arms race

04:45

actually saved democracy, and the free market,

04:50

and you know, rock and roll,

04:54

and Woodstock, and then the hippies.

04:56

They all owe a huge debt to nuclear weapons [smirking]

05:00

because if nuclear weapons weren’t invented,

05:06

there would have been a conventional arms race

05:08

and conventional military buildup

05:10

between the Soviet bloc and the American bloc

05:14

and that would have meant total mobilization of society.

05:18

If the Soviets are having total mobilization

05:21

the only way the Americans can compete is to do the same.

05:24

Now, what actually happened

05:26

was that you had an extremely totalitarian, mobilized society

05:31

in the communist bloc but thanks to nuclear weapons

05:34

you didn’t have to do it in the United States,

05:38

or in West Germany, or in France

05:40

because you relied on nukes.

05:42

You don’t need millions of conscripts in the army

05:46

and with AI it’s going to be just the opposite

05:50

that the technology will not only be developed,

05:54

it will be used all the time

05:57

and that’s a very scary scenario.

06:00

Wait, can I just add one thing?

06:02

I don’t know history like you do

06:05

but you said AI is different from nuclear technology.

06:09

I do want to point out, it is very different

06:13

because at the same time as you are talking

06:17

about these scarier situations,

06:20

this technology has a wide

06:22

international scientific collaboration basis

06:26

that is being used to make transportation better,

06:30

to improve healthcare, to improve education, and

06:37

so it’s a very interesting, new time

06:39

that we haven’t seen before because while we have this,

06:43

kind of, competition we also have

06:44

massive international scientific community collaboration

06:49

on these benevolent uses

06:51

and democratization of this technology.

06:53

I just think it’s important to see both sides of this.

06:57

You’re absolutely right,

06:59

as I said, there are also enormous benefits

07:01

to this technology.

07:03

And in a global collaborative way,

07:05

especially among the scientists.

07:08

The global aspect is more complicated

07:11

because the question is, what happens

07:13

if there is a huge gap in abilities

07:16

between some countries and most of the world?

07:19

Would we have a re-run of the 19th century

07:22

Industrial Revolution, when the few industrial powers

07:27

conquered, and dominated, and exploited the entire world,

07:30

both economically and politically?

07:32

What’s to prevent that from repeating?

07:35

So, even in terms of, you know,

07:38

without this scary war scenario

07:42

we might still find ourselves

07:44

with a global exploitation regime

07:48

in which the benefits, most of the benefits,

07:51

go to a small number of countries

07:53

at the expense of everybody else.

07:56

Have you heard of arXiv.org?

07:59

arXiv.org? [light laughs]

08:01

So, students in the audience might laugh at this

08:04

but we are in a very different scientific research climate

08:08

in that the kind of globalization of technology

08:12

and technique happens in a way

08:14

that the 19th century, even the 20th century, never saw before.

08:19

Any paper that is a basic science research paper

08:23

in AI today, or any technique that is produced,

08:29

let’s say, this week at Stanford,

08:31

easily gets globally distributed

08:34

through this thing called arXiv, or GitHub, or other repositories.

08:38

The information is out there, yeah.

08:40

Globalization of this scientific technology

08:45

travels in a very different way

08:47

from the 19th and 20th centuries.

08:49

I mean, I don’t doubt there is,

08:51

you know, confined development of this technology,

08:55

maybe by regimes but we do have to recognize

08:58

that globally, the differences are pretty sharp now

09:05

and we might need to take that into consideration

09:07

that the scenario you’re describing is harder.

09:10

I’m not saying impossible, but harder to happen.

09:13

So, you think that the way–

09:14

I would just say that it’s not just the scientific papers.

09:18

Yes, the scientific paper’s out there

09:20

but if I live in Yemen, or in Nicaragua,

09:24

or in Indonesia, or in Gaza,

09:27

yes I can connect to the internet and download the paper.

09:30

What will I do with that?

09:32

I don’t have the data.

09:33

I don’t have the infrastructure.

09:35

I mean, you look at

09:36

where the big corporations are coming from

09:39

that hold all the data of the world,

09:42

they are basically coming from just two places.

09:45

I mean even Europe is not really in the competition.

09:48

There is no European Google,

09:49

or European Amazon, or European Baidu,

09:51

or European Tencent and if you look beyond Europe,

09:54

you think about Central America,

09:56

you think about most of Africa,

09:58

the Middle East, much of Southeast Asia,

10:01

yes, the basic scientific knowledge is out there

10:07

but this is just one of the components

10:10

that go to creating something that can compete

10:15

with Amazon or with Tencent or with the abilities

10:19

of governments like the US government

10:22

or like the Chinese government.

10:24

So, I agree that the dissemination of information

10:27

and basic scientific knowledge,

10:29

we’re at a completely different place

10:31

than in the 19th century.

10:32

Let me ask you about that

10:33

’cause it’s something three or four people

10:34

have asked in the questions which is,

10:37

it seems like there could be a centralizing force

10:39

of artificial intelligence, that it will make

10:41

whoever has the data and the best compute,

10:43

more powerful and that it could then accentuate

10:45

income inequality both within countries

10:47

and within the world, right?

10:48

You can imagine the countries you’ve just mentioned:

10:49

The United States, China, Europe lagging behind,

10:52

Canada somewhere behind, way ahead of Central America.

10:54

It could accentuate global income inequality.

10:57

A, do you think that’s likely

10:58

and B, how much does it worry you?

10:59

We have about four people who’ve asked a variation on that.

11:02

As I said, it’s very, very likely.

11:04

It’s already happening and it’s extremely dangerous

11:10

because the economic and political consequences

11:13

could be catastrophic.

11:14

We are talking about the potential collapse

11:16

of entire economies and countries.

11:19

Countries that depend say, on cheap manual labor

11:22

and they just don’t have the educational capital

11:27

to compete in a world of AI,

11:30

so what are these countries going to do?

11:33

I mean if, say you shift back

11:36

most production from say, Honduras or Bangladesh,

11:40

to the USA or to Germany because

11:43

the human salaries are no longer part of the equation

11:46

and it’s cheaper to produce the shirt in California

11:50

than in Honduras, so what will the people there do?

11:53

And you can say, okay but there will be many more jobs

11:56

for software engineers but we are not teaching

12:00

the kids in Honduras to be software engineers so,

12:03

maybe a few of them could somehow immigrate to the US

12:08

but most of them won’t and what will they do?

12:12

And at present, we don’t have the economic answers

12:18

and the political answers to these questions.

12:21

Fei-Fei, you wanna jump in here?

12:22

I think that’s fair enough.

12:23

I think Yuval definitely has laid out

12:26

some of the critical pitfalls of this

12:29

and that’s why we need more people to be studying

12:33

and thinking about this.

12:34

One of the things we over and over noticed,

12:37

even in this process of building a community

12:40

of human-centered AI and also talking to people,

12:43

both internally and externally,

12:46

is that there are opportunities

12:49

for business around the world

12:51

and governments around the world

12:53

to think about their data and AI strategy.

12:59

There are still many opportunities

13:02

for, you know, outside of the big players

13:07

in terms of companies and countries,

13:10

to really come to the realization

13:13

that it’s an important moment for their country,

13:16

for their region, for their business,

13:19

to transform into this digital age

13:22

and I think when you talk about these potential dangers

13:29

and lack of data in parts of the world

13:32

that haven’t really caught up

13:34

with this digital transformation,

13:36

the moment is now and we hope to,

13:39

you know, raise that kind of awareness

13:41

and then encourage that kind of transformation.

13:44

Yeah, I think it’s very urgent.

13:46

I mean, what we are seeing at the moment

13:49

is on the one hand, what you could call

13:51

some kind of data colonization,

13:54

that the same model that we saw in the 19th century

13:56

that you have the Imperial hub

13:58

where they have the advanced technology,

14:01

they grow the cotton in India or Egypt,

14:04

they send the raw materials to Britain,

14:07

they produce the shirts,

14:09

the high-tech industry of the 19th century in Manchester,

14:12

and they send the shirts back, to sell them in India

14:16

and out-compete the local producers.

14:18

And in a way, we might be beginning to see the same thing now,

14:22

with the data economy, that they harvest the data

14:26

in places also like Brazil and Indonesia

14:29

but they don’t process the data there.

14:30

The data from Brazil and Indonesia

14:33

goes to California or goes to Eastern China,

14:37

it’s processed there, and later they produce

14:40

the wonderful new gadgets and technologies,

14:42

and sell them back as finished products

14:46

to the provinces or to the colonies.

14:50

Now, it’s not a one-to-one,

14:52

it’s not the same, there are differences

14:54

but I think we need to keep this analogy in mind

14:58

and another thing that maybe we need to keep in mind

15:02

in this respect, I think, is the re-emergence of stone walls

15:09

that I’m kind of, you know…

15:12

Originally my specialty was medieval military history.

15:16

This is how I began my academic career

15:18

with the Crusades and castles and knights

15:21

and so forth and now I’m doing all these cyborgs

15:25

and AI stuff but suddenly there is something

15:30

that I know from back then, the walls are coming back.

15:34

And I try to figure out, kind of, what’s happening here?

15:37

I mean, we have virtual realities, we have 3G, AI,

15:41

and suddenly the hottest political issue

15:44

is building a stone wall.

15:47

Like, the most low-tech thing you can imagine [applause]

15:52

and what is the significance of a stone wall

15:58

in a world of interconnectivity and all that?

16:04

And it really frightens me that

16:06

there is something very sinister there,

16:08

the combination of data is flowing around everywhere

16:12

so easily, but more and more countries are building walls,

16:15

and also my home country of Israel, it’s the same thing.

16:17

You have the, you know, the startup nation

16:20

and then the wall and what does it mean, this combination?

16:26

Fei-Fei, you wanna answer that?

16:28

[audience and panel laughing]

16:29

Maybe you can look at the next question.

16:34

You know what, let’s go to the next question

16:36

which is tied to that and the next question is,

16:39

you have the people there at Stanford

16:42

who will help build these companies,

16:43

who will either be furthering the process

16:45

of data colonization or reversing it,

16:46

or who will be building you know,

16:48

the efforts to create a virtual wall.

16:52

So much of the world based on artificial intelligence

16:53

is being created, or funded at least,

16:55

by Stanford graduates so,

16:57

you have all these students here, in the room,

17:00

how do you want them to be thinking

17:02

about artificial intelligence

17:03

and what do you want them to learn?

17:04

Let’s spend the last 10 minutes of this conversation

17:07

talking about what everybody here should be doing.

17:09

So, if you’re a computer science or engineering student, take Rob’s class.

17:15

If you’re a humanist, take my class.

17:19

And all of you read Yuval’s books.

17:21

Are his books on your syllabus?

17:24

Not on mine, sorry.

17:28

I teach hard-core, deep learning.

17:30

His book doesn’t have equations.

17:33

I don’t know, B plus C plus D equals H.

17:37

But seriously, you know what I meant to say

17:41

is that Stanford students, you have a great opportunity.

17:48

We have a proud history of bringing this technology to life.

17:54

Stanford was at the forefront of the birth of AI,

17:56

in fact our very own Professor John McCarthy

18:00

coined the term artificial intelligence

18:02

and came to Stanford in 1963 and started

18:07

one of the two oldest AI labs in this country

18:11

and since then, Stanford’s AI research

18:14

has been at the forefront of every wave of AI changes

18:19

and in 2019, we’re also at the forefront

18:23

of starting the human-centered AI revolution

18:27

or writing the new AI chapter

18:32

and we did all this for the past 60 years, for you guys.

18:38

For the people who come through the door

18:40

and who will graduate and become practitioners,

18:43

leaders, and part of the civil society,

18:48

and that’s really what the bottom line is about.

18:51

Human-centered AI needs to be written

18:54

by the next generation of technologists

18:57

who have taken classes like Rob’s class,

19:01

to think about the ethical implications,

19:04

the human well-being, and it’s also gonna be written

19:08

by those potential future policymakers

19:11

who came out of Stanford’s humanities studies

19:15

and Business School, who are versed

19:18

in the details of the technology,

19:20

who understand the implications of this technology,

19:23

and who have the capability to communicate

19:26

with the technologists.

19:28

No matter how we agree and disagree,

19:32

that’s the bottom line, is that we need

19:35

these kinds of multilingual leaders

19:39

and thinkers and practitioners and that is

19:43

what Stanford’s Human-Centered AI Institute is about.

19:47

Yuval, how do you wanna answer that question?

19:49

Well, on the individual level,

19:51

I think it’s important for every individual,

19:54

whether in Stanford, whether an engineer or not,

19:57

to get to know yourself better

20:00

because you are now in a competition.

20:01

You know, it’s the oldest advice in the book,

20:05

in philosophy, is know yourself.

20:07

We’ve heard it from Socrates,

20:09

from Confucius, from Buddha, get to know yourself.

20:12

But there is a difference,

20:13

which is that now, you have competition.

20:16

In the day of Socrates or Buddha,

20:19

if you didn’t make the effort, so okay,

20:21

so you missed out on enlightenment, but

20:26

still the king wasn’t competing with you.

20:31

They didn’t have the technology.

20:32

Now you have competition, you’re competing

20:35

against these giant corporations and governments.

20:38

If they get to know you better than you know yourself…

20:42

So you need to buy yourself some time

20:45

and the first way to buy yourself some time

20:47

is to get to know yourself better

20:50

and then they have more ground to cover.

20:53

For engineers and students I would say,

20:55

I’ll focus on engineers maybe,

20:58

the two things that I would like

21:00

to see coming out from the laboratories

21:04

and the engineering departments is first,

21:07

tools that inherently work better

21:10

in a decentralized system than in a centralized system.

21:15

I don’t know how to do it but if you…

21:18

I hope this is something that engineers can work on.

21:22

I heard that blockchain is like the big promise,

21:24

in that area, I don’t know.

21:26

But whatever it is, when you start designing a tool,

21:33

part of the specification of what this tool should be like,

21:37

I would say, this tool should work better

21:41

in a decentralized system than in a centralized system.

21:45

That’s the best defense of democracy.

21:49

The second thing that I would like to see coming out–

21:53

I don’t want to cut you off

21:54

’cause I want you to get to this second thing,

21:54

how do you make a tool work better in a democracy than–

21:57

I’m not an engineer, I don’t know. [laughter]

22:04

All right, well then go to part two.

22:05

Take that, someone in this room, figure that out

22:07

’cause it’s very important, whatever it means.

22:09

I can think about it and then…

22:11

I can give you historical examples

22:13

of tools that work better in this way

22:15

or in that way but I don’t know how to translate it

22:19

into present-day technological terms.

22:21

Go to part two ’cause I got a few more questions

22:22

to ask from the audience.

22:23

Okay so, the other thing that I would like to see coming

22:27

is an AI sidekick that serves me

22:31

and not some corporation or government.

22:35

We can’t stop the progress of this kind of technology

22:39

but I would like to see it serving me.

22:42

So yes, it can hack me but it hacks me

22:45

in order to protect me.

22:47

Like, my computer has an anti-virus

22:49

but my brain doesn’t; it has a biological antivirus

22:53

against the flu or whatever

22:55

but not against hackers and fraud and so forth.

22:58

So, one project to work on is to create an AI sidekick

23:03

which I paid for, maybe a lot of money,

23:06

and it belongs to me, and it follows me,

23:09

and it monitors me, and what I do,

23:11

and my interactions, but everything it learns,

23:14

it learns in order to protect me from manipulation

23:18

by other AIs, by other outside influencers.

23:23

This is something that I think,

23:26

with the present day technology,

23:28

I would like to see more effort in that direction.
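(A very loose sketch of the sidekick idea, not a real product: a locally run agent that screens incoming content on its owner’s behalf and keeps everything on the owner’s device. The score_manipulation function and the threshold are placeholders the reader would supply; every name here is hypothetical.)

```python
# Loose sketch of a user-owned "sidekick": everything runs and stays locally.
# score_manipulation() is a placeholder for an on-device model (assumption).
from dataclasses import dataclass

def score_manipulation(text: str) -> float:
    """Placeholder (assumption): 0..1 likelihood that this content is trying
    to manipulate the owner, computed by a locally hosted model."""
    raise NotImplementedError

@dataclass
class Sidekick:
    owner: str
    threshold: float = 0.8                 # how cautious the sidekick should be

    def review(self, message: str) -> str:
        risk = score_manipulation(message)
        if risk >= self.threshold:
            return (f"[sidekick] flagged for {self.owner}: "
                    f"likely manipulation (score {risk:.2f})")
        return message                     # benign content passes straight through
# The design choice that matters is ownership: the model learns about its owner
# only to act in the owner's interest, never to report elsewhere.
```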

23:32

Not to get into overly technical terms,

23:34

I think you would feel comforted to know that

23:38

budding efforts in this kind of research are happening,

23:43

you know, trustworthy AI, explainable AI,

23:48

and security-motivated AI,

23:52

so I’m not saying we have the solution

23:54

but a lot of technologists around the world

23:56

are thinking along that line

23:58

and trying to make that happen.

24:01

It’s not that I want an AI that belongs to Google

24:05

or to the government, that I can trust,

24:07

I want an AI whose master I am, that’s serving me,

24:12

And it’s powerful, it’s more powerful than my AI

24:14

because otherwise my AI could manipulate your AI.

24:16

[audience and panel laughter]

24:19

It will have the inherent advantage

24:21

of knowing me very well, so it might not be able to hack you

24:27

but because it follows me around

24:29

and it has access to everything I do and so forth,

24:31

that gives it an edge in the specific realm of just me.

24:36

So, this is a kind of counterbalance

24:39

to the danger that the people–

24:40

But even that would have a lot of challenges.

24:43

Who is accountable, are you accountable

24:47

for your actions, or is your sidekick?

24:50

Oh, good question. This is going to be

24:51

a more and more difficult question

24:52

that we will have to deal with.

24:54

The sidekick defense. [light laughter]

24:57

All right, Fei-Fei,

24:58

let’s go through a couple questions quickly.

25:00

This is from Regan Pollock:

25:02

We often talk about top-down AI from the big companies,

25:04

how should we design personal AI

25:06

to help accelerate our lives and careers?

25:08

The way I interpret that question is

25:11

so much of AI is being done at the big companies.

25:13

If you want to have AI at a small company

25:15

or personally, can you do that?

25:17

So, well first of all, one solution

25:19

is what Yuval just said [laughing]

25:20

But probably, those things will be built by Facebook.

25:25

So, first of all, it’s true

25:27

there’s a lot of investment, effort,

25:30

and resources being put by big companies into AI research

25:35

and development but it’s not that

25:37

all the AI is happening there.

25:39

I want to say that academia continues to play a huge role

25:43

in AI’s research and development,

25:48

especially in the long-term exploration of AI

25:56

and what is academia?

25:57

Academia is a worldwide network

26:00

of individual students and professors

26:04

thinking very independently and creatively

26:06

about different ideas.

26:08

So, from that point of view,

26:09

it’s a very grassroots kind of effort in AI research

26:13

that continues to happen and small businesses

26:17

and independent research institutes,

26:21

also have a role to play, right?

26:23

There are a lot of publicly available data sets,

26:26

it’s a global community that is very open about sharing

26:30

and disseminating knowledge and technology,

26:33

so yes, please, by all means,

26:35

we want global participation in this.

26:37

All right here’s my favorite question.

26:38

This is from anonymous, unfortunately.

26:40

If I am in eighth grade, do I still need to study?

26:44

[loud laughter and applause]

26:50

As a mom, I will tell you yes.

26:54

Go back to your homework.

26:57

All right Fei-Fei, what do you want

26:58

Yuval’s next book to be about?

27:01

Wow, I didn’t know this, I need to think about that.

27:05

All right well, while you think about that,

27:07

Yuval, what area of machine learning

27:09

do you want Fei-Fei to pursue next?

27:11

The sidekick project. [laughing]

27:14

Yeah, I mean, just what I said, an AI,

27:18

can we create a kind of AI which can serve individual people

27:23

and not some kind of big network?

27:27

I mean, is that even possible

27:29

or is there something about the nature of AI

27:32

which inevitably will always lead back

27:34

to some kind of network effect

27:37

and winner-takes-all and so forth?

27:39

All right, we’re gonna wrap with Fei-Fei,

27:40

Okay, his next book is gonna be a science fiction book

27:44

between you and your sidekick. [all laughing]

27:48

All right, one last question for Yuval

27:50

’cause two of the top-voted questions are this:

27:52

without the belief in free will,

27:53

what gets you up in the morning?

27:58

Without the belief in free will…

28:02

I don’t think that the question of free will, I mean, is very

28:05

interesting, or very central.

28:06

It has been central in Western civilization

28:09

because of some kind of basically,

28:10

theological mistake made thousands of years ago [laughing]

28:14

but really it’s a misunderstanding of the human condition.

28:19

The real question is,

28:21

how do you liberate yourself from suffering?

28:24

And one of the most important steps in that direction

28:28

is to get to know yourself better

28:31

and for that, you need to just push aside

28:35

this whole, I mean, for me the biggest problem

28:38

with the belief in free will is that

28:41

it makes people incurious about themselves

28:44

and about what is really happening inside themselves

28:46

because they basically say, I know everything

28:49

I know why I make decisions, this is my free will.

28:53

And they identify with whatever thought

28:56

or emotion pops up in their mind

28:57

because hey, this is my free will

29:00

and this makes them very incurious

29:02

about what is really happening inside

29:04

and what is also the deep sources

29:06

of the misery in their lives.

29:10

And so, this is what makes me wake up in the morning

29:15

to try and understand myself better,

29:19

to try and understand the human condition better,

29:22

and free will is, it’s just irrelevant for that.

29:25

And if we lose it, your sidekick can get you up

29:27

in the morning. [light laughter]

29:29

Fei-Fei, 75 minutes ago

29:31

you said we weren’t gonna reach any conclusions.

29:32

Do you think we got somewhere?

29:35

Well, we opened a dialogue between the humanists

29:37

and the technologists and I want to see more of that.

29:41

Great, all right, thank you so much.

29:43

Thank you Fei-Fei, thank you Yuval Noah Harari.

29:44

It was wonderful to be here, thank you to the audience.
