Can AI Become a Dystopian Threat to Humanity? A Hardware Perspective

Igor Markov, Google and The University of Michigan

The views expressed are my own and do not represent my employers.

Presented by Igor Markov (Software Engineer, Google) at MLconf SEA, 5/20/16



Threats to humanity? 

Why are we so obsessed? 

How did humanity survive?
• by being smart
• by knowing the adversary
• by controlling physical resources
• by using the physical world to advantage

Now, back to the dystopian AI myth
• AI may become smarter than us
• Possibly malicious
• The physical embodiments are unclear

The Black Death killed 50M people in the 14th century

Intelligence, hostile or friendly, is limited by physical resources

Computing machinery is designed using an abstraction hierarchy
• From transistors to CPUs to data centers
• Each level has a well-defined function

Introduce hard boundaries between different levels of intelligence and trust
• Can toasters and doorknobs be trusted?
• Who can use weapons?
• Each agent should have a key weakness
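The hard-boundaries idea above can be sketched as a capability check: an agent may only invoke an actuator whose required trust tier it meets. The `Agent`, `Actuator`, and tier names below are hypothetical illustrations, not anything proposed in the talk.

```python
# Hypothetical sketch of a hard trust boundary between agents and
# physical actuators. Names and tiers are illustrative assumptions.
from dataclasses import dataclass

# Trust tiers: higher number = more trusted. A toaster never reaches
# the tier required to operate a weapon system.
TIERS = {"appliance": 0, "assistant": 1, "operator": 2}

@dataclass(frozen=True)
class Agent:
    name: str
    tier: int  # trust level granted to this agent

@dataclass(frozen=True)
class Actuator:
    name: str
    required_tier: int  # minimum trust level needed to use it

def may_use(agent: Agent, actuator: Actuator) -> bool:
    """Hard boundary: an agent below the required tier is refused."""
    return agent.tier >= actuator.required_tier

toaster = Agent("toaster", TIERS["appliance"])
operator = Agent("human_operator", TIERS["operator"])
weapon = Actuator("weapon_system", required_tier=TIERS["operator"])

print(may_use(toaster, weapon))   # False: appliances stay untrusted
print(may_use(operator, weapon))  # True
```

The boundary is "hard" in the sense that the check depends only on statically assigned tiers, not on anything the agent can negotiate at run time.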

Limit self-replication, self-repair, and self-improvement

Limit AI's access to energy
• Firmly control the electric grid
• No long-lasting batteries, fuel cells, or reactors

Tame potential threats and use them for protection

Constraints on AI to intercept dystopian threats

1. Hard boundaries between levels of intelligence and trust
2. Limits on self-replication, self-repair, and self-improvement
3. Limits on access to energy
4. Physical and network security of critical infrastructure
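The four constraints above can be read as an auditable policy: a deployment either satisfies each limit or is flagged. The `Deployment` fields and numeric limits below are hypothetical placeholders chosen only to make the checklist concrete.

```python
# Hypothetical sketch: audit an AI deployment against the four
# constraints listed above. Field names and limits are assumptions.
from dataclasses import dataclass

@dataclass
class Deployment:
    trust_tier: int           # constraint 1: bounded trust level
    can_self_replicate: bool  # constraint 2
    can_self_modify: bool     # constraint 2
    energy_budget_kwh: float  # constraint 3: metered energy allowance
    network_isolated: bool    # constraint 4: off critical infrastructure

MAX_TRUST_TIER = 1     # illustrative cap
MAX_ENERGY_KWH = 100.0  # illustrative cap

def violations(d: Deployment) -> list[str]:
    """Return the constraints this deployment violates, if any."""
    out = []
    if d.trust_tier > MAX_TRUST_TIER:
        out.append("trust boundary exceeded")
    if d.can_self_replicate or d.can_self_modify:
        out.append("self-replication/self-improvement not limited")
    if d.energy_budget_kwh > MAX_ENERGY_KWH:
        out.append("energy budget too large")
    if not d.network_isolated:
        out.append("critical infrastructure not isolated")
    return out

safe = Deployment(1, False, False, 50.0, True)
risky = Deployment(2, True, False, 500.0, False)
print(violations(safe))   # []
print(violations(risky))
```

A real enforcement regime would rest on hardware and physical controls, as the talk argues; a software checklist like this only makes the constraints explicit.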
