
Artificial Intelligence and Computing Science October 19, 2012


AI in the Seventies and Eighties

A belated realization: in most situations, intelligent behaviour requires knowledge

How many things do you know?

``I can’t think of a hundred things that I know a thousand things about’’

(Marvin Minsky, `Semantic Information Processing’, 1968)

AI in the Seventies

Winograd, SHRDLU

Expert Systems

Minsky, `Frames’

Rumelhart and McClelland, `Parallel Distributed Processing’ (return of the Perceptron)

SHRDLU’s World

Sample Dialogue

Winograd: What’s on the green block?

Shrdlu: A pyramid

Winograd: Put the red block on the green block

Shrdlu: OK

Winograd: Why did you move the purple pyramid?

Shrdlu: To clear off the green block

Winograd: Why did you put the red block on the green block?

Shrdlu: You told me to

Winograd: Pick up a big red block

Shrdlu: OK

Winograd: Grasp the pyramid

Shrdlu: I don’t understand which pyramid you mean

Winograd: Find a block which is taller than the one you are holding and put it into the box

Shrdlu: By `it’, I assume you mean the block which is taller than the one I am holding.

OK.

Winograd: What does the box contain?

Shrdlu: The blue pyramid and the blue block

Winograd: What is the pyramid supported by?

Shrdlu: The box

Winograd: Can the table pick up blocks?

Shrdlu: No

Things SHRDLU doesn’t know:

`red’ and `green’ are colours

SHRDLU’s world is flat

A table has legs but no arms

SHRDLU is a computer program

Expert Systems (rule-based)

Knowledge can be represented by a number of `if…then’ rules plus an inference engine.

E.g., ``IF temperature is high AND rash is present, THEN patient has measles.’’

We can extract the rules from human experts via interviews. This process is known as `knowledge engineering’:

`If an animal has fur, it is a mammal’
`If an animal has feathers, it is a bird’
`If an animal is a bird, it can fly’
`If an animal has scales, it is a fish’
`If an animal is a fish, it can swim’
`If an animal lays eggs and has fur, it is a duck-billed platypus’

This gives us a set of rules that an inference engine (or `expert system shell’) can reason about.

Two popular modes of reasoning are forward chaining and backward chaining:

`If an animal has fur, it is a mammal’
`If an animal has feathers, it is a bird’
`If an animal is a bird, it can fly’
`If an animal is a bird, it lays eggs’
`If an animal has scales, it is a fish’
`If an animal is a fish, it can swim’
`If an animal lays eggs and has fur, it is a duck-billed platypus’

Forward chaining:

Given a new fact (`Tweety has feathers’), search for all matching conditionals, draw all possible conclusions, and add them to the knowledge base:

:- Tweety is a bird
:- Tweety can fly
:- Tweety lays eggs

Potential problem: we run into the combinatorial explosion again
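To make the mechanism concrete, here is a minimal forward-chaining sketch in Python. The rule encoding and names are my own illustration, not the syntax of any particular expert-system shell:

# Each rule pairs a set of conditions with a conclusion.
RULES = [
    ({"has feathers"}, "is a bird"),
    ({"is a bird"}, "can fly"),
    ({"is a bird"}, "lays eggs"),
    ({"has fur"}, "is a mammal"),
    ({"has scales"}, "is a fish"),
    ({"is a fish"}, "can swim"),
    ({"lays eggs", "has fur"}, "is a duck-billed platypus"),
]

def forward_chain(facts):
    # Fire every rule whose conditions hold; repeat until no new facts appear.
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in RULES:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

print(forward_chain({"has feathers"}))
# {'has feathers', 'is a bird', 'can fly', 'lays eggs'}

Note that every derivable conclusion is added whether or not anyone asked for it; with a large rule base, this is exactly the combinatorial explosion mentioned above.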

`If an animal has fur, it is a mammal’
`If an animal has feathers, it is a bird’
`If an animal is a bird, it can fly’
`If an animal is a bird, it lays eggs’
`If an animal has scales, it is a fish’
`If an animal is a fish, it can swim’
`If an animal lays eggs and has fur, it is a duck-billed platypus’
`Tweety has feathers’

Backward chaining:

Given a query (`Does Tweety lay eggs?’), search for all matching consequents and see if the database satisfies the conditionals:


`Does Tweety lay eggs?’
`Is Tweety a bird?’
`Does Tweety have feathers?’


This method is used by Prolog, for example

Conclusion: Yes, Tweety does lay eggs

`If an animal has feathers, it is a bird’
`If an animal is a bird, it can fly’
`If an animal is a bird, it lays eggs’
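A matching backward-chaining sketch, reusing RULES from the forward-chaining sketch above (again my own illustration: real Prolog does full unification and handles cyclic rule sets, which this toy omits):

def backward_chain(goal, facts, rules=RULES):
    # Prove a goal directly from the facts, or find a rule whose consequent
    # matches the goal and recursively prove all of its conditions.
    if goal in facts:
        return True
    return any(conclusion == goal
               and all(backward_chain(c, facts, rules) for c in conditions)
               for conditions, conclusion in rules)

print(backward_chain("lays eggs", {"has feathers"}))
# True, via: lays eggs <- is a bird <- has feathers

Unlike forward chaining, only the rules relevant to the query are ever touched.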

Potential problem: A lot of rules have exceptions.

Frames

(Marvin Minsky, 1974)

A frame allows us to fill in default knowledge about a situation from a partial description. For example,

``Sam was hungry. He went into a McDonald’s and ordered a hamburger. Later he went to a movie.’’

Did Sam eat the hamburger?

So we can economically represent knowledge by defining properties at the most general level, then letting specific cases inherit those properties…

Event → Transaction → Buying something → Buying a hamburger
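A hedged Python sketch of that inheritance idea (the slot names and defaults are invented for illustration; Minsky’s frames carry much more machinery):

class Frame:
    def __init__(self, name, parent=None, **slots):
        self.name, self.parent, self.slots = name, parent, slots

    def get(self, slot):
        # Use the specific value if present, else inherit the default
        # from the more general frame.
        if slot in self.slots:
            return self.slots[slot]
        return self.parent.get(slot) if self.parent else None

event       = Frame("Event", time="sometime", place="somewhere")
transaction = Frame("Transaction", event, participants=("buyer", "seller"))
buying      = Frame("BuyingSomething", transaction, payment="money")
burger      = Frame("BuyingAHamburger", buying, item="hamburger", place="McDonald's")

print(burger.get("payment"))   # 'money': inherited from the general frame
print(burger.get("place"))     # "McDonald's": overrides the Event default

So unless the story says otherwise, we default to assuming Sam paid, ate, and left.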

Return of the perceptron (now called a `neural net’)

Changes since 1969:

Hidden layers

Non-linear activation function

Back-propagation allows learning

Rumelhart and McClelland, `Parallel Distributed Processing’

Use neural nets to represent knowledge by the strengths of associations between different concepts, rather than as lists of facts, yielding programs that can learn from example.
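A minimal sketch of the three post-1969 ingredients listed above working together: a hidden layer, a non-linear (sigmoid) activation, and learning by back-propagation. The XOR task, layer sizes, and learning rate are illustrative choices of mine:

import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)   # XOR: beyond a single-layer perceptron

W1 = rng.normal(size=(2, 4)); b1 = np.zeros(4)    # input -> hidden
W2 = rng.normal(size=(4, 1)); b2 = np.zeros(1)    # hidden -> output

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for step in range(20000):
    h = sigmoid(X @ W1 + b1)                      # forward pass
    out = sigmoid(h @ W2 + b2)
    d_out = (out - y) * out * (1 - out)           # backward pass: push the error
    d_h = (d_out @ W2.T) * h * (1 - h)            # back through the hidden layer
    W2 -= 0.5 * (h.T @ d_out); b2 -= 0.5 * d_out.sum(axis=0)
    W1 -= 0.5 * (X.T @ d_h);   b1 -= 0.5 * d_h.sum(axis=0)

print(out.round(2))   # approaches [[0], [1], [1], [0]]

The `knowledge’ here lives in the learned weights W1 and W2, not in any list of facts.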

Conventional Computer Memory

Register One 01100110

Register Two 11100110

Register Three 00101101

. . . .

AI: 1979-2000

Douglas Lenat, `CYC’

Douglas Hofstadter, `Fluid Analogies’

Patrick Hayes, `Naïve Physics’

CYC’s data are written in CycL, which is a descendant of Frege’s predicate calculus (via Lisp).

For example,

(#$isa #$BarackObama #$UnitedStatesPresident)

or

(#$genls #$Mammal #$Animal)

The same language gives rules for deducing new knowledge:

(#$implies
  (#$and
    (#$isa ?OBJ ?SUBSET)
    (#$genls ?SUBSET ?SUPERSET))
  (#$isa ?OBJ ?SUPERSET))
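Rendered as a hedged Python sketch (my own gloss of the rule above, not CYC’s actual inference engine), the rule closes #$isa over #$genls; the UnitedStatesPresident-to-Person link is an invented example:

# isa: (object, collection) assertions; genls: (subset, superset) assertions.
isa = {("BarackObama", "UnitedStatesPresident")}
genls = {("UnitedStatesPresident", "Person"), ("Mammal", "Animal")}

def close_isa(isa, genls):
    # Apply the implication until no new isa assertions are produced.
    isa = set(isa)
    while True:
        new = {(obj, sup) for (obj, sub) in isa
                          for (s, sup) in genls if s == sub} - isa
        if not new:
            return isa
        isa |= new

print(close_isa(isa, genls))
# includes ('BarackObama', 'Person')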

Intangible Things are things that are not physical -- are not made of, or encoded in, matter. These include events, like going to work, eating dinner, or shopping online. They also include ideas, like those expressed in a book or on a website. Not the physical books themselves, but the ideas expressed in those books. It is useful for a software application to know that something is intangible, so that it can avoid commonsense errors; like, for example, asking a user the color of next Tuesday's meeting.

What Cycorp says CYC knows about `intangible things’.

Questions CYC couldn’t answer in 1994

What colour is the sky?

What shape is the Earth?

If it’s 20 km from Vancouver to Victoria, and 20 km from Victoria to Sydney, can Sydney be 400 km from Vancouver?

How old are you?

(Prof. Vaughan Pratt)

Hofstadter: Fluid Analogies

Human beings can understand similes, such as

``Mr Pickwick is like Christmas’’

Example:

Who is the Michelle Obama of Canada?

Michaëlle Jean, Governor-General

[Diagram: the analogy links `Head of government’ and `Head of State’ to their respective spouses in each country]

One of Hofstadter’s approaches to solving these problems is `Copycat’, a collection of independent competing agents.

If efg becomes efw, what does ghi become?

If aabc becomes aabd, what does ijkk become?

Inside Copycat:

ijkk
ij(kk) → ij(ll)
(ijk)k → (ijk)l
aabc:ijkk → aabd:hjkk or aabd:jjkk

If efg becomes efw, what does ghi become?

If aabc becomes aabd, what does ijkk become?

COPYCAT suggests whi and ghw

COPYCAT suggests ijll and ijkl and jjkk and hjkk

Hofstadter:

``What happens in the first 500 milliseconds?’’

Find the O

X X X X X X X X X X X

X X X X X X X X X X X

X X X X X X X X X X X

X X X X X X X X O X X

X X X X X X X X X X X

Find the X

X X X X X X X X X X X

X X X X X X X X X X X

X X X X X X X X X X X

X X X X X X X X X X X

X X X X X X X X X X X

Find the O

X X O X X X O X X O X

X X X X X O X X O X X

X X X O X X O X O X X

X X O X X O X X O X X

O X X X X X O X X O X

What the eye sees vs. what I see

The Cutaneous Rabbit

Naive model of perception:

World → Vision → Awareness

Better model of perception:

World → Vision and Knowledge → Awareness

We recognise all these as instances of the letter `A’.

No computer can do this.

Hofstadter’s program `Letter Spirit’ attempts to design a font.

Naïve Physics

Hayes, `Naïve Physics Manifesto’, 1978

``About an order of magnitude more work than any previous AI project…’’

Hayes, `Second Naïve Physics Manifesto’, 1985

``About two or three orders of magnitude more work than any previous AI project…’’

One sub-project of naïve physics:

Write down what an intelligent 10-year-old knows about fluids

Part of this is knowing how we talk about fluids:

For example:

Suppose Lake Chad dries up in the dry season and comes back in the wet season.

Is it the same lake when it comes back?

Suppose I buy a cup of coffee, drink it, then get a free refill.

Is it the same cup of coffee after the refill?

2011: IBM’s Watson Wins Jeopardy

Inside Watson: 4 Terabytes of disk storage: 200 million pages (including all of Wikipedia)

16 Terabytes of RAM

90 3.5-GHz eight-core processors

One of the components of Watson is a Google-like search algorithm.

For example, a typical Jeopardy question in the category `American Presidents’ might be

``The father of his country, he didn’t really chop down a cherry tree’’

Try typing `father country cherry tree’ into Google

The first hit is `George Washington – Wikipedia’

But Watson also needs to know how confident it should be in its answers.
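A toy illustration of that idea (entirely my own invention, not Watson’s actual scoring): turn raw candidate scores into normalized confidences and only buzz in when the best one clears a threshold.

def answer(candidates, threshold=0.5):
    # candidates: {answer: raw evidence score}. Normalize the scores and
    # answer only if the best candidate is strong enough.
    total = sum(candidates.values())
    best = max(candidates, key=candidates.get)
    confidence = candidates[best] / total
    return (best, confidence) if confidence >= threshold else (None, confidence)

print(answer({"George Washington": 9.0, "Abraham Lincoln": 2.0, "Cherry Tree": 1.0}))
# ('George Washington', 0.75)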

Conspicuous Failures, Invisible Successes

In 2012, we have nothing remotely comparable to 2001’s HAL.

On the other hand, some complex tasks, such as attaching a printer to a computer, have become trivially easy

A different approach: robot intelligence

Grey Walter’s machina speculatrix, 1948

BEAM robotics: Queen Ant, a light-seeking hexapod, 2009

AI Now: Robot Intelligence

Rodney Brooks, `Cambrian Intelligence’

-Complex behaviour can arise when a simple system interacts with a complex world.

-Intelligent behaviour does not require a symbolic representation of the world.

SPIRIT: Two years on Mars and still going.

Brooks’s approach invites us to reconsider our definition of intelligence:

…is it the quality that distinguishes Albert from Homer?

…or the quality that distinguishes Albert and Homer from a rock?

`Chess is the touchstone of intellect’

-- Goethe

…but perhaps we are most impressed by just those of our mental processes that move slowly enough for us to notice them…

Strong AI:

``We can build a machine that will have a mind.’’

Weak AI:

``We can build a machine that acts like it has a mind.’’

Strong AI (restatement):

``We can build a machine that, solely by virtue of its manipulation of formal symbols, will have a mind.’’

Hans Moravec:

``We will have humanlike competence in a $1,000 machine in about forty years.’’

-- 1998

Hubert Dreyfus:

``No computer can ever pass the Turing Test, or do any of the following things [long list, including `play master-level Chess’].’’

-- 1965; MacHack beat Dreyfus in 1967

…and if a program did pass the Turing test, what then?

John Searle:

``Even if a computer did pass the Turing test, it would not be intelligent, as we can see from the Chinese Room argument’’

John Bird:

``A computer is a deterministic system, and hence can have neither free will, responsibility, nor intelligence -- whether it passes the Turing test or not.’’

``This is an AND gate:

[Diagram: an AND gate with inputs A and B and output C]

A B C
0 0 0
0 1 0
1 0 0
1 1 1

Given A and B, does the computer have any choice about the value of C?

… but a computer is just a collection of AND gates and similar components. If none of these components can make a free choice, the computer cannot make a free choice.’’
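A two-line rendering of Bird’s point (my own illustration): the gate is a pure function, so the same inputs always force the same output.

def and_gate(a, b):
    # A pure function: the output is fully determined by the inputs.
    return a & b

for a in (0, 1):
    for b in (0, 1):
        print(a, b, and_gate(a, b))   # reproduces the truth table above, every time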

The Brazen Head