Transcript
Page 1: COMPUTATIONAL COGNITIVE SCIENCE


Page 2

Cognitive Revolution

• The development of the computer led to the rise of cognitive psychology and artificial intelligence

BINAC: the Binary Automatic Computer, developed in 1949

Page 3

Artificial Intelligence

• Constructing artificial computer-based systems that produce intelligent outcomes

• Examples
  – Game-playing programs
    • Deep Blue
  – Intelligent robots
    • Mars rovers
    • DARPA’s Urban Challenge
  – Netflix competition
  – Conversational agents

Page 4

Weak vs. Strong AI

• Weak AI — using AI as a tool to understand human cognition

• Strong AI — a properly programmed computer has a “mind” capable of understanding

Page 5

Turing Test

• Can artificial intelligence be as good as human intelligence? How can we test this?

• Turing test (1950) – designed to test whether humans can distinguish between humans and computers based on conversations

– A human interrogator could ask a respondent (either a computer or a human, whose identity was hidden) any question he or she wished, and based on the response, the interrogator had to decide whether the answer was given by the computer or by the human.

Alan Turing (1912-1954)

Page 6

A classic conversational agent

• The Turing Test inspired an early, satirical attempt to create a computerized Rogerian therapist, “Eliza”:

Eliza’s program is a set of “If…Then” rules:
– “IF person mentions ‘father’ THEN say ‘Who else in your family comes to mind when you think about this?’”
– “IF person writes ‘I remember X’ THEN say ‘Does it make you feel happy to recall X?’”

ELIZA has no intelligence itself. Intelligence comes from people interpreting its statements.
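The rule-based scheme described above can be sketched in a few lines. This is an illustrative toy, not Weizenbaum's original program; the triggers and canned responses are taken from the slide's examples, and the fallback reply is an assumption.

```python
# A minimal ELIZA-style sketch: each rule is an "IF trigger appears
# THEN say response" pair, checked in order. Rules are illustrative.
RULES = [
    ("father", "Who else in your family comes to mind when you think about this?"),
    ("i remember", "Does it make you feel happy to recall that?"),
]

def eliza_reply(text):
    lowered = text.lower()
    for trigger, response in RULES:
        if trigger in lowered:
            return response
    return "Please tell me more."  # assumed fallback when no rule fires

print(eliza_reply("I remember my childhood home"))
# → Does it make you feel happy to recall that?
```

Nothing here models meaning; the apparent intelligence comes entirely from the person reading the reply, which is the slide's point.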

Page 7

Example of modern conversational agents

[The following is a fragment where A = your instructor; B = a very good artificial agent]

A: are you human?
B: What do you think? I am a Supreme Ruler.

A: seriously
B: I am always serious.

A: who was Turing?
B: A brilliant and tragic figure in the history of computer science.

A: Do you think the Turing test is a good one?
B: Some of them can.

A: are you having a good day?
B: I am having fun.

A: How do you know you are having fun?
B: By introspection.

http://testing.turinghub.com/

Page 8

Future of AI

• Computer chip capacity and processing speed are increasing exponentially

• Some theorists (e.g. Ray Kurzweil) believe this will lead to a technological singularity along with dramatic improvements in AI

Page 9

Computational Modeling

• Most modeling in cognitive science targets natural intelligence

• Goal is to develop models that mimic some aspects of human cognitive functioning – e.g., produce the same errors as humans

Simulations of aspects of human behaviour

Page 10

Why do we need computational models?

• Makes vague verbal terms specific
– Provides the precision needed to specify complex theories

• Provides explanations

• Obtain quantitative predictions – just as meteorologists use computer models to predict tomorrow’s weather, the goal of modeling human behavior is to predict performance in novel settings

Page 11

Neural Networks

Page 12

Neural Networks

• Alternative to traditional information processing models

• Also known as:
– PDP (parallel distributed processing) approach
– Connectionist models

David Rumelhart Jay McClelland

Page 13

Neural Networks

• Neural networks are networks of simple processors that operate simultaneously

• Some biological plausibility

Page 14

Idealized neurons (units)

[Diagram: inputs → processor → output]

Abstract, simplified description of a neuron

Page 15

Neural Networks

• Units
– Activation = activity of a unit
– Weight = strength of the connection between two units

• Learning = changing the strength of connections between units

• Excitatory and inhibitory connections
– correspond to positive and negative weights, respectively

Page 16

An example calculation for a single (artificial) neuron

Diagram showing how the inputs from a number of units are combined to determine the overall input to unit-i.

Unit-i has a threshold of 1: if its net input exceeds 1, it responds with +1; if the net input is less than 1, it responds with –1

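The threshold rule described above can be sketched directly. This is a minimal illustration, not the slide's diagram: the input and weight values below are arbitrary, chosen only to show the calculation.

```python
# A simple threshold unit: net input is the weighted sum of the
# inputs; output is +1 if the net input exceeds the threshold of 1,
# otherwise -1 (matching the rule on the slide).
def unit_output(inputs, weights, threshold=1.0):
    net = sum(i * w for i, w in zip(inputs, weights))
    return 1 if net > threshold else -1

# Illustrative values (not the slide's exact numbers):
# net = 1*0.5 + (-1)*0.5 + 1*1.0 + 1*(-0.5) = 0.5, which is below 1
print(unit_output([1, -1, 1, 1], [0.5, 0.5, 1.0, -0.5]))  # → -1
```

Flipping the sign of one input, or of its weight, can push the net input across the threshold and flip the final output, which is the point of the questions on the next slide.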

Page 17

What would happen if we change the input J3 from +1 to –1?
a) output changes to –1
b) output stays at +1
c) do not know

What would happen if we change the input J4 from +1 to –1?
a) output changes to –1
b) output stays at +1
c) do not know


Page 18

If we want a positive correlation between the output and input J3, how should we change the weight for J3?

a) make it negative
b) make it positive
c) do not know


Page 19

Multi-layered Networks

• Activation flows from a layer of input units through a set of hidden units to output units

• Weights determine how input patterns are mapped to output patterns

[Diagram: input units → hidden units → output units]

Page 20

Multi-layered Networks

• Network can learn to associate output patterns with input patterns by adjusting weights

• Hidden units tend to develop internal representations of the input-output associations

• Backpropagation is a common weight-adjustment algorithm

[Diagram: input units → hidden units → output units]
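The flow of activation from input units through hidden units to output units can be sketched as a forward pass. This is a minimal illustration under assumed values: the sigmoid activation, layer sizes, and weights below are not from the slide, and the learning (backpropagation) step is omitted.

```python
import math

# Squashing function for unit activations (a common choice; assumed here).
def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# weights[j] holds the incoming weights for unit j of the next layer.
def layer(values, weights):
    return [sigmoid(sum(v * w for v, w in zip(values, ws))) for ws in weights]

# Activation flows input -> hidden -> output; the weights determine
# how input patterns are mapped to output patterns.
def forward(inputs, w_input_hidden, w_hidden_output):
    hidden = layer(inputs, w_input_hidden)
    return layer(hidden, w_hidden_output)

# 2 input units, 2 hidden units, 1 output unit (illustrative weights)
out = forward([1.0, 0.0],
              w_input_hidden=[[2.0, -1.0], [-1.0, 2.0]],
              w_hidden_output=[[1.5, -1.5]])
print(out)
```

Training with backpropagation would repeatedly compare `out` against a target pattern and nudge every weight to reduce the error; only the forward pass is shown here.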

Page 21

A classic neural network: NETtalk

[Diagram, after Hinton (1989): 7 groups of 29 input units (7 letters of text input, e.g. “_ a _ c a t _”) feed 80 hidden units, which feed 26 output units; a teacher supplies the target output (e.g. /k/) for the target letter]

The network learns to pronounce English words, i.e., it learns spelling-to-sound relationships. Listen to this audio demo.

Page 22

Different ways to represent information with neural networks: localist representation

Each unit represents just one item (“grandmother” cells)

            Unit 1  Unit 2  Unit 3  Unit 4  Unit 5  Unit 6
concept 1     1       0       0       0       0       0
concept 2     0       0       0       1       0       0
concept 3     0       1       0       0       0       0

(activations of units; 0 = off, 1 = on)

Page 23

Distributed Representations (aka Coarse Coding)

Each unit is involved in the representation of multiple items

            Unit 1  Unit 2  Unit 3  Unit 4  Unit 5  Unit 6
concept 1     1       1       1       0       0       0
concept 2     1       0       1       1       0       1
concept 3     0       1       0       1       0       1

(activations of units; 0 = off, 1 = on)

Page 24

Suppose we lost unit 6

            Unit 1  Unit 2  Unit 3  Unit 4  Unit 5  Unit 6
concept 1     1       1       1       0       0       0
concept 2     1       0       1       1       0       1
concept 3     0       1       0       1       0       1

(activations of units; 0 = off, 1 = on)

Can the three concepts still be discriminated?
a) NO
b) YES
c) do not know
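The question above can be checked mechanically: drop unit 6 from each pattern and test whether the truncated patterns are still pairwise distinct. The patterns are taken from the slide.

```python
# The three distributed patterns from the slide (units 1-6).
concepts = {
    "concept 1": (1, 1, 1, 0, 0, 0),
    "concept 2": (1, 0, 1, 1, 0, 1),
    "concept 3": (0, 1, 0, 1, 0, 1),
}

# Losing unit 6 means keeping only the first five activations.
without_unit_6 = {name: pattern[:5] for name, pattern in concepts.items()}

# If the truncated patterns remain distinct, discrimination survives.
still_distinct = len(set(without_unit_6.values())) == len(concepts)
print(still_distinct)  # → True, so the answer is b) YES
```

This is the damage resistance (graceful degradation) property discussed two slides later: no single unit carries a whole concept, so losing one unit degrades rather than destroys the representation.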

Page 25

      Representation A                 Representation B
   Unit 1 Unit 2 Unit 3 Unit 4      Unit 1 Unit 2 Unit 3 Unit 4
W    1      0      0      0           1      0      0      1
X    1      0      0      0           0      1      1      0
Y    1      0      0      0           0      1      0      1
Z    1      0      0      0           1      0      1      0

Which representation is a good example of a distributed representation?
a) representation A
b) representation B
c) neither

Page 26

Advantage of Distributed Representations

• Efficiency
– Solves the combinatorial explosion problem: with n binary units, 2^n different representations are possible. (e.g., how many English words can be formed from combinations of the 26 letters of the alphabet?)

• Damage resistance
– Even if some units do not work, information is still preserved – because information is distributed across the network, performance degrades gradually as a function of damage
– (aka robustness, fault tolerance, graceful degradation)
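The efficiency point above is easy to make concrete: a localist scheme with n units can represent only n items (one unit per item), while a distributed scheme over the same n binary units can in principle represent 2^n patterns.

```python
# Number of distinct activation patterns over n binary (on/off) units.
def distributed_capacity(n):
    return 2 ** n

# Localist: one dedicated unit per item, so capacity is just n.
for n in (6, 10, 26):
    print(n, "units:", n, "localist vs", distributed_capacity(n), "distributed")
```

With 6 units (as in the slides) that is 6 versus 64 patterns; with 26 units the distributed scheme already exceeds 67 million, which is the sense in which it sidesteps combinatorial explosion.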

Page 27

Neural Network Models

• Inspired by real neurons and brain organization but are highly idealized

• Can spontaneously generalize beyond information explicitly given to network

• Retrieve information even when network is damaged (graceful degradation)

• Networks can be taught: learning is possible by changing weighted connections between nodes

Page 28

Recent Neural Network Research(since 2006)

• “Deep neural networks” by Geoff Hinton
– Demos of learning digits
– Demos of learned movements

• What is new about these networks?
– they can stack many hidden layers
– they can capture more regularities in data and generalize better
– activity can flow from input to output and vice versa

Geoff Hinton

In case you want to see more details: YouTube video

Page 29

Samples generated by the network by propagating activation from label nodes downwards to input nodes (e.g. pixels)

Graphic in this slide from Geoff Hinton

Page 30

Examples of correctly recognized handwritten digits that the neural network had never seen before

Graphic in this slide from Geoff Hinton

Page 31

Other Demos & Tools

If you are interested, here are tools to create your own neural networks and train them on data:

Hopfield network

http://www.cbu.edu/~pong/ai/hopfield/hopfieldapplet.html

Backpropagation algorithm and competitive learning:

http://www.psychology.mcmaster.ca/4i03/demos/demos.html

Competitive learning:

http://www.neuroinformatik.ruhr-uni-bochum.de/ini/VDM/research/gsn/DemoGNG/GNG.html

Various networks:

http://diwww.epfl.ch/mantra/tutorial/english/

Optical character recognition:

http://sund.de/netze/applets/BPN/bpn2/ochre.html

Brain-wave simulator

http://www.itee.uq.edu.au/%7Ecogs2010/cmc/home.html

