COMPUTATIONAL COGNITIVE SCIENCE


  • Cognitive Revolution

    The development of the computer led to the rise of cognitive psychology and artificial intelligence.

    BINAC, the Binary Automatic Computer, was developed in 1949.

  • Artificial Intelligence

    Constructing artificial, computer-based systems that produce intelligent outcomes.

    Examples:
    - Game-playing programs (e.g., Deep Blue)
    - Intelligent robots (e.g., the Mars rovers, DARPA's Urban Challenge)
    - The Netflix competition
    - Conversational agents

  • Weak vs. Strong AI

    Weak AI: using AI as a tool to understand human cognition.

    Strong AI: the claim that a properly programmed computer has a mind capable of understanding.

  • Turing Test

    Can artificial intelligence be as good as human intelligence? How can we test this?

    The Turing test (1950) was designed to test whether humans can distinguish between humans and computers on the basis of conversation. A human interrogator could ask a respondent (either a computer or a human, whose identity was hidden) any question he or she wished; based on the computer's or the human's response, the interrogator had to decide whether the answer was given by the computer or by the human.

    Alan Turing (1912-1954)

  • A classic conversational agent

    The Turing test inspired an early, satirical attempt to create a computerized Rogerian therapist, ELIZA.

    ELIZA's program is a set of If...Then rules:
    IF the person mentions "father" THEN say "Who else in your family comes to mind when you think about this?"
    IF the person writes "I remember X" THEN say "Does it make you feel happy to recall X?"

    ELIZA has no intelligence itself; the intelligence comes from people interpreting its statements.
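    The rule-following character of ELIZA can be made concrete with a short sketch. The Python below is a hypothetical, minimal rendering of the two rules quoted above (real ELIZA also swaps pronouns, which this sketch omits, so echoed phrases come back unchanged).

```python
import re

# A minimal, hypothetical sketch of ELIZA-style If...Then rules.
# Each rule pairs a regular-expression trigger with a canned response;
# these two rules mirror the examples on the slide.
RULES = [
    (re.compile(r"\bfather\b", re.IGNORECASE),
     lambda m: "Who else in your family comes to mind when you think about this?"),
    (re.compile(r"\bI remember (.+)", re.IGNORECASE),
     lambda m: f"Does it make you feel happy to recall {m.group(1)}?"),
]

def eliza_reply(utterance: str) -> str:
    """Return the response of the first matching rule, or a generic prompt."""
    for pattern, respond in RULES:
        match = pattern.search(utterance)
        if match:
            return respond(match)
    return "Please tell me more."

print(eliza_reply("I remember my first day of school"))
# -> Does it make you feel happy to recall my first day of school?
```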

  • Example of modern conversational agents

    [The following is a fragment where A = your instructor and B = a very good artificial agent.]

    A: are you human?
    B: What do you think? I am a Supreme Ruler.
    A: seriously
    B: I am always serious.
    A: who was Turing?
    B: A brilliant and tragic figure in the history of computer science.
    A: Do you think the Turing test is a good one?
    B: Some of them can.
    A: are you having a good day?
    B: I am having fun.
    A: How do you know you are having fun?
    B: By introspection.

    http://testing.turinghub.com/

  • Future of AI

    Computer chip capacity and processing speed are increasing exponentially.

    Some theorists (e.g., Ray Kurzweil) believe this will lead to a technological singularity, along with dramatic improvements in AI.

  • Computational Modeling

    Most modeling in cognitive science targets natural intelligence.

    The goal is to develop a model that mimics some aspects of human cognitive functioning and produces the same errors as humans.

    Simulations of aspects of human behaviour.

  • Why do we need computational models?

    They make vague verbal terms specific and provide the precision needed to specify complex theories.

    They provide explanations.

    They yield quantitative predictions: just as meteorologists use computer models to predict tomorrow's weather, the goal of modeling human behavior is to predict performance in novel settings.

  • Neural Networks

  • Neural Networks

    An alternative to traditional information-processing models. Also known as PDP (the parallel distributed processing approach) or connectionist models.

    David Rumelhart, Jay McClelland

  • Neural Networks

    Neural networks are networks of simple processors that operate simultaneously.

    Some biological plausibility

  • Idealized neurons (units)

    An abstract, simplified description of a neuron.

    [Diagram: inputs feed into a processor, which produces an output.]

  • Neural Networks

    Units
    Activation = activity of a unit
    Weight = strength of the connection between two units

    Learning = changing the strength of the connections between units

    Excitatory and inhibitory connections correspond to positive and negative weights, respectively.

  • An example calculation for a single (artificial) neuron

    [Diagram: the inputs from a number of units are combined to determine the overall (net) input to unit i.]

    Unit i has a threshold of 1: if its net input exceeds 1, it will respond with +1, but if the net input is less than 1, it will respond with -1.
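    As a concrete illustration, here is a minimal Python sketch of such a thresholded unit. The specific inputs and weights are assumptions chosen for illustration; they are not the values from the slide's diagram.

```python
# A sketch of the thresholded unit described above. The four inputs J1-J4
# and their connection weights are assumed values, not the diagram's.

def threshold_unit(inputs, weights, threshold=1.0):
    """Return +1 if the weighted net input exceeds the threshold, else -1."""
    net_input = sum(x * w for x, w in zip(inputs, weights))
    return +1 if net_input > threshold else -1

inputs = [+1, +1, +1, +1]        # activations of J1, J2, J3, J4 (assumed)
weights = [0.5, 0.5, 0.5, 0.5]   # connection strengths to unit i (assumed)

print(threshold_unit(inputs, weights))   # net input = 2.0 > 1, so output is +1

# With these assumed weights, flipping J3 from +1 to -1 drops the net input
# to 1.0, which no longer exceeds the threshold, so the output becomes -1.
```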

  • What would happen if we change the input J3 from +1 to -1?
    the output changes to -1 / the output stays at +1 / do not know

    What would happen if we change the input J4 from +1 to -1?
    the output changes to -1 / the output stays at +1 / do not know

  • If we want a positive correlation between the output and input J3, how should we change the weight for J3?

    make it negative / make it positive / do not know

  • Multi-layered Networks

    Activation flows from a layer of input units, through a set of hidden units, to output units.

    Weights determine how input patterns are mapped to output patterns.

    [Diagram: input units -> hidden units -> output units]

  • Multi-layered Networks

    The network can learn to associate output patterns with input patterns by adjusting its weights.

    Hidden units tend to develop internal representations of the input-output associations.

    Backpropagation is a common weight-adjustment algorithm; a sketch is given below.

    [Diagram: input units -> hidden units -> output units]
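    To make the forward pass and the weight-adjustment idea concrete, here is a minimal sketch in Python using numpy. The architecture (2 input, 3 hidden, 1 output unit), the XOR training patterns, and all parameter values are assumptions chosen for illustration; backpropagation usually drives the outputs toward the targets, though it can occasionally settle in a poor local minimum.

```python
import numpy as np

# A minimal sketch of a multi-layered network trained with backpropagation.
# Architecture, patterns, and parameters are illustrative assumptions.

rng = np.random.default_rng(1)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Input patterns and the output pattern each should be associated with (XOR).
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
T = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(scale=0.5, size=(2, 3))   # input -> hidden weights
b1 = np.zeros(3)
W2 = rng.normal(scale=0.5, size=(3, 1))   # hidden -> output weights
b2 = np.zeros(1)
lr = 0.5                                  # learning rate

for epoch in range(20000):
    # Forward pass: activation flows from input through hidden to output units.
    H = sigmoid(X @ W1 + b1)
    Y = sigmoid(H @ W2 + b2)

    # Backward pass: propagate the output error back and adjust every weight.
    delta_out = (Y - T) * Y * (1 - Y)
    delta_hid = (delta_out @ W2.T) * H * (1 - H)
    W2 -= lr * H.T @ delta_out
    b2 -= lr * delta_out.sum(axis=0)
    W1 -= lr * X.T @ delta_hid
    b1 -= lr * delta_hid.sum(axis=0)

print(np.round(Y, 2))   # typically close to the targets [0, 1, 1, 0]
```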

  • A classic neural network: NETtalk (after Hinton, 1989)

    The network learns to pronounce English words, i.e., it learns spelling-to-sound relationships. Listen to this audio demo.

  • Different ways to represent information with neural networks: localist representation

    Each unit represents just one item ("grandmother cells").

    Activations of units (0 = off, 1 = on):

                Unit 1  Unit 2  Unit 3  Unit 4  Unit 5  Unit 6
    concept 1     1       0       0       0       0       0
    concept 2     0       0       0       1       0       0
    concept 3     0       1       0       0       0       0

  • Distributed Representations (aka Coarse Coding)

    Each unit is involved in the representation of multiple items.

    Activations of units (0 = off, 1 = on):

                Unit 1  Unit 2  Unit 3  Unit 4  Unit 5  Unit 6
    concept 1     1       1       1       0       0       0
    concept 2     1       0       1       1       0       1
    concept 3     0       1       0       1       0       1

  • Suppose we lost unit 6

    Can the three concepts still be discriminated?
    NO / YES / do not know

    Activations of units (0 = off, 1 = on):

                Unit 1  Unit 2  Unit 3  Unit 4  Unit 5  Unit 6
    concept 1     1       1       1       0       0       0
    concept 2     1       0       1       1       0       1
    concept 3     0       1       0       1       0       1
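    The graceful-degradation question above can be checked directly. The Python sketch below uses the activation patterns from the reconstructed tables on these slides (treat the exact values as one reading of the original diagrams); the helper function is hypothetical.

```python
# Localist vs. distributed codes for three concepts over six units,
# following the tables above (values are one reading of the slides).
localist = {
    "concept 1": [1, 0, 0, 0, 0, 0],
    "concept 2": [0, 0, 0, 1, 0, 0],
    "concept 3": [0, 1, 0, 0, 0, 0],
}
distributed = {
    "concept 1": [1, 1, 1, 0, 0, 0],
    "concept 2": [1, 0, 1, 1, 0, 1],
    "concept 3": [0, 1, 0, 1, 0, 1],
}

def damage(patterns, lost_unit):
    """Remove one unit (1-indexed) from every concept's activation pattern."""
    return {name: p[:lost_unit - 1] + p[lost_unit:] for name, p in patterns.items()}

# Distributed code after losing unit 6: the three patterns are still distinct.
damaged = damage(distributed, lost_unit=6)
print(len(set(map(tuple, damaged.values()))) == 3)   # True

# Localist code after losing unit 1: concept 1's only active unit is gone,
# so its pattern collapses to all zeros.
print(damage(localist, lost_unit=1)["concept 1"])    # [0, 0, 0, 0, 0]
```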

  • Which representation is a good example of a distributed representation?
    representation A / representation B / neither

    Representation A:
         Unit 1  Unit 2  Unit 3  Unit 4
    W      1       0       0       0
    X      1       0       0       0
    Y      1       0       0       0
    Z      1       0       0       0

    Representation B:
         Unit 1  Unit 2  Unit 3  Unit 4
    W      1       0       0       1
    X      0       1       1       0
    Y      0       1       0       1
    Z      1       0       1       0

  • Advantage of Distributed Representations

    Efficiency: they address the combinatorial explosion problem. With n binary units, 2^n different representations are possible (compare how many English words can be built from combinations of 26 letters).

    Damage resistance: even if some units do not work, information is still preserved because it is distributed across the network, so performance degrades gradually as a function of damage (aka robustness, fault tolerance, graceful degradation).
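    A quick numerical check of the efficiency claim, using six-unit patterns like those on the earlier slides (purely illustrative):

```python
# With n binary units, 2**n distinct activation patterns are possible,
# whereas a strictly localist (one-unit-per-item) code can represent only n items.
n = 6
print(2 ** n)   # 64 possible patterns with 6 binary units
print(n)        # only 6 items under a localist code with 6 units
```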

  • Neural Network Models

    Inspired by real neurons and brain organization, but highly idealized.

    Can spontaneously generalize beyond information explicitly given to network

    Retrieve information even when network is damaged (graceful degradation)

    Networks can be taught: learning is possible by changing weighted connections between nodes

  • Recent Neural Network Research (since 2006)

    Deep neural networks by Geoff Hinton: demos of learning digits; demos of learned movements.

    What is new about these networks? They can stack many hidden layers (see the sketch below), they can capture more regularities in the data and generalize better, and activity can flow from input to output and vice versa.

    Geoff Hinton. In case you want to see more details: YouTube video.
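    The "stacking" idea amounts to passing activation through several hidden layers in turn; a minimal sketch follows. The layer sizes are illustrative (loosely echoing digit-recognition demos), and the weights are random placeholders rather than learned values.

```python
import numpy as np

# A sketch of stacking hidden layers: activation is passed through several
# hidden layers in sequence. Sizes and weights are placeholder assumptions.

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

layer_sizes = [784, 500, 500, 2000, 10]   # e.g., pixels -> ... -> label nodes
weights = [rng.normal(scale=0.1, size=(m, n))
           for m, n in zip(layer_sizes[:-1], layer_sizes[1:])]

def forward(x):
    """Propagate an input pattern upward through every hidden layer."""
    activation = x
    for W in weights:
        activation = sigmoid(activation @ W)
    return activation

digit_image = rng.random(784)        # a fake 28x28 "image", flattened
print(forward(digit_image).shape)    # (10,) -- one value per label node
```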

  • Samples generated by the network by propagating activation from the label nodes downwards to the input nodes (e.g., pixels)

    [Graphic in this slide from Geoff Hinton]

  • Examples of correctly recognized handwritten digits that the neural network had never seen before

    [Graphic in this slide from Geoff Hinton]

  • Other Demos & ToolsIf you are interested, here are tools to create your own neural networks and train it on data:

    Hopfield networkhttp://www.cbu.edu/~pong/ai/hopfield/hopfieldapplet.html

    Backpropagation algorithm and competitive learning:http://www.psychology.mcmaster.ca/4i03/demos/demos.html

    Competitive learning:http://www.neuroinformatik.ruhr-uni-bochum.de/ini/VDM/research/gsn/DemoGNG/GNG.html

    Various networks:http://diwww.epfl.ch/mantra/tutorial/english/

    Optical character recognition:http://sund.de/netze/applets/BPN/bpn2/ochre.html

    Brain-wave simulatorhttp://www.itee.uq.edu.au/%7Ecogs2010/cmc/home.html

    This new approach, developed in the late 1950s and early 1960s, was directly tied to the development of the computer (Gardner, 1985).

    Researchers seized on the computer as a model for the way in which human mental activity takes place; the computer was a tool that allowed researchers to specify the internal mechanisms that produce behavior.

    Herbert A. Simon, Allen Newell, and the linguist Noam Chomsky played a central role in this revolution, providing examples of how progress could be achieved by comparing the mind to a computing machine.

    The cognitive revolution led to a detailed conception of the form of a theory of mental activity, but to say that mental activities are like computer programs is a leap. Consider the machines that run computer programs versus the machine that produces mental activity, that is, the brain. Certainly, computers and brains look very different and are composed of different materials. Moreover, computer programs are separate from the computers that run them; the same program can run on many different machines. But the mental activities taking place in your head right now are yours and yours alone. Why should we assume that programs for computers have anything to do with mental activities produced by brains? Clearly the analogy is restricted to only certain aspects of computer programs.