Fluidly representing the world: Way, way harder than you think.

Published on 05-Jan-2016

Transcript

Fluidly representing the world: way, way harder than you think.

The One Thing to Remember a year from now: rules and statistics must DYNAMICALLY INTERACT to represent the world appropriately in computer models of intelligence. Rules are not enough. Statistics is not enough.

Let's start things off with a question that you will never forget... What do cows drink?

FIRST ANSWER (statistical, bottom-up representations, e.g., connectionism): in the unconscious, sub-cognitive part of the network, the nodes COW and DRINK spread activation to MILK. These neurons are activated without ever having heard the word "milk".

RIGHT ANSWER (top-down, rule-based representations, e.g., symbolic AI):
Facts about the world: ISA(cow, mammal); ISA(mammal, animal).
Rule 1: IF animal(X) AND thirsty(X) THEN lack_water(X).
Rule 2: IF lack_water(X) THEN drink_water(X).
Conclusion: COWS DRINK WATER.

What do cows drink? Context 1: milk. Context 2: water. Bottom-up plus top-down: statistical, bottom-up representations and rule-based representations interacting across the unconscious and conscious parts of the network.

A machine will represent the world appropriately only when, asked "What do cows drink?", it can answer either "milk" or "water", depending on the context of the question.
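To make the bottom-up story above concrete, here is a minimal spreading-activation sketch in Python. The word list, association weights, decay factor and the spread() helper are illustrative assumptions rather than the network from the slides; the point is only that activating COW and DRINK pushes MILK above every other word, even though "milk" is never said.

```python
# Toy spreading-activation sketch of the bottom-up, statistical story.
# Words, weights and decay are invented for the illustration.

ASSOCIATIONS = {
    "cow":   {"milk": 0.9, "drink": 0.3, "grass": 0.4},
    "drink": {"milk": 0.8, "water": 0.6, "thirsty": 0.3},
    "milk":  {"cow": 0.9, "white": 0.5},
    "water": {"drink": 0.6, "river": 0.4},
}

def spread(cue_words, steps=2, decay=0.5):
    """Activate the cue words, then let activation leak along associations."""
    activation = {w: 1.0 for w in cue_words}
    for _ in range(steps):
        new = dict(activation)
        for word, level in activation.items():
            for neighbour, weight in ASSOCIATIONS.get(word, {}).items():
                new[neighbour] = new.get(neighbour, 0.0) + decay * level * weight
        activation = new
    return activation

if __name__ == "__main__":
    acts = spread(["cow", "drink"])
    # "milk" wins because it is strongly associated with BOTH cue words,
    # even though nobody ever said the word: the first-answer effect.
    winner = max((w for w in acts if w not in {"cow", "drink"}), key=acts.get)
    print(winner, round(acts[winner], 2))
```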
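The top-down story can be sketched just as briefly: a toy forward-chaining engine over the ISA facts and the two IF-THEN rules quoted above. The facts and rules come from the slide; the tuple encoding and the forward_chain() and isa() helpers are assumptions made for this illustration.

```python
# Toy forward-chaining sketch of the top-down, rule-based story.

FACTS = {("isa", "cow", "mammal"), ("isa", "mammal", "animal"),
         ("thirsty", "cow")}

def isa(facts, x, kind):
    """True if x is (transitively) a kind of `kind`, following the ISA facts."""
    if ("isa", x, kind) in facts:
        return True
    return any(isa(facts, f[2], kind) for f in facts
               if f[0] == "isa" and f[1] == x)

def forward_chain(facts):
    """Apply the two rules until no new facts appear."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        new = set()
        for (rel, x, *_rest) in list(facts):
            # Rule 1: IF animal(X) AND thirsty(X) THEN lack_water(X)
            if rel == "thirsty" and isa(facts, x, "animal"):
                new.add(("lack_water", x))
            # Rule 2: IF lack_water(X) THEN drink_water(X)
            if rel == "lack_water":
                new.add(("drink_water", x))
        if not new <= facts:
            facts |= new
            changed = True
    return facts

if __name__ == "__main__":
    derived = forward_chain(FACTS)
    print(("drink_water", "cow") in derived)   # True: cows drink WATER
```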
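And here is one deliberately crude way the two views could meet: let the words of the question's context bias a bottom-up vote between the two candidate answers. The contexts, vote strengths and the answer() helper are invented for the sketch; the slides argue for a much richer dynamic interaction than a weighted sum.

```python
# Toy sketch of the combined, context-dependent picture: the same question,
# two contexts, two answers. All names and numbers are illustrative assumptions.

from collections import defaultdict

CONTEXTS = {
    "Context 1 (nursing calf, dairy farm)": ["cow", "calf", "udder", "drink"],
    "Context 2 (thirsty herd, hot day)":    ["cow", "thirsty", "trough", "drink"],
}

# Associative strengths toward the two candidate answers (assumed values).
VOTES = {
    "calf": {"milk": 0.9}, "udder": {"milk": 0.8},
    "cow": {"milk": 0.6, "water": 0.2},
    "thirsty": {"water": 0.9}, "trough": {"water": 0.7},
    "drink": {"milk": 0.4, "water": 0.4},
}

def answer(context_words):
    """Pick the answer whose bottom-up support, given THIS context, is largest."""
    score = defaultdict(float)
    for word in context_words:
        for candidate, strength in VOTES.get(word, {}).items():
            score[candidate] += strength
    return max(score, key=score.get)

if __name__ == "__main__":
    for name, words in CONTEXTS.items():
        print(name, "->", answer(words))
    # "milk" in the first context, "water" in the second: the representation
    # of a cow drinking is tailored to the context of the question.
```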
Artificial Intelligence: rules are not enough.

Rules to characterize the letter A:
- two oblique, approximately vertical lines, slanting inwards and coming together at the top of the figure;
- an approximately horizontal crossbar;
- an enclosed area above an open area;
- vertical lines longer than the crossbar.

Characterize this word:
- It says "Gutenberg".
- It is written in a Gothic font.
- It looks like very old-fashioned script.
- Germans would have an easier time reading it than Italians.
- It has a very German feel to it.
- It is nine letters long.
- It contains the letter e twice, and the letter u once.
- It means "good mountain" in English.
- It makes one think of the Vulgate Bible.
- The t is written in mirror-image...
- ... and it can also be read perfectly upside-down!

Question: But is that really a property of "Gutenberg"? Answer: Yes, in some contexts. But does that mean that the representation of "Gutenberg" must always include "can be read upside-down when written in a VERY special way in pseudo-Gothic script"?

Moral of the story: in the real world, a fixed set of rules, however long, is never sufficient to cover all possible contexts in which an object or a situation occurs. But the statistics of the environment are not enough, either.

Statistics is not enough: without top-down conceptual knowledge we have no hope of understanding the following image.
- A dark spot. Hmmm. Doesn't look like anything.
- Pictures often have faces in them. Is that a face in the lower-left-hand corner?
- Nah, it doesn't join up with anything else.
- Oh, THERE's a face.
- But, again, it doesn't make sense; it's just an isolated face.
- Let's look at that dark spot again. A shadow?
- Hey, TREES produce shadows. Is there a tree around? THAT could be a tree!
- If that's a tree, that could be a metal grating.
- But trees with metal gratings like that are on sidewalks. So where's the kerb?
- If this is a kerb, it should go on straight. Does it? Yes, sort of. Now what's this thing in the middle?
- That sort of looks like a dog's head. That could make sense.
- But heads are attached to bodies, so there should be a body. Hey, that looks like it could be a front leg.
- The rest fits pretty well with that interpretation, too. Why so spotty?
- A dalmatian, yeah, sure, drinking from a road gutter under a tree.

Dynamically converging on the appropriate representation: bottom-up and top-down processing together yield a context-dependent representation. (A toy sketch of this kind of convergence appears at the end of this transcript.) N.B. The representation must allow a machine to understand the object in essentially the same way we do.

Consider representing an ordinary, everyday object. The object: a cobblestone. A cobblestone is like... a brick, asphalt, the hand-brake of a car, a paper-weight, a sharpening steel, a ruler, a weapon, a brain, a weight in balance pans, a nutcracker, Carrara marble, a bottle opener, a turnip (in French), an anchor for a (little) boat, a tent peg, a hammer, etc. ... and a voter's ballot: May 1968, Paris, "UNDER 21, this is your ballot."

So, how could a machine ever produce context-dependent representations? Searching for an analogy.

Context: I was late in reviewing a paper. The editor, a friend, said, "Please get it done now." I put the paper on my cluttered desk, in plain view, so that it simply COULD NOT ESCAPE MY ATTENTION; it would keep making me feel guilty till I got it done. I wanted to find a nice way of saying that I had devised a way for the paper to keep bugging me till I had gotten it done.

Bottom-up: the statistics-based, sub-cognitive, unconscious part of the network. Top-down: the semantic, rule-based, conscious part of the network. The target mapping: "swat the mosquito" onto "do the review". Something that won't go away until you've taken care of it:
- a hungry child: no, too harsh to relate a paper to review to a hungry child;
- an itch: no, too easy to just scratch it, and papers are hard to review;
- dandelions: you can't make them go away, ever;
- a mosquito: until you get up and swat it, it NEVER stops buzzing!

Representations of an object or situation must always be tailored to the context in which they are found. A machine must be able to do this automatically.

The Solar System representation used in BACON (Langley et al., 1980), derived from Kepler's data. (A toy law-finding sketch in the same spirit appears at the end of this transcript.)

Is the appropriate representation of this figure just the juxtaposition of 64 of these? And if so, just how is the flickering that we humans see represented?

Fluidly Representing the World: the Hard Problem of Artificial Intelligence

Discovering how a system could develop dynamically evolving, context-sensitive representations is almost certainly THE hard problem of artificial intelligence. Moreover, it always has been; it just hasn't been recognized as such. We argue for a continual interaction between bottom-up and top-down processing, allowing the system, on its own, to dynamically converge to the context-dependent representations it needs. This ability is at the heart of our own intelligence, and AI MUST face the challenge of learning how to fluidly represent the world if there is ever to be any chance of developing autonomous agents with even a glimmer of real human intelligence.
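As a footnote to the Dalmatian sequence narrated above, here is a toy sketch of that kind of convergence: weak bottom-up guesses about each blob, plus top-down expectations that an accepted hypothesis unlocks (a tree suggests a shadow and a grating, a grating suggests a kerb, a head suggests a body). The blobs, scores, threshold and expectation table are invented for the illustration; only the overall loop mirrors the talk's argument.

```python
# Toy sketch of Dalmatian-style convergence: bottom-up evidence plus
# top-down expectations. All numbers and labels are invented.

# Bottom-up evidence: how strongly each blob resembles each label.
EVIDENCE = {
    "dark_patch":   {"shadow": 0.4, "nothing": 0.45},
    "radial_lines": {"metal_grating": 0.3, "noise": 0.4},
    "long_edge":    {"kerb": 0.3, "noise": 0.4},
    "round_blob":   {"dog_head": 0.4, "face": 0.35},
    "spotty_mass":  {"dog_body": 0.2, "noise": 0.4},
    "tall_shape":   {"tree": 0.6, "noise": 0.3},
}

# Top-down expectations: accepting the key label boosts the related labels.
EXPECTATIONS = {
    "tree":          {"shadow": 0.4, "metal_grating": 0.3},
    "shadow":        {"tree": 0.2},
    "metal_grating": {"kerb": 0.4},
    "kerb":          {"dog_head": 0.2},
    "dog_head":      {"dog_body": 0.5},
    "dog_body":      {"dog_head": 0.2},
}

def interpret(evidence, expectations, threshold=0.55):
    """Repeatedly accept the best-supported label and propagate expectations."""
    support = {blob: dict(labels) for blob, labels in evidence.items()}
    accepted = {}
    changed = True
    while changed:
        changed = False
        for blob, labels in support.items():
            if blob in accepted:
                continue
            label, score = max(labels.items(), key=lambda kv: kv[1])
            if score >= threshold:
                accepted[blob] = label              # commit to this reading...
                changed = True
                for other_label, boost in expectations.get(label, {}).items():
                    for other in support.values():  # ...and let it bias the rest
                        if other_label in other:
                            other[other_label] += boost
    return accepted

if __name__ == "__main__":
    print(interpret(EVIDENCE, EXPECTATIONS))
    # With EXPECTATIONS replaced by {}, only the tree is ever accepted.
```

Run with an empty expectation table, only the tree crosses the threshold; with the top-down boosts, every blob ends up with a scene-consistent label. That is the "statistics is not enough" point in miniature.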
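And as a footnote to the BACON slide, a toy law-finder in the same spirit: given approximate distances and periods for six planets, it searches small integer powers of D and P for a combination that stays nearly constant. The data are standard textbook values; the two helper functions are a drastic simplification of BACON's actual heuristics, not the program of Langley et al. (1980).

```python
# Toy BACON-flavoured sketch: look for a simple invariant in Kepler's data.

PLANETS = {            # distance in AU, period in years (approximate)
    "Mercury": (0.387, 0.241),
    "Venus":   (0.723, 0.615),
    "Earth":   (1.000, 1.000),
    "Mars":    (1.524, 1.881),
    "Jupiter": (5.203, 11.862),
    "Saturn":  (9.539, 29.457),
}

def is_constant(values, tolerance=0.02):
    """The key test: does this derived quantity vary by only a few percent?"""
    return (max(values) - min(values)) / max(values) < tolerance

def search_for_law(data, max_power=3):
    """Try D**a / P**b for small integer powers and report the first invariant."""
    for a in range(1, max_power + 1):
        for b in range(1, max_power + 1):
            values = [d**a / p**b for d, p in data.values()]
            if is_constant(values):
                return f"D^{a} / P^{b} is constant (about {sum(values)/len(values):.3f})"
    return "no simple law found"

if __name__ == "__main__":
    # Expected output: D^3 / P^2 is constant, i.e. Kepler's third law.
    print(search_for_law(PLANETS))
```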