Illustration: a semantic network linking Tree, Wood, Table, Chair, and Plastic with relations such as "comes from," "is made of," "is used for," and "sitting on." The History and Status of the Debate — Map 3 of 7: Can Physical Symbol Systems Think? An Issue Map™ Publication 3 124 Lucy Suchman, 1987 Situated action can explain the use of plans without representations. Plans are enacted in the course of practical activities, and are best explained by ethnomethodology, which doesn't presuppose the existence of representations in its explanations of social practices. According to ethnomethodology: Practices sometimes may be explained merely by materials in the local environment of the culture. A difference in practices does not necessarily mean a difference in plan or internal representation. The structure of systems of practice is not essentially ordered by rules or norms that can be … Can physical symbol systems learn as humans do? Does the situated action paradigm show that computers can't think? Can the elements of thinking be represented in discrete symbolic form? 42 Humans learn by adding symbolic data to a knowledge base. Both machines and people learn by adding symbolically encoded information to a knowledge base. 44 Hubert Dreyfus and Stuart Dreyfus, 1986 Computers never move beyond explicit rules. In acquiring skills such as the ability to drive a car or play chess, humans initially use explicit rules and then advance through a series of stages in which performance becomes increasingly skilled, fluid, and habituated. At the highest levels of expertise, the rules are no longer consulted. Computers, on the other hand, are tied to the use of explicit rules and can't move beyond them to more flexible forms of expertise. Can symbolic representations account for human thought? 94 George Lakoff, 1987 AI models lack the feature of the embodiment of concepts. There is evidence that the body is involved in many processes that are considered to be pure information processing in classical AI. These processes include recent discoveries concerning basic-level concepts, kinesthetic image schemas, and the experiential basis of metaphorical concepts. Note: Also, see sidebar, "Postulates of Experiential Realism," on this map. Does thinking require a body? Can physical symbol systems think dialectically? 30 Joseph Rychlak, 1991 Agency is due to predication and choice. Automatic decision-making processes, even if they possess delaying mechanisms, still cannot choose their own goals or reflect on the meaning of their alternatives. The standards they use to choose between alternatives are not a result of their own status as agents. 40 Martin Heidegger, 1927 The spatiality of equipment. Human beings organize space into areas of nearness and "farness" relative to their needs and concerns. They do not organize space into a three-dimensional system of discrete coordinates. Is the relation between hardware and software similar to that between human brains and minds? Other physical symbol systems arguments Can a symbolic knowledge base represent human understanding? Does mental processing rely on heuristic search? Do humans use rules as physical symbol systems do? 4 Graham Button, Jeff Coulter, John R. E. Lee, and Wes Sharrock, 1995 Neurons cannot represent rules of ordinary language. AI assumes that rules are ultimately represented in the brain. 
But neurons don't provide a symbolic medium in which rules can be inspected and modified, so they are not appropriate as a medium for the formulation of rules for ordinary language. We don't use neurons like we use rules. Note: See the "Do humans use rules as physical symbol systems do?" arguments on this map. Also, see sidebar, "Postulates of Ordinary Language," on this map. Do physical symbol systems play chess as humans do? 86 The problem of commonsense knowledge. Human commonsense knowledge is so vast that it can never be adequately represented by propositional data in a knowledge base. Dreyfus factors this problem into three parts. "1. How everyday knowledge must be organized so that one can make inferences from it. 2. How skills or know-how can be represented as knowing-that. 3. How relevant knowledge can be brought to bear in particular situations" (1992, p. xviii). This last problem has been called the access problem, that is, the problem of how to efficiently access data in a knowledge base. The access problem is also relevant to heuristic search (see the "Heuristic Search" arguments on this map), because in large knowledge bases the problem of accessing information is also the problem of searching for it. 98 Zenon Pylyshyn, 1974 The body is not essential to intelligence. The body is important to the development or genesis of intelligence (as Jean Piaget has shown), but not to its ultimate form. By the time a human reaches adulthood, the body is no longer essential. If the body were essential to intelligence, Dreyfus would have to claim that an adult quadriplegic is unintelligent. Because it is possible to model adult intelligence without simulating its development, it is possible to put intelligence in a computer without a body. 95 John Haugeland, 1995 The mind–body–world system. Mind, body, and world communicate vast amounts of information to one another across "wide-bandwidth" channels—so much so that they are integrated into a single system. For example, as a person drives to San Jose, his or her mind doesn't operate like a classical symbol system, solving problems by communicating comparatively tiny instructions at "narrow-bandwidth" transducers. In a trip to San Jose, the brain, the fingers, and the road are in constant wide-bandwidth "collaboration," acting as a single, integrated system. Notes: Haugeland supports his view by citing Dreyfus (see "The Body Is Essential to Human Intelligence," Box 96), Gibson (see "Affordances are Features of the Environment," Box 114), and Brooks (see sidebar, "Postulates of Subsumption Architecture," on this map). Haugeland also argues, with some help from Brooks, that the mind–body–world system does not use classical representations. Therefore, Haugeland's argument also disputes the representationalist assumption. 84 John McCarthy and Patrick J. Hayes, 1969 The frame problem. General reasoning requires that a system make relevant inferences while excluding irrelevant inferences. This process requires "frame axioms," which specify those properties of the world that remain unchanged when an action is carried out. Specifying such axioms in advance cannot be carried out in any simple way. Notes: McCarthy and Hayes are not, strictly speaking, disputing the knowledge base assumption. They raise the frame problem as an important (and solvable) issue for AI researchers to deal with. This definition of the frame problem is itself the subject of dispute. 43 George Lakoff, 1987 Motivational factors cause different rates of learning. 
Humans learn more efficiently when motivated by supplementary knowledge, for example, by metaphorically motivated knowledge. But for computers the opposite is the case: they work less efficiently when forced to deal with supplementary knowledge. For example, it is easier for us to learn about the flow of electricity given our knowledge of flowing waters, but in a computer such knowledge just adds more complexity for it to deal with. Note: The electricity example is drawn from Gentner and Gentner (1982). 91 George Lakoff, 1987 The predicate calculus cannot capture human reasoning. Many knowledge-based systems encode information with some version of the predicate calculus, which is based on the classical concept of a category. But the classical view of categories has been disproved by empirical evidence. Note: Also, see sidebar, "Postulates of Experiential Realism," on this map. com • mon • sense knowl • edge: Our everyday understanding of the world. Generally, such knowledge consists in pretheoretical information that seems obvious when explicitly stated: for example, our knowledge that tables generally have 4 legs and are something people put things on, or that people generally bring gifts to birthday parties. It has been said that commonsense knowledge consists not of the information contained in an encyclopedia, but rather in all the information necessary to read and understand an encyclopedia in the first place. 85 Francisco Varela, Evan Thompson, and Eleanor Rosch, 1991 Independent domains can't be aggregated to model common sense. Current models from the representationalist perspective attempt to reconstruct common sense from the combination of a variety of small domains, but the world isn't composed of separate discrete domains of knowledge that can be represented in isolation from each other. To provide an adequate theory of common sense, we must provide an account of context-dependent know-how. 122 William Clancey, 1993 Manipulation of symbols doesn't encompass all of human thought. Every interaction with the environment is an instance of learning. We do not experience the world in terms of preestablished categories; we create categories as we go along through a dialectical process of coadaptation between cognitive and perceptual systems. Thus, representations do not have a formal structure independent of their application. 103 The representationalist assumption. 
Symbol structures are internal representations of external reality. They are made up of symbols, and are operated on by rules, search, and other psychological processes. Symbolic representations have a constituent structure, in that the meaning of a given representation is a function of the meaning of its constituent parts. Disputed by "The Front-End Assumption Is Dubious," Map 1, Box 74. Notes: Symbol structures in this sense are often referred to as mental representations or classical representations. For more on this classical AI theory of representation, see Newell and Simon (1976), Fodor (1975), and Pylyshyn (1984). Much of the debate between connectionism and classical AI is focused on the issue of mental representation. One of AI's major charges against connectionism is that connectionist networks can't model constituent structure. See the "Can connectionist networks exhibit systematicity?" arguments on Map 5. Illustration: the sentence "The cat is on the mat" as an example of a symbol structure with constituent parts. 75 John McCarthy, 1990a Lighthill's categories are irrelevant to AI research. The Lighthill Report implies that AI research is only useful insofar as it contributes to industrial applications (category A) and to neuroscience (category C). But AI has goals of its own. AI researchers study structures of information and problem solving independently of how these structures are realized in humans and animals. Note: An early variant of this argument was made by Professor D. Michie, in an appendix to the Lighthill Report. 80 John McCarthy, 1977 Formalized non-monotonic logic. Formalized non-monotonic logic is a development of formal logic that allows for the introduction of new axioms to invalidate old theorems. In this way non-monotonic logics account for the human ability to revise assumptions in light of new observations. Note: This definition is adapted from McDermott and Doyle (1980). (A small illustrative sketch of this pattern appears below, after Box 73.) 77 John McCarthy, 1996 What is the easiest thing computers can't do? Dreyfus makes some vague arguments about logic-based AI, but he never issues a precise, testable challenge. "What's the least complex intellectual behavior that you think humans can do and computers can't?" (p. 149) —John McCarthy 100 Hubert Dreyfus, 1992 Madeleine has bodily and imaginative skills. Although Madeleine is blind and uses a wheelchair, she has a body with an inside and an outside and can be moved around in the world. She can also communicate with others and imagine how they encounter the world. The claim that Madeleine acquired common sense solely from books ignores these bodily and imaginative factors. 101 Harry Collins, 1996 If Madeleine can learn common sense, then so can a computer. If someone with as nonstandard a body as Madeleine can acquire commonsense social knowledge, then a computer with its own nonstandard body—a fixed metal box—could also acquire commonsense knowledge. If we can figure out what process Madeleine went through to become a socialized human, then we might apply that process to a computer as well. Note: For more on Collins's views about socialization, see the "Can computers reason scientifically?" arguments on Map 1. 73 AI programs are brittle. Because symbolic AI programs use rigid rules and data structures, they cannot adapt fluidly to changing environments and ambiguous circumstances. Symbol structures are brittle—they break apart under the pressure of a novel or ambiguous situation. Note: Versions of this claim are widely discussed in the literature and on these maps. For example, brittleness is discussed by Hofstadter (see "The Front-End Assumption Is Dubious," Map 1, Box 74), Brooks (see sidebar, "Postulates of Subsumption Architecture," on this map), Dreyfus (see sidebar, "Postulates of Dreideggereanism," on this map), and the connectionists (see Map 5). It is also raised in the context of fuzzy logic. 
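The following minimal sketch (an editorial illustration, not part of the map; the bird example and function names are invented) shows the non-monotonic pattern described in Box 80: a conclusion drawn by default is withdrawn when a new fact is added, something a monotonic logic cannot do.

```python
# Minimal illustration of non-monotonic (default) reasoning, in the spirit of Box 80.
# A conclusion drawn by default ("Tweety flies") is retracted once a new fact
# ("Tweety is a penguin") is added.

def flies(individual, facts):
    """Default rule: birds fly, unless the facts list an exception."""
    if ("penguin", individual) in facts:
        return False               # the exception defeats the default
    return ("bird", individual) in facts

facts = {("bird", "tweety")}
print(flies("tweety", facts))      # True  -- concluded by default

facts.add(("penguin", "tweety"))   # a new observation arrives
print(flies("tweety", facts))      # False -- the old conclusion is withdrawn
```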
99 Doug Lenat, 1992, as articulated by Hubert Dreyfus, 1992 Madeleine reasons without a body. Madeleine, a patient discussed by Oliver Sacks, used a wheelchair, was blind, and was unable to read Braille. In effect, she lacked a body. Yet she still managed to acquire commonsense knowledge from books that were read to her. Her experience shows that having a body is not essential to human reasoning. Illustration: the sentence "The girl ran down the street." 26 Joseph Rychlak, 1991 Symbol systems can't exhibit agency. Symbol processors can never have agency. Agency requires teleology, dialectical reasoning, and free will. Computers lack these traits because they don't understand meanings, the relations between them, and their relation to the world. Note: For similar arguments, see the "Can computers have free will?" arguments on Map 1. 51 Ludwig Wittgenstein, 1953 The infinite regress of rules. If all nonarbitrary human behavior is governed by rules, then rules must be specified in order to apply the original rules, and further rules must be specified for those rules, ad infinitum. 47 Anticipated by Alan Turing, 1950 Impossible to write every rule. It is impossible to provide rules for every eventuality that a computer might face. 56 Noam Chomsky, 1965, 1980 Grammars are rule-based systems. The grammar of a language is a system of rules for the production of sentences. These rules are part of the unconscious "deep structure" of language. 55 Immanuel Kant, 1781 All concepts are rules. Concepts are rules for combining the elements of perception into objective representations. This "synthesis" of experience presupposes rules of consciousness as well as a rule-governed world. 54 Graham Button, Jeff Coulter, John R. E. Lee, and Wes Sharrock, 1995 AI rules cannot explain ordinary language. The project of accounting for ordinary human language in terms of explicit AI rules runs into the following problems. Our linguistic practices are too open-ended to be captured by a set of explicit rules. Human language is essentially embedded in a context of use, and it continually adapts to this context of use. Ordinary language gets its meaning from practical involvement in "language games," not from its parsable grammatical structures. 52 David Rumelhart, James McClelland, and the PDP Research Group, 1986 Explicit rules are unnecessary. Connectionist networks exhibit lawful behavior without following explicit rules. Regularities emerge from the interactions of low-level processing units, rather than from the application of high-level rules. Although it may be possible to characterize a network's behavior according to high-level rules, none are involved in its underlying mechanisms. 58 Combinatorial explosion of search. When the number of paths in a search space grows exponentially, a combinatorial explosion results: the search becomes too long to be carried out, given time and memory constraints. To make such searches more efficient, methods of estimation called "heuristics" have been invented. Note: Combinatorial explosion also affects the problem of representing commonsense knowledge. See "Combinatorial Explosion of Knowledge," Box 92. 
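As an editorial illustration of Boxes 57–58 (not drawn from the map), the sketch below contrasts uninformed search with search guided by a heuristic estimate on a small grid: both find the goal, but the heuristic version expands far fewer states. The grid, its size, and the function names are invented for the example.

```python
# Sketch of the heuristic search idea in Boxes 57-58: on a small grid, uninformed
# breadth-first search expands far more states than a best-first search guided by
# a heuristic estimate of the remaining distance to the goal.
import heapq
from collections import deque

N, start, goal = 15, (0, 0), (14, 14)

def neighbors(p):
    x, y = p
    return [(x + dx, y + dy) for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))
            if 0 <= x + dx < N and 0 <= y + dy < N]

def breadth_first():
    frontier, seen, expanded = deque([start]), {start}, 0
    while frontier:
        p = frontier.popleft(); expanded += 1
        if p == goal:
            return expanded
        for q in neighbors(p):
            if q not in seen:
                seen.add(q); frontier.append(q)

def best_first():
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])   # Manhattan-distance heuristic
    frontier, seen, expanded = [(h(start), start)], {start}, 0
    while frontier:
        _, p = heapq.heappop(frontier); expanded += 1
        if p == goal:
            return expanded
        for q in neighbors(p):
            if q not in seen:
                seen.add(q); heapq.heappush(frontier, (h(q), q))

print(breadth_first(), "states expanded without a heuristic")
print(best_first(), "states expanded with a heuristic")
```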
59 Hubert Dreyfus, 1972 Trial and error is different from essential discrimination. Humans are able to intuitively grasp what is essential or inessential about a problem. Symbol systems lack the capacity for "essential discrimination," and proceed blindly by brute-force trial and error. Adding heuristic rules to the system is only a stopgap measure. Humans recognize what is necessary automatically, by zeroing in on the essential nature of a problem. Note: Dreyfus derives his notion of essential discrimination from the work of Gestalt psychologist Max Wertheimer. Postulates of Dreideggereanism Dreideggereanism is Hubert Dreyfus's application of Heideggerean phenomenology to issues in AI and philosophy of mind. 1. Our basic way of being in the world is coping with equipment (rather than relating to the world by way of mental representations). 2. Coping skills can't be formalized. 3. Coping takes place against a background of general familiarity. 4. Familiarity is a kind of coping, but it is not directed at any particular task. Heidegger calls this background familiarity an understanding of being. 5. On the basis of familiarity, human beings are able to determine what is relevant to what, what to pay attention to, and what to do in any given situation. 6. Expertise consists of responding to a situation similar to one that has occurred in the past in a way similar to a response that has worked in the past. 7. Similarity is basic and cannot be analyzed in terms of shared features. 8. Expertise is arrived at in 5 stages, beginning with rule-like responses to specific features and ending with responses to whole situations. 9. Situations cannot be specified in terms of features. Adapted from Dreyfus (1972, 1991) and Dreyfus and Dreyfus (1986). 49 Hubert Dreyfus, 1972 Humans behave in an orderly manner without recourse to rules. Human activity may be described by rules, but these rules are not necessarily followed in producing the activity. For example, if I wave my hand in the air, touch something accidentally, and move my hand back to that spot, I am performing a complex series of movements which can be described geometrically, but the only principle I follow is, "Do that again." Similarly, the planets are not solving differential equations as they revolve around the sun, even if their movements can be described by differential equations. "What do I need rules for?" 116 James Greeno and Joyce Moore, 1993 Affordances cannot be redefined as symbols. Affordances aren't like symbols. They don't mediate between the organism and its environment. Affordances are directly "picked up" by the organism. So it is inappropriate to redefine affordances as symbols. 50 Hubert Dreyfus, 1972 Programmed behavior is either strictly rule-like or arbitrary. In confronting a new usage of language, machines face a dilemma. Either the machine must treat the new usage of language as a case that falls under existing rules, in which case rules covering all cases must be built in beforehand (see "The Infinite Regress of Rules," Box 51); or the machine must take a "blind stab" at interpretation and then update its rule base, in which case the machine is behaving in an arbitrary, and hence nonhuman, fashion. In either case, the machine is not like a human. 
A native speaker, by contrast, is embedded in a context of human life, which allows him or her to make sense of utterances in a non-rule-like yet nonarbitrary way. 41 Hubert Dreyfus, 1972 The context antinomy. For a machine to understand sentences in a natural language, it must place those sentences in a context. However, because machines use explicit bits of data, they run up against an antinomy. Either there is an ultimate context that requires no interpretation, in which case we are forced to postulate a set of facts that have fixed relevance, regardless of the situation—but no such set of facts exists. Or there is a broader context of facts that determines which facts are relevant to a given sentence, in which case that context must itself be interpreted, so that we face an infinite regress of broader and broader contexts. In either case, explicit data cannot account for the understanding of natural language. Humans avoid this antinomy because they recognize the present situation as a continuation of past situations, and on that basis determine what is relevant to understanding a sentence. 87 Douglas Lenat and Edward A. Feigenbaum, 1991 Computers will be able to demonstrate common sense with a large enough database. A large database containing more than 10 million statements of facts about everyday life, history, physics, and so forth will be able to exhibit common sense. We've got to "bite the bullet" and enter in lots of information. 88 Hubert Dreyfus, 1992 CYC will not be able to demonstrate common sense. CYC is inconsistent with the phenomenology of skilled coping. The project of encoding human knowledge in a vast database is beset by the following problems. Human beings cope successfully without consulting facts in a knowledge base. Skills and know-how resist representation as propositional knowing-that. The more human beings know, the more quickly they bring the relevant knowledge to bear in a situation. In a computer, the reverse is true. 83 Søren Kierkegaard, 1844, as articulated by Hubert Dreyfus, 1972 The leap. There are times when a person makes a "leap" to a new sphere of existence, which infuses his or her being with a new order of significance. Such leaps are so radical that afterwards we cannot imagine how life could have ever been otherwise. Unmapped Territory Additional non-monotonic arguments Unmapped Territory Additional frame problem arguments Unmapped Territory Additional Chomsky arguments 107 Leopold Löwenheim, 1915; Thoralf Skolem, 1922 The Löwenheim-Skolem theorem. If a countable collection of sentences in a first-order language describes a model (that is, some state of affairs in the world), then it describes more than one model. Note: Hilary Putnam's version (1981) of this argument employs a strengthened version of the Löwenheim-Skolem theorem and claims that the theorem shows that any statement in natural language has an unintended interpretation if it has any interpretation at all. 16 David Rumelhart, James McClelland, and the PDP Research Group, 1986 Graceful degradation. Because processing in the brain is distributed, its performance diminishes in proportion to the degree of neuronal damage or noisy input. The performance of the brain "gracefully degrades" in problematic circumstances. In von Neumann machines, by contrast, a single disruption or glitch will generally have catastrophic consequences for the system as a whole. 17 Keith Butler, 1993a Classical machines implemented in connectionist networks can gracefully degrade. 
If a classical symbol system were implemented in a connectionist network, then the symbol structures could have the kind of internal distribution that Chater and Oaksford require. In such a case, the symbol system could exhibit graceful degradation. 15 Nick Chater and Mike Oaksford, 1990 Spatial distribution is inadequate. Fodor and Pylyshyn claim that symbolic representations can be physically distributed in memory. But that kind of distribution does not afford the right kind of damage tolerance. Representations must be internally distributed in a connectionist activation pattern rather than merely spatially distributed in memory. Note: For more on connectionist representations, see the "Can connectionist networks exhibit systematicity?" arguments on Map 5. That's the wrong kind of distribution. is disputed by 97 Maurice Merleau-Ponty, 1962 The body is a synergic system. All parts of the body are interrelated in a "synergic system." Hands, feet, arms, and so forth are not juxtaposed in a coordinate space, but are enveloped in a frontier of interrelations: sights can be heard, sounds can be seen, and motor abilities can pass from one limb to another. 35 John Barnden, 1987 Patterns of activity may be "fuzzed" symbols. The "distinctive, stereotypic state of activity" (p. 174) that Skarda and Freeman correlate with the inhalation of a learned odor may correspond to the symbolic output of the olfactory system, which the brain then uses in classical symbol-manipulation style. It is consistent with AI to allow that these symbols "embody a certain amount of 'fuzz'" (p. 174). That is, the states of activity produced by the olfactory system may correspond to rough approximations of classical discrete symbols. 36 Christine A. Skarda and Walter J. Freeman, 1987 Fuzzed symbols are not classical symbols. The patterns of activity that Barnden correlates with symbols are not like classical symbols, which can be composed and manipulated by well-understood logical operations. Dynamic patterns of neural activity are context dependent and are only roughly correlated with events in the world. Note: For discussion of nonclassical symbolic representations, see the "Can connectionist networks exhibit systematicity?" arguments on Map 5. 37 Hubert Dreyfus, 1972 Explicit values cannot organize a field of experience. Human interest organizes a field of experience that cannot be captured by explicit goals and values. Specific goals must be checked at preset intervals, but human concerns pervade experience. When concerns are made explicit they lose their pervasive character. Human needs only become explicit after they have been fulfilled. 34 Christine A. Skarda and Walter J. Freeman, 1987 Nonsymbolic explanations of smelling. Rabbits discriminate odors by a process of chaotic self-organizing activity at the level of the neural tissue. Symbols and symbol structures don't explain this process as well as the mathematics of nonlinear dynamics and chaos does. Note: Skarda and Freeman use similar reasoning to dispute "The Rule-Following Assumption," (Box 46) (because neurons don't follow rules) and the central control aspect of "The Brain Has a von Neumann Architecture," (Box 12) (because neural activity is self-organizing). They also argue against representationalism in general, with its assumption of the existence of plans, goals, scripts, and so forth. 33 As articulated by Hubert Dreyfus, 1972 Reductionistic science paradigm. To model and understand the world, divide phenomena into simple elements and relations. 
Scientists, from Galileo onwards, have analyzed the world using this reductionist program and have made significant progress. 38 John Dewey, 1922 Goals pervade activity. Humans experience goals as pervasive elements of present activity, rather than as fixed ends to be worked toward. History of the Symbolic Data Assumption The idea that the world can be understood as composed of discrete elements has a long history in the philosophical tradition. Plato, for example, believed that all true knowledge (knowledge of piety, justice, the good, etc.) must be statable in explicit definitions that would act like rules telling us how to behave. René Descartes proposed that certainty in knowledge could only be obtained by relying on what he called "clear and distinct ideas," that is, ideas that are clear in themselves and distinct from other ideas. Gottfried Leibniz, one of the originators of the idea that a machine could think, also introduced the notion of a monad, which, as a basic component of reality, "is nothing but a simple substance that enters into composites—simple, that is, without parts" (Leibniz, 1714, p. 1). The British Empiricists (notably, John Locke, George Berkeley, and David Hume) held that all knowledge comes to us through experience in the form of discrete "simple ideas," which combine to form "complex ideas." The notion that ideas are discrete continued in the work of Hume, who claimed that "every distinct perception which enters into the composition of the mind, is a distinct existence, and is different, distinguishable, and separable from every other perception, either contemporary or successive" (1739, p. 259). In the 19th century, Gottlob Frege invented the first complete formalization of predicate logic. Frege regarded thoughts as discrete entities composed of concepts and relations between them. In the 20th century, Bertrand Russell founded logical atomism, in which the world is understood in terms of atomic propositions that are either true or false. Ludwig Wittgenstein articulated his early version of logical atomism as follows. "1. The world is all that is the case. 1.1. The world is the totality of facts, not of things. 1.11. The world is determined by the facts, and by their being all the facts" (1922, p. 5). Much of this history was first recounted (in the context of artificial intelligence) by Dreyfus (1972). 39 Martin Heidegger, 1927 Human beings are called to take a stand on who they are. Human beings take a stand on who they are by living "for the sake of" being teachers, doctors, and so forth. Such "for-the-sake-of-whiches" structure human activity in a pervasive and nondeterminate way. 32 The symbolic data assumption. The elements of thinking are represented in discrete symbolic form. Beliefs, desires, goals, values, plans, and so forth are represented by such data and by their combination into higher-order symbol structures like knowledge bases. 89 Doug Lenat and R. V. Guha, 1990 CYC. CYC is a massive database containing millions of statements and a complex inference engine that represents commonsense knowledge well enough to intelligently process data. Implemented Model 
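As an editorial illustration of the symbolic data assumption (Box 32) and of the knowledge-base approach debated in Boxes 87–89, the toy sketch below stores commonsense facts as symbolic triples and derives a new fact with a single forward-chaining rule. It is not CYC's actual representation language (CycL), and every name in it is invented for the example.

```python
# Toy knowledge base in the spirit of Boxes 32 and 87-89 (illustrative only).
# Commonsense facts are stored as symbolic triples, and a simple forward-chaining
# rule derives new facts from old ones.

facts = {
    ("bird", "is-a", "animal"),
    ("canary", "is-a", "bird"),
    ("bird", "can", "fly"),
}

def forward_chain(facts):
    """If X is-a Y and Y can Z, conclude X can Z (transitivity over is-a is omitted)."""
    derived, changed = set(facts), True
    while changed:
        changed = False
        for (x, r1, y) in list(derived):
            for (y2, r2, z) in list(derived):
                if r1 == "is-a" and r2 == "can" and y == y2 and (x, "can", z) not in derived:
                    derived.add((x, "can", z))
                    changed = True
    return derived

print(("canary", "can", "fly") in forward_chain(facts))   # True
```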
92 Hubert Dreyfus, 1972; James Lighthill, 1973; and others Combinatorial explosion of knowledge. Representing all of the information relevant to an open-ended domain, or to human commonsense understanding in general, is an impossible task, because it results in a combinatorial explosion of relevant information. The number of facts that must be encoded to scale up from a series of small, independent domains to the totality of commonsense knowledge is insurmountably large. Notes: Combinatorial explosion also affects the problem of searching databases. See "Combinatorial Explosion of Search," Box 58. This problem has been raised in the context of the Turing test (see "Combinatorial Explosion Makes the All-Possible-Conversations Machine Impossible," Map 2, Box 103). 90 James Lighthill, 1973 Past disappointments. The number of facts necessary to capture the entirety of commonsense knowledge is insurmountably large. Machine agents can only cope successfully in limited domains, such as the game of checkers, where a small number of facts exhaustively describe the agent's world. To scale up from such artificial worlds to the real world simply by adding more and more facts is not possible, because this leads to a combinatorial explosion of the number of ways in which elements can be grouped in a knowledge base. Because of these problems, AI workers have little more than "past disappointments" to their credit. Supported by "The Lighthill Report," Box 74. Note: This argument summarizes a number of Lighthill's opinions, and it represents in early form versions of the problem of toy worlds, brittleness, combinatorial explosion, and commonsense knowledge, all of which are well-known problems today. Postulates of Subsumption Architecture 1. A subsumption architecture machine (SAM) has sensors and actuators for interaction with the real world. 2. A SAM is composed of "layers," that is, "activity-producing subsystems" (p. 146). These layers interact with each other and with the world. 3. Each layer is a complete functional unit; that is, it functions autonomously of other layers, without depending on any central control unit. 4. Layers can be added gradually to SAMs, which can thus develop incrementally. "We must incrementally build up the capabilities of intelligent systems, having complete systems at each step of the way and thus automatically ensure that the pieces and their interfaces are valid" (p. 140). 5. Each layer enables a specific behavior (e.g., obstacle avoidance, exploration, aluminum-can retrieval, etc.). 6. Any given layer can "subsume" the action of attached layers in order to further its own goals, without completely wresting control from the attached layers. 7. A SAM interacts with the "real world," that is, the dynamic world that humans act in; it does not act in a refined, abstracted, or limited "toy world." "At each step we should build complete intelligent systems that we let loose in the real world with real sensing and real action. Anything less provides a candidate with which we can delude ourselves" (p. 140). From Brooks (1991). "Subsumption architecture machine" (SAM) is our own term. 109 Cog Shop, 1998 COG. Led by Rodney Brooks, the Cog Shop has developed a humanoid robot composed of a series of interconnected (but independently functional) systems controlled by a set of parallel processors. COG currently consists of a trunk, head, arms, and a sophisticated visual system. Planned additions include hands, vocalization ability, a vestibular system, and skin. COG is intended to be as fully a part of the real world as possible, both by having a body and by interacting in a real (as opposed to a "toy") environment. Implemented Model 
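A minimal editorial sketch of the layering idea described in the sidebar above (not Brooks's implementation; the two layers and the sensor keys are invented): each layer maps sensor readings to an action, and a higher-priority layer subsumes the one below it only when its condition fires.

```python
# Minimal sketch of the subsumption-architecture layering idea (illustrative only).
# Each layer maps sensor readings to an action; a higher layer may subsume
# (override) the layer below it.

def wander(sensors):
    return "move-forward"                      # layer 0: default exploratory behavior

def avoid_obstacles(sensors):
    if sensors.get("obstacle-ahead"):
        return "turn-left"                     # layer 1: fires only when needed
    return None                                # otherwise defer to lower layers

LAYERS = [avoid_obstacles, wander]             # highest-priority layer first

def control_step(sensors):
    for layer in LAYERS:
        action = layer(sensors)
        if action is not None:                 # the higher layer subsumes the lower ones
            return action

print(control_step({"obstacle-ahead": False}))  # move-forward
print(control_step({"obstacle-ahead": True}))   # turn-left
```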
108 Rodney Brooks, 1991 The world is its own best representation. Modeling intelligence requires that a system be directly interfaced with the world through sensors and actuators. These "layers" of interaction with the environment approximate the sensorimotor dynamics of the human body and allow the world to act as its own representation. This removes the necessity of using the inefficient global representations favored by AI and has the advantages of: better response time, because no global representation has to be changed whenever the environment changes; increased robustness, because the system is less likely to crash due to some unpredictable change in the world. 105 George Lakoff, 1987 Objectivism is in conflict with empirical studies of natural human categories. The objectivist paradigm—which influences most contemporary cognitive science—rests on a classical theory of categories that is disproved by a wide body of empirical evidence concerning basic-level concepts, kinesthetic image schemas, metaphorical concepts, metonymic models, and others. Illustration: "The cat is on the mat"—"on" is an embodied concept. 106 George Lakoff, 1987 The objectivist account of cognition is inconsistent. The objectivist account makes inconsistent assumptions. 1. The meaning of a sentence is a function that assigns a truth value to the sentence for each possible world. (This is a standard definition of meaning in objectivist semantics.) 2. The meaning of the parts cannot be changed without changing the meaning of the whole. (This is a requirement of any theory of meaning.) To see the contradiction, notice the following: Changing the meaning of the parts of a sentence should change the meaning of the sentence as a whole (by 2). This implies that the truth value of the sentence will also change for some possible world (by 1). But we can construct a sentence in which the meaning of its parts changes but its truth value remains the same in all possible worlds. So we have a contradiction. Note: In unpacking the claim, Lakoff draws on an argument by Putnam (1981), which extends the Löwenheim-Skolem theorem from first-order logic to higher-order logic. 104 As articulated by George Lakoff, 1987 The objectivist account of cognition. Symbol structures are meaningful by virtue of their correspondence with entities and categories in the world. They are mirrors of nature, which "re-present" or "make present again" what exists in external reality. Without their connection to the world, symbol structures are meaningless. 115 Alonso Vera and Herbert Simon, 1993a Affordances are just symbolic representations. Affordances are nothing but internal representations whose symbolic character is concealed from consciousness. Acquiring affordances is a matter of encoding sensory stimuli as symbolic representations. 121 Alonso Vera and Herbert Simon, 1993b That the symbol systems approach is a worldview is not crucial. We do present the symbol systems approach as a worldview. But our criticisms of situated action are specific. We address several principal theses propounded by the situated action school. 123 Alonso Vera and Herbert Simon, 1993c Clancey's account of symbols is too limited. Clancey errs in his criticism of the symbol system hypothesis and in his claims for situated action. The symbol systems hypothesis does not limit itself to linguistic symbols: anything is a symbol that is both patterned and denotative. Clancey's ideas about the creation of new categories are too radical and are not supported by the research. 
Some relatively constant categories are necessary and their existence has been demonstrated in animals. The situated action paradigm is untestable, whereas there is much experimental evidence in support of the symbol systems hypothesis. 120 Phillip E. Agre, 1993 The symbol systems approach is a worldview, not a theory. What is central to the symbol systems approach is a set of metaphors, such as "inside" and "outside." These metaphors constitute a worldview rather than a theory, because they provide a framework for explaining the facts but don't provide explanations themselves. This worldview can't explain interactional entities that don't, strictly speaking, belong to the external world or to the internal mind. 112 J. Y. Lettvin, Humberto Maturana, Warren McCulloch, and Walter Pitts, 1959 Frog retinas provide information without processing representations. Instead of sending a detailed representation of the environment to its brain, the frog retina only sends that information that is most relevant to the frog's needs. 111 Humberto Maturana, 1970, as articulated by Terry Winograd and Fernando Flores, 1986 Biological action is situated. Cognition and language need to be interpreted biologically. The notion of representation is inadequate for this task. Cognition and thinking are best described as a history of "structural coupling" between an organism and its environmental niche. 113 The situated action paradigm. This approach to artificial intelligence emphasizes the situated and embodied character of cognition, and the role of affordances in the acquisition of perceptual information and the control of behavior. 117 Alonso Vera and Herbert Simon, 1993c Affordances need to be redefined. Explaining the mechanisms of perception and cognition requires understanding how information is encoded in the brain. Because affordances are not representations, they provide no help in understanding that coding process. On the other hand, those models that have been helpful in explaining the coding process have been symbolic. So it is best to redefine affordances in terms of symbols. is supported by 119 Alonso Vera and Herbert Simon, 1993a The mechanisms described by situated action are symbol systems. All of the mechanisms suggested by the situated action program can be interpreted as physical symbol systems. The claims of situated action have been achieved in existing symbolic models. 114 James Gibson, 1977, 1979 Affordances are features of the environment. Affordances are directly perceived higher-order properties of things in the environment. For example, water affords swimming and floating; hills afford climbing; plains afford walking and running; foods afford eating; and so on. Affordances aren't representations used as guides. They are environmental factors that an organism directly "picks up" in order to maintain itself. sit-on-able drinkable climb-on-able Unmapped Territory Additional Gibson AI arguments 102 Hubert Dreyfus, 1996 Collins's understanding of Madeleine is science fiction. Madeleine was nothing like an immobile box. In crawling, kicking, balancing, overcoming obstacles, finding optimal distances for listening, and so forth, the baby Madeleine had enough of a body structure to allow her to be socialized into our human world. 67 Feng-hsiung Hsu, Thomas Anantharaman, Murray Campbell, and Andreas Nowatzyk, 1990; IBM, 1998 Deep Blue. 
A chess-playing computer that defeated world champion Garry Kasparov in 2 out of 6 games in their 1997 rematch, Deep Blue is an IBM parallel computer running 256 chess processors, which allow it to consider 50–100 billion moves in the 3 minutes allotted to a player. In addition to deep search of possible moves, Deep Blue consults a database of opening games and endgames played by chess masters over the last 100 years. Deep Blue's success is based on engineering rather than on emulation. Even if it doesn't think like a human, it is relevant to business applications that require rapid management of large amounts of data. Deep Blue is the descendant of earlier work by graduate students at Carnegie Mellon University. Unmapped Territory Additional chess and other game-playing arguments Implemented Model 61 Computers play expert-level chess using heuristic search. Chess programs play expert-level chess using advanced search techniques and heuristics. 64 Hubert Dreyfus, 1972 Heuristic search is inconsistent with human phenomenology. Human experts play chess by "zeroing in" on relevant moves in fringe consciousness, rather than by iterating through a list of possibilities. Note: Dreyfus uses similar considerations to dispute machine translation, natural language understanding, and pattern recognition, which also require fringe consciousness and zeroing in. 66 William James, 1890, as articulated by Hubert Dreyfus, 1972 Humans zero in on information in fringe consciousness. The fringes of consciousness provide marginal awareness of "background" information. For example, the experience of the front of a house is fringed by an awareness of the back of the house. In chess, "cues from all over the board, while remaining on the fringes of consciousness, draw attention to certain sectors by making them appear promising, dangerous, or simply worth looking into" (p. 104). Note: In applying James's theory of the fringe to chess, Dreyfus is drawing on the work of Michael Polanyi (1962). 62 John Strom and Lindley Darden, 1996 Deep heuristic search has been used to achieve expert performance. Deep Thought (the precursor to Deep Blue) plays chess at the grandmaster level by considering more moves than any previous system. Its success demonstrates the effectiveness of deep heuristic search, and shows that AI is not, as Dreyfus thinks it is, a degenerating research program. In fact, given the success of AI, it may be Dreyfus's criticisms that are degenerating. 65 Hubert Dreyfus, 1972 Humans see the chess board as a Gestalt whole. Humans see a chess board as an organized pattern or Gestalt. Any move is part of the unfolding Gestalt pattern. Past experiences and the history of the current game work together to build up an integrated awareness of "lines of force, the loci of strength and weaknesses, as well as specific positions" (p. 105). In this way, chess masters zero in on unprotected pieces and promising areas for attack or defense. Note: Also, see the "Can computers recognize Gestalts?" arguments on Map 5. 
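As an editorial illustration of the kind of game-tree search referred to in Boxes 61 and 67 (Deep Blue's actual search, with alpha-beta pruning, parallel hardware, and chess-specific evaluation, is far more elaborate), the sketch below shows depth-limited minimax with a heuristic evaluation applied at the search horizon. The toy "game" and its evaluation function are invented for the example.

```python
# Depth-limited minimax with a heuristic evaluation at the horizon -- the generic
# pattern behind chess search engines (illustrative only). `moves` and `evaluate`
# are toy stand-ins for a real engine's move generator and position evaluator.

def minimax(state, depth, maximizing, moves, evaluate):
    if depth == 0:
        return evaluate(state)          # heuristic estimate at the search horizon
    values = [minimax(s, depth - 1, not maximizing, moves, evaluate)
              for s in moves(state)]
    return max(values) if maximizing else min(values)

# Toy game: a state is a number, each move adds 1 or 2, and the evaluation
# simply prefers larger numbers.
moves = lambda n: [n + 1, n + 2]
evaluate = lambda n: n
print(minimax(0, 4, True, moves, evaluate))   # 6: max gains 2 per ply, min concedes only 1
```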
76 Hubert Dreyfus, 1972, 1979, 1992 The critique of artificial reason. Artificial intelligence is the culmination of a flawed tradition in philosophy that tries to explain human reason in terms of explicit rules, symbols, and calculating procedures. The assumptions of AI are implicitly critiqued by a range of 20th-century thinkers (Maurice Merleau-Ponty, Martin Heidegger, Michael Polanyi, Thomas Kuhn, etc.), who show how the nature of human activity and being differs from that of calculating machines like computers. Note: This is a general statement of Dreyfus's critique. Specific instances of his critique are spread throughout this map. 74 James Lighthill, 1973 The Lighthill Report. There are three kinds of work in AI. A: Advanced automation. The use of computers to replace human beings in various military, industrial, and scientific tasks. C: Computer-based study of the central nervous system. The use of computers in the study of the brain, as a way of testing hypotheses about the cerebellum, visual cortex, and so forth. B: Bridge activity. The use of computers to study phenomena that fall between categories A and C—in particular, the building of robots as a way of studying general intelligence. Work in categories A and C is legitimate. But work in category B is plagued by a variety of problems (including combinatorial explosion, past failures, limited worlds, and poor performance) and therefore seems unlikely to succeed. Notes: Also, see "Past Disappointments," Box 90. The Lighthill Report was commissioned by the Science Research Council in Britain to help the council make funding decisions for work in AI. Lighthill was chosen as a member of the scientific community who could make an unbiased assessment of the field. His report is said to have had devastating effects on AI funding in Britain during the '70s. 45 Joseph Rychlak, 1991 Learning is a process of interpretation. Most AI theorists model learning on a Lockean paradigm that takes repetition and contiguity of perceptions as the primary way in which new concepts are learned. For example, we learn how to spell by repeatedly seeing how words are spelled and which letters are contiguous to each other. But such a model does not pay sufficient attention to the role of meaning in learning. Learning occurs when a mind that reasons dialectically and predicationally interprets what it perceives in terms of the meaning of what it perceives. Because computers don't work with meanings, interpretation and the relevant kind of learning are impossible for them. Note: Rychlak's further arguments about artificial intelligence can be found in the "Can physical symbol systems think dialectically?" arguments on this map. Unmapped Territory Other approaches to AI learning Postulates of Ordinary Language 1. The purpose of philosophy is the resolution of conceptual confusion through the analysis of ordinary language. 2. Ordinary language is our primary medium of thought. 3. Problems of philosophy arise through the misuse or incomplete understanding of ordinary language. 4. The proper method for analyzing language is through the explication of commonsense everyday use of language. 5. Language can be studied in terms of "language games," in which rules of usage are analyzed as if they were the rules of a game. 6. Language can only be understood semantically from within ordinary language and language games. This means there is no formal metalanguage relevant to philosophy. 7. The rules of ordinary language are adapted to the various practical purposes of human conduct, and as such they must be open to examination and modification. 8. Rules for the use of language are like rules for playing games; they are not like the parsing rules posited by AI researchers. Ordinary language rules provide guides and signposts for the use of language rather than a description of how language is generated. Proponents include Ludwig Wittgenstein; Graham Button, Jeff Coulter, John R. E. 
Lee, and Wes Sharrock; Charles Karelis (Map 2); Hans Obermeier (Map 4); and Gilbert Ryle (Maps 2 and 6). Other notable proponents include John L. Austin, John Wisdom, and Stanley Cavell. 53 Jerry Fodor and Zenon Pylyshyn, 1988 Classicists are not committed to explicit rules. The possibility of implicit rules does not argue against the classical symbolic framework, because there is a wide body of work within the classicist camp that shows how implicit rules can be modeled. In fact, most classicists agree that at least some rules must be implicit. However, the possibility of explicit rule-based systems does argue against connectionism, because connectionist networks cannot encode such rules. Note: For an example of an argument (cited by Fodor and Pylyshyn) that says that explicit rules can't be encoded by connectionist networks, see "The Past-Tense Model Does Not Argue Against Rule-Based Explanation," Map 5, Box 24. 48 Alan Turing, 1950 Being regulated by natural laws implies being a rule-governed machine. The rules objection confuses rules of conduct with laws of behavior. It is true that we cannot formulate a complete set of rules of conduct for human performance. But human behavior is still governed by natural laws. And because these laws of behavior can, in principle, be given a mechanical description, it is also possible to build a machine to fit this description. Thus, humans are a kind of machine governed by rules. laws of be • hav • ior: "Laws of nature as applied to man's body, such as 'If you pinch him he will squeak'" (Turing, 1950, p. 452). rules of con • duct: "Precepts, such as 'Stop if you see red lights,' on which one can act, and of which one can be conscious" (Turing, 1950, p. 452). 25 Joseph Rychlak, 1991 Symbol systems cannot think dialectically. Computers can't reason dialectically because they can't synthesize opposing meanings into a new meaning. All they can do is shuffle symbols that are given meaning by a programmer. Note: Rychlak never claims that machines can't think. He just claims that they can't think dialectically or predicationally. He allows that they have demonstrative reasoning. Illustration: a cycle of assertion, denial, and new synthesis, followed by further denial and synthesis. Postulates of the Dialectical Paradigm 1. Dialectical reasoning involves oppositional and predicational thinking, and is only possible for teleological beings who can "see" from another's point of view. 2. Oppositional thinking proceeds by assertion, denial, and new synthesis. An assertion is made and is opposed by a denial of that assertion. Out of this opposition, a new synthesis arises and transforms the original assertion by changing its context and/or scope. 3. "Predication involves the cognitive act of affirming, denying, or qualifying broader patterns of meaning in relation to narrower or targeted patterns of meaning" (Rychlak, 1991, p. 7). 4. Teleological beings can engage in various goal-directed activities that place them in a meaningful relation to the world. 5. Computers reason mediationally. In other words, they work with symbols in abstraction from their relation to meanings and the world. 6. Computers reason demonstratively. They can draw conclusions based on valid patterns of inference between symbols, but without working with the meanings of those symbols. 7. Computers cannot reason dialectically because they are nonteleological and nonpredicational. They do not understand meanings or the relations between them. 
31 John Locke, 1690 Delaying action provides us with free will. Free will stems from the power to refrain from action. Delaying the impulse to act provides room for other actions to be considered. This time in which alternatives are weighed provides the appearance of free will. The choices that are made during the delay need not arise from some metaphysical freedom of the subject; free will is just the consideration of alternatives. 28 Joseph Rychlak, 1991 Negative feedback can't explain teleology. Goal-directedness and feedback activity cannot explain agency. An explanation of agency must address the formation of goals and beliefs and the ability to change those goals and beliefs. Current machines only seek the goals they are programmed to seek. 29 Marvin Minsky, 1986, as articulated by Joseph Rychlak, 1991 Delayed mediational processes in machines can generate agency. By telling ourselves that we have made choices, we make ourselves agents. What we know is that free will occurs when decision-making processes are delayed to allow more alternatives to be tested. Systems capable of that kind of delayed decision making have as much free will as humans do, and thus, any computer system with those processes can be said to have agency. 27 Arturo Rosenblueth, Norbert Wiener, and Julian Bigelow, 1943 Some machines behave teleologically. Machines with servomechanisms are purposive. In fact, any machine that can respond to negative feedback for guidance behaves teleologically. What separates such machines from humans is that humans have an added ability to make higher-order predictions about the behavior of other objects. 24 Nick Chater and Mike Oaksford, 1990 The 100-step constraint is relevant to the cognitive level. It is not true that the 100-step constraint is an irrelevant implementation detail. An algorithm that classical systems execute in millions of time-steps may not be executable in 100 time-steps. The 100-step constraint poses a nontrivial challenge to classical AI, because it "severely [limits] the class of cognitively plausible algorithms" (p. 95). 18 David Rumelhart, James McClelland, and the PDP Research Group, 1986 The brain processes information in parallel. Von Neumann machines process information sequentially, one bit at a time. The brain receives and manipulates massive amounts of information at the same time, in parallel. 20 Herbert Simon, 1995 Thought is serial despite being implemented in a parallel architecture. At the symbolic level, human thinking is a serial process, despite the parallelism of its neural implementation. For example, to multiply 5 by 12 requires taking several serial steps, in which 5 is multiplied by 2, then by 10, and then the results are added together. 19 Jerry Fodor and Zenon Pylyshyn, 1988 Symbol processing can take place in parallel. It is possible to implement a classical system in a parallel architecture—for example, by executing multiple symbolic processes at the same time. So parallel processing systems like connectionist networks don't have any principled advantage over classical symbol systems. 23 Jerry Fodor and Zenon Pylyshyn, 1988 The 100-step constraint is directed at the implementation level. All the 100-step constraint demonstrates is the obvious fact that symbolic thought processes are implemented differently in the brain than they are on a digital computer. The 100-step constraint has to do with implementation details rather than with real cognitive processes. The 100-step constraint is a mere implementation detail. 22 Jerome Feldman, 1985 The 100-step constraint. Algorithms that model cognitive processes must meet a 100-step constraint imposed by the timescale of the brain, which performs complex tasks in about 100 time-steps. Classical sequential algorithms currently run in millions of time-steps, so it seems unlikely that they can meet this 100-step constraint. 
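The arithmetic usually offered behind the figure in Box 22 (an editorial reconstruction, not stated in the box; the particular numbers are the commonly assumed ones): a neuron needs a few milliseconds per basic operation, while a complex recognition task is completed in roughly half a second, leaving on the order of a hundred serial steps.

```latex
\[
N_{\text{serial steps}} \;\approx\; \frac{T_{\text{task}}}{\tau_{\text{neuron}}}
\;\approx\; \frac{500\ \text{ms}}{5\ \text{ms}} \;=\; 100
\]
```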
21 Geoffrey Hinton, James McClelland, and David Rumelhart, 1986 The brain accesses information by content rather than by memory address. Humans rapidly access memories by way of their contents. For example, memories about the president are accessed by information about the president (his or her name, face, etc.), not by way of an explicit address. Connectionist networks and the brain have "content-addressable" memories of this type. 14 Jerry Fodor and Zenon Pylyshyn, 1988 Symbol structures can be distributed. A classical symbol processor can be physically distributed in memory, and can thereby exhibit graceful degradation. So distributed systems like connectionist networks don't have any principled advantage over physical symbol systems. 7 David Rumelhart, James McClelland, and the PDP Research Group, 1986 Neurons receive thousands of times more input than logic gates. Neurons are connected to 1,000–100,000 other neurons. Logic gates are connected to only a handful of other logic gates. The difference indicates that the brain does not use the kind of logical circuitry found in digital computers. 8 Jack Copeland, 1993 Neurons are diversely structured. Computer logic gates consist of variations on a single structure. The brain, by contrast, consists of many different kinds of neurons. Illustration: a Purkinje cell, a pyramidal cell, and a retinal bipolar cell. 9 John von Neumann, 1958, as articulated by Hubert Dreyfus, 1972 The brain is an analogue device. Even if neuron firings are all-or-none, the message pulses that carry neural information are analogue. They involve complex graded and nonlinear factors. So the brain seems to be an analogue device. Note: Von Neumann subjects the relationship between brain and computer to extensive analysis, and his classic lectures on the subject are still relevant today. 13 David Rumelhart, James McClelland, and the PDP Research Group, 1986 Processing in the brain is distributed. Processing in the brain is not mediated by some central control. Neural processing is distributed—in other words, many regions contribute to the performance of any particular task. 5 Neurons operate like logic gates. The neurons of the brain are similar to the logic gates of a digital computer. Their all-or-none firing potential gives them a discrete character and allows them to be combined into the AND gates, OR gates, and other logic gates that underlie digital computation. (A minimal threshold-unit sketch appears below, after Box 3.) 3 The biological assumption. The brain is the hardware (or "wetware") on which the software of the mind is run. Thinking is a symbolic process that is implemented in the neurons of the brain and that can also be implemented in the circuits of a digital computer. Note: Also, see the "Is the brain a computer?" arguments on Map 1, the "Is biological naturalism valid?" arguments on Map 4, the "Are connectionist networks like human neural networks?" arguments on Map 5, and sidebar, "Formal Systems: An Overview," on Map 7. 
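A minimal editorial sketch of the idea in Boxes 5 and 6: an all-or-none threshold unit can implement Boolean logic gates. The weights and thresholds below are illustrative choices, not taken from McCulloch and Pitts.

```python
# Sketch of Boxes 5-6: an all-or-none threshold unit behaves like a logic gate
# (illustrative only; McCulloch and Pitts's 1943 formalism is stated in terms of
# nets of such units).

def threshold_unit(inputs, weights, threshold):
    """Fire (return 1) iff the weighted sum of inputs reaches the threshold."""
    return 1 if sum(i * w for i, w in zip(inputs, weights)) >= threshold else 0

AND = lambda a, b: threshold_unit([a, b], [1, 1], 2)   # fires only if both inputs fire
OR  = lambda a, b: threshold_unit([a, b], [1, 1], 1)   # fires if either input fires
NOT = lambda a:    threshold_unit([a], [-1], 0)        # an inhibitory input blocks firing

print(AND(1, 1), AND(1, 0), OR(1, 0), NOT(1), NOT(0))  # 1 0 1 0 1
```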
Symbolic data is searched using various methods of estimation or "heuristics," which make the search more efficient. Newell and Simon Heuristic search hypothesis: The solutions to problems are represented as symbol structures. A physical symbol system exercises its intelligence in problem solving by search—that is, by generating and progressively modifying symbol structures until it produces a solution structure (1976, p. 120). 1. A physical symbol system is physical (that is, made up of some physical matter) is a specific kind of system (that is, a set of components functioning through time in some definable manner) that manipulates instances of symbols. 2. Symbols can be thought of as elements that are connected and governed by a set of relations called a symbol structure. The physical instances of the elements (or tokens) are manipulated in the system. 3. An information process is any process that has symbol structures for at least some of its inputs or outputs. 4. An information processing system is a physical symbol system that consists of information processes. 5. Symbol structures are classified into data structures programs. 6. A program is a symbol structure that designates the sequence of information processes (including inputs and outputs) that will be executed by the elementary information processes of the processor. 7. Memory is the component of an information processing system that stores symbol structures. 8. Elementary information processes are transformations that a processor can perform upon symbol structures (e.g., comparing and determining equality, deleting, placing in memory, retrieving from memory, etc.). 9. A processor is a component of an information processing system that consists of: a fixed set of elementary information processes, a short-term memory that stores the input and output symbol structures of the elementary information processes, and an interpreter that determines the sequence of elementary information processes to be executed as a function of the symbol structures in short-term memory. 10. The external environment of the system consists of "readable" stimuli. Reading consists of creating internal symbol structures in memory that designate external stimuli. Writing is the operation of emitting the responses to the external environment that are commanded by the internal symbol structures. Adapted from Newell and Simon (1972, chap. 2). Proponents include Jerry Fodor, Allen Newell, Herbert Simon, John McCarthy, Zenon Pylyshyn, early Marvin Minsky, Doug Lenat, Edward Feigenbaum, and Pat Hayes. Postulates of the Physical Symbol Systems Hypothesis 10 Zenon Pylyshyn, 1974 Analogue systems cannot represent general concepts. Analogue devices only capture particular sensory patterns. They cannot (by themselves) be used to recognize and process universal concepts. For example, an analogue retinal image of a chair is not by itself adequate to represent the universal concept of a chair. The retinal image must be recognized as a typical chair pattern and must be associated with a verbal label by some sort of discrete digital mechanism. Note: Pylyshyn allows that some analogue computation may be important in practice, and he thinks it is likely that practical AI systems will use hybrid analogue–digital mechanisms. He is arguing against the claim that all mental computation is analogue. 11 Zenon Pylyshyn, 1974 Purely analogue machines lack the flexibility of digital machines. A purely analogue device cannot make contingent if-then branches. 
That is, analogue devices cannot do "one thing under one set of circumstances and a completely different thing under a discretely different set of circumstances" (p. 68). For that reason, purely analogue machines inherently lack the flexibility of universal digital machines. This limitation can be overcome by adding a discrete threshold element, but that still does not make an analogue device the best way to represent cognitive processes. 12 The brain has a von Neumann architecture. The following features of the von Neumann architecture also characterize processing in the brain. Processing is sequential. Symbol strings are stored and accessed at specific memory addresses. There is a central processing unit that controls processing. Note: For more on the assumption of a central control, see "Searle Assumes a Central Locus of Control," Map 4, Box 75. Memory Processing Control Output Central Processing Unit Input 6 Warren McCulloch and Walter Pitts, 1943 The logical calculus of neural activity. Because of their all-or- none threshold, the activity of neurons can be completely described in terms of logical operations. The firing of a neuron is like the assertion of a proposition, and relations between neural firings are like logical relations between propositions. 3 2 1 is supported by 60 Zenon Pylyshyn, 1974 The best heuristics aren't just trial and error. Heuristic searches don't necessarily rely on trial and error. Programs can be structured so that a heuristic method moves the search ever closer to a problem solution without the need for redundant backtracking. Such programs approximate human "zeroing in." 63 Hubert Dreyfus, 1996 Brute-force search is not how humans play chess. I never predicted failure for brute-force techniques in chess playing. All I argue is that brute- force techniques such as heuristic search are not psychologically realistic. Strom and Darden blur the distinction between AI as a form of psychology and AI as any sort of technique that uses symbolic representation. 96 Hubert Dreyfus, 1972 The body is essential to human intelligence. Possession of a body is essential to human reasoning, pattern recognition, and interaction. Understand- ing what a chair is, for example, presupposes knowledge of how the body sits, bends, fatigues, 93 The disembodied mind assumption. Thinking is an abstract process that does not require the presence of a body. Notes: The notion of embodiment is used by many situated action theorists. See, for example, "COG," Box 109. On the notion that the mind is disembodied, see the "Is the brain a computer?" arguments on Map 1 and the "Can functional states generate consciousness?" arguments on Map 6. does not require eth • no • meth • od • ol • o • gy: An approach to social science, deriving from Garfinkel (1957), that focuses on everyday cultural practices and activities in the social world. Ethnomethodology provides useful models for how plans are enacted and designed. 125 Martin Heidegger, 1927 Representations are not involved in concernful activity. In ongoing concernful activity no representations are necessary. A carpenter hammers nails without having any explicit representation of the hammer. It is only in cases of "breakdown" that the hammer emerges from the background of equipment and is represented as an object with properties. For example, if the hammer slips from the carpenter's grasp it might then be seen as an object with the property of being too light, too slippery, and so forth. 
But prior to the breakdown, the carpenter had no representation of the hammer. Note: See "Computers Never Move Beyond Explicit Rules," Box 44, which clarifies the role representations play in the development of skill. is supported by 110 Terry Winograd and Fernando Flores, 1986 The representational tradition is flawed. Classical symbolic AI places too much emphasis on the role of representations in human intelligence. Actual human thinking only becomes symbolic and representational when normal modes of cognition break down. Future work in AI should concern itself with interactive design, task-specific projects, and the role of computers as machines for the facilitation of communication, rather than with representational symbol systems. Note: Winograd and Flores are influenced by Dreyfus's approach. See sidebar, "Postulates of Dreideggereanism," on this map and "The Critique of Artificial Reason," Box 76. is disputed by 46 The rule-following assumption. Humans, like machines, behave intelligently by following rules, which can, in principle, be spelled out as explicit if-then statements. Notes: Also, see the "Do connectionist networks follow rules?" arguments on Map 5. Because heuristic searches are sometimes described in terms of "heuristic rules," the "Does mental processing rely on heuristic search?" arguments on this map are relevant to this region. If Then is disputed by is supported by is supported by heu • ris • tic search: A search that uses special knowledge about a problem domain to find solutions more efficiently. For example, a search of possible moves in a chess game could be aided by a set of heuristics that tell the computer to avoid useless lines of attack, to maintain center control, and so forth. 2 Allen Newell and Herbert Simon, 1976 Physical symbol systems can think. Thinking in a physical symbol system is a formal computational process characterized by rule-governed symbol manipulation the drawing of inferences from large knowledge bases heuristic search of data structures operations on representational structures planning and goal-directed activity The symbolic processes that constitute thinking are formal in that they are independent of any particular physical instantiation. In humans, thinking is instantiated in the neurons of the brain. In computers, thinking is realized in silicon circuits. Notes: This is a standard interpretation of the field but it is by no means shared by everyone. This summary is meant to emphasize those aspects of artificial intelligence research that are relevant to this map. The physical symbol systems hypothesis and functionalism are closely related. The physical symbol systems hypothesis proposes an architecture for simulating and studying intelligence. Functionalism is a philosophical position that is used to justify this architecture. Functionalism was developed in part as a response to behaviorism (see the "Is the test, behaviorally or operationally construed, a legitimate intelligence test?" arguments on Map 2). For more on functionalism, see the "Can functional states generate consciousness?" arguments on Map 6. The same symbol systems can also be instantiated in a computer. In humans, symbol systems are instantiated in the brain. Rule-governed manipulation of symbolic representational structures Thinking = 72 George Lakoff, 1987 Some conceptual organizations can't be translated into a universal conceptual framework. Different conceptual systems have different conceptual organizations, and those differences are cognitively significant. 
If the different organizations are translated into a universal conceptual framework, significant organizational differences are eliminated. Humans can shift between frameworks without eliminating these differences. Note: Lakoff cites Mixtec (a native Mexican language), as well as the work of Benjamin Whorf (1956) in making this argument. Mixtec speaker Yuu wa hiyaa cu-mesa. (Stone the be-located belly-table)] 79 John McCarthy, 1996 Logic-based AI is making steady progress. Dreyfus ignores slow but definite progress in logic-based AI. For example, formalized non-monotonic logics have been used successfully to address the problem of understanding ambiguity. Dreyfus gives no compelling reason to suppose that such developments will end in failure. 71 Articulated by George Lakoff, 1987 The universal conceptual framework assumption. There is a neutral and completely general conceptual structure in which all knowledge can be represented. For a machine to understand natural language, it must translate sentences into this universal conceptual framework. 70 John Laird, Allen Newell, and Paul Rosenbloom, 1987 SOAR. SOAR is a general intelligence system that learns by "chunking," that is, by collapsing the work of satisfying a subgoal into a single condition-action rule, or "production." It searches its explicit representations heuristically by means- ends analysis using goals and subgoals. Implemented Model 69 John McCarthy, 1979 Thermostats can have beliefs. Beliefs can reasonably be ascribed to an entity when its actions are: consistent, the result of observation, and in accordance with goals. Because the behavior of thermostats meet those criteria, it is reasonable to say they have beliefs. Note: These are the criteria for belief ascription that McCarthy mentions in connection with the thermostat example. He discusses further conditions elsewhere. 68 Jerry Fodor, 1975 The language of thought. The language of thought (also known as mentalese) is a formal language that mental processes operate on. Like spoken language, the language of thought has a combinatorial syntax and semantics. Just as complex sentences are generated from combinations of words, complex mental representations are generated from combinations of simpler representations. Although no formal theory of the language of thought has yet been fully successful, the search for one is the goal of computational psychology. Note: For more arguments about this aspect of AI, see "The Representationalist Assumption," Box 103, and the "Can connectionist networks exhibit systematicity?" arguments on Map 5. Unmapped Territory Additional language of thought arguments Start Here 1 Alan Turing, 1950 Yes, machines can (or will be able to) think. A computational system can possess all important elements of human thinking or understanding. Alan Turing I believe that at the end of the century ... one will be able to speak of machines thinking without expecting to be contradicted. 82 Hubert Dreyfus, 1972 Creative discoveries radically restructure human knowledge. Large knowledge bases organize and process data in a relatively fixed manner. Human knowledge, by contrast, is subject to radical restructuring on the basis of "creative discoveries," which can alter a person's entire understanding of the world. Such fundamental shifts can take place at personal, conceptual, and cultural levels. Note: In this context Dreyfus also discusses Thomas Kuhn's notion of paradigm shift. 78 Hubert Dreyfus, 1996 Get an AI system to understand, "Mary saw a dog in the window. 
She wanted it." When faced with the sentences, "Mary saw a dog in the window. She wanted it," it is difficult for an AI system to know whether "it" refers to the dog or the window. John McCarthy says this problem is "within the capacity of some current parsers" (1996, p. 190). But interpreting the sentence requires bodily know-how and empathetic imagination. Figuring out what pronouns refer to in such sentences is the next problem logic-based AI should try to deal with. Note: The example, "Mary saw a dog in the window. She wanted it," comes from Doug Lenat (quoted in Dreyfus, 1992 pp. xixxx). PETS 118 Jerry Fodor and Zenon Pylyshyn, 1981 Affordances are trivial. Affordances are just another name for whatever it is in the environment that makes an organism respond as it does. But such a notion can't provide a substantial explanation of perception. For example, to say we recognize a shoe by perceiving its property of being a shoe doesn't explain anything. Affordances add nothing new to our knowledge of the mechanisms behind perception. Focus Box: The lowest-numbered box in each issue area is an introductory focus box. The focus box introduces and summarizes the core dispute of each issue area, sometimes as an assumption and sometimes as a general claim with no particular author. Arguments with No Authors: Arguments that are not attributable to a particular source (e.g., general philosophical positions, broad concepts, common tests in artificial intelligence) are listed with no accompanying author. Citations: Complete bibliographic citations can be found in the booklet that accompanies this map. Methodology: A further discussion of argumentation analysis methodology can be found in the booklet that accompanies this map. © 1998 R. E. Horn. All rights reserved. Version 1.0 Legend The arguments on these maps are organized by links that carry a range of meanings: A distinctive reconfiguration of an earlier claim. is interpreted as A charge made against another claim. Examples include: logical negations, counterexamples, attacks on an argument's emphasis, potential dangers an argument might raise, thought experiments, and implemented models. is disputed by Arguments that uphold or defend another claim. Examples include: supporting evidence, further argumentation, thought experiments, extensions or qualifications, and implemented models. is supported by As articulated by Where this phrase appears in a box, it identifies a reform- ulation of another author's argument. The reformulation is different enough from the original author's wording to warrant the use of the tag. This phrase is also used when the original argument is impossible to locate other than in its articulation by a later author (e.g., word of mouth), or to denote a general philosophical position that is given a special articulation by a particular author. Anticipated by Where this phrase appears in a box, it identifies a potential attack on a previous argument that is raised by the author so that it can be disputed. This icon indicates areas of argument that lie on or near the boundaries of the central issue areas mapped on these maps. It marks regions of potential interest for future mapmakers and explorers. Unmapped Territory Additional arguments The Issue Mapping™ series is published by MacroVU Press, a division of MacroVU, Inc. MacroVU is a registered trademark of MacroVU, Inc. Issue Map and Issue Mapping are trademarks of Robert E. Horn. 
The remaining 6 maps in this Issue Mapping™ series can be ordered with MasterCard, VISA, check, or money order. Order from MacroVU, Inc. by phone (206–780–9612), by fax (206–842–0296), or through the mail (Box 366, 321 High School Rd. NE, Bainbridge Island, WA 98110). One of 7 in this Issue Mapping™ series—Get the rest! Novice Expert 81 The knowledge base assumption. Symbolic data can be organized into a knowledge base that represents the entirety of human understanding. However, when AI researchers develop knowledge bases, they generally focus on some particular domain, such as the domain of furniture, animals, restaurants, and so forth. A knowledge base is written in a "representation language" (such as the predicate calculus or LISP). A good knowledge base supports inference, allowing the computer to draw conclusions from available information. For example, a knowledge base representing the domain of furniture should support the inference that a chair is something people sit on. General Structure of an Information Processing System Redrawn from Newell and Simon (1976). Receptors Effectors Processor Memory Action upon Environment Information George Lakoff Rodney Brooks Hubert Dreyfus John McCarthy I believe it's too hot in here. Zenon Pylyshyn A physical symbol system has the necessary and sufficient means for general intelligent action. By "necessary" we mean that any system that exhibits general intelligence will prove upon analysis to be a physical symbol system. By "sufficient" we mean that any physical symbol system of sufficient size can be organized further to exhibit general intelligence (p. 16). Allen Newell Herbert Simon Immanuel Kant John von Neumann Tree Chair Table Seeing Hearing Thinking 99 100 Tree Table Chair D O G D O G DOG D O G is supported by Wow! I'm not the same person I was yesterday! I've made a creative discovery about myself. I'm in love! That changes everything! I built this without a database! Postulates of Situated Action 1. "Situated actions ... [are] actions taken in the context of particular concrete circumstances" (p. viii–ix). 2. "All activity, even the most analytic, is fundamentally concrete and embodied" (p. viii). 3. Even "purposeful actions are inevitably situated action" (p. viii). 4. No action is ever "fully anticipated" by plans because the "circumstances of our actions ... are continuously changing around us" (p. ix). 5. All "our actions, while systematic, are never planned in the strong sense" (p. ix). 6. All or most of life is "primarily ad hoc activity (p. ix). From Suchman (1987). Martin Heidegger Prototype effects: Differences among category members such that some members are more central than others. Basic level categorization: Categories that are cognitively basic are "'in the middle' of a general-to-specific hierarchy" (p. 13). Kinesthetic image schemas: Recurrent structures of ordinary bodily experience. Metaphorical concepts: Cross-domain mappings where knowledge from one domain of the conceptual system is projected onto knowledge in another domain. Postulates of Experiential Realism 1. Experiential realism is "experiential" in that it focuses on: actual and potential experiences genetically acquired makeup of the organism the organism's interactions in the social and physical environment (p. xv). 2. Experiential realism is "realist" in postulating that: there is a real world reality places constraints on concepts truth goes beyond mere internal coherence there is stable knowledge of the world (p. xv). 3. 
There is more to thought than just representation. Thought is also: embodied, in that "the structures used to put together our conceptual systems grow out of bodily experience and make sense in terms of it" (p. xiv); imaginative, in that concepts not grounded directly in experience (e.g., metaphorical concepts) employ conceptual structures that go beyond the literal representation of reality. 4. Classical categories are inadequate. They are like containers: their members are either in or out. Cognitive models, on the other hand, obey nonclassical "fuzzy" logics, exhibiting degrees of membership. In Out Central Peripheral 5. It is important to focus on conceptual structures and cognitive models, which involve a variety of phenomena. Adapted from George Lakoff (1987). Lakoff's theory draws on the work of a wide range of thinkers, including Mark Johnson, Eleanor Rosch, Ludwig Wittgenstein, and Lotfi Zadeh. • Metonymic concepts: Taking one aspect or part of something and using it to stand for the thing as a whole or for some other part of it. Central: four-legged chair Peripheral: bean bag The ham sandwich at table 10 ... She ran like the wind ... basic level chair specific striped general furniture What is this? This info-mural is one of seven “argumentation maps” in a series that explores Turing’s question: “Can computers think and/or will they ever be able to?” Argumentation mapping is a method that provides: - a method for portraying major philosophical, political, and pragmatic debates - a summary of an ongoing, major philosophical debate of the 20th century - a new way of doing intellec- tual history. What does it contain? How do I get a printed copy? Altogether the seven maps: - summarize over 800 major moves in the debates threaded into claims, rebuttals, and counterrebuttals - 97-130 arguments and rebuttals per map - 70 issue areas in the 7 maps - 32 sidebars history and further background The argumentation maps: - arrange debate so that the cur- rent stopping point of each debate thread is easily seen - identify original arguments by over 380 protagonists world- wide over 40 years - make the current frontier of debate easily identifiable - provide summaries of eleven major philosophical camps of the protagonists (or schools of thought). You can order artist/researcher signed copies of all seven maps from www.macrovu.com for $500.00 plus shipping and han- dling. 1998
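Returning to the heuristic search assumption (Box 57) and Newell and Simon's heuristic search hypothesis above, the following minimal Python sketch shows a best-first search that generates and progressively modifies symbol structures until it produces a solution structure. The toy puzzle, the two operators, and the distance heuristic are our own illustrative choices, not anything proposed by Newell and Simon; a chess program would use far richer structures and heuristics of the kind described in the heuristic search sidebar.

# A minimal, illustrative best-first ("heuristic") search over symbol structures.
# The puzzle, operators, and heuristic are invented for illustration only.
import heapq

def heuristic_search(start, goal, operators, heuristic, max_nodes=10_000):
    """Expand the most promising state first, as ranked by the heuristic."""
    frontier = [(heuristic(start, goal), start, [start])]
    seen = {start}
    expanded = 0
    while frontier and expanded < max_nodes:
        _, state, path = heapq.heappop(frontier)
        expanded += 1
        if state == goal:
            return path, expanded
        for op in operators:
            nxt = op(state)
            if nxt not in seen:
                seen.add(nxt)
                heapq.heappush(frontier, (heuristic(nxt, goal), nxt, path + [nxt]))
    return None, expanded

# Toy domain: reach a target integer from 1 using the operators "+3" and "*2".
ops = [lambda n: n + 3, lambda n: n * 2]
h = lambda n, goal: abs(goal - n)          # estimate of remaining distance
path, expanded = heuristic_search(1, 53, ops, h)
print(path, "expanded:", expanded)

The heuristic does the work the sidebar describes: it ranks the frontier so that promising symbol structures are expanded first, which is what distinguishes heuristic search from blind trial and error (compare Boxes 59 and 60).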



43 George Lakoff, 1987 Motivational factors cause different rates of learning. Humans learn more efficiently when motivated by supplementary knowledge, for example, by metaphorically motivated knowledge. But for computers the opposite is the case: they work less efficiently when forced to deal with supplementary knowledge. For example, it is easier for us to learn about the flow of electricity given our knowledge of flowing waters, but in a computer such knowledge just adds more complexity for it to deal with. Note: The electricity example is drawn from Gentner and Gentner (1982).

91 George Lakoff, 1987 The predicate calculus cannot capture human reasoning. Many knowledge-based systems encode information with some version of the predicate calculus, which is based on the classical concept of a category. But the classical view of categories has been disproved by empirical evidence. Note: Also, see sidebar, "Postulates of Experiential Realism," on this map.

com • mon • sense knowl • edge: Our everyday understanding of the world. Generally, such knowledge consists in pretheoretical information that seems obvious when explicitly stated: for example, our knowledge that tables generally have 4 legs and are something people put things on, or that people generally bring gifts to birthday parties. It has been said that commonsense knowledge consists not of the information contained in an encyclopedia, but rather in all the information necessary to read and understand an encyclopedia in the first place.


85 Francisco Varela, Evan Thompson, and Eleanor Rosch, 1991 Independent domains can't be aggregated to model common sense. Current models from the representationalist perspective attempt to reconstruct common sense from the combination of a variety of small domains, but the world isn't composed of separate discrete domains of knowledge that can be represented in isolation from each other. To provide an adequate theory of common sense, we must provide an account of context-dependent know-how.

122 William Clancey, 1993 Manipulation of symbols doesn't encompass all of human thought. Every interaction with the environment is an instance of learning. We do not experience the world in terms of preestablished categories; we create categories as we go along through a dialectical process of coadaptation between cognitive and perceptual systems. Thus, representations do not have a formal structure independent of their application.

103 The representationalist assumption. Symbol structures are internal representations of external reality. They are made up of symbols, and are operated on by rules, search, and other psychological processes. Symbolic representations have a constituent structure, in that the meaning of a given representation is a function of the meaning of its constituent parts.
Disputed by "The Front-End Assumption Is Dubious," Map 1, Box 74.
Notes:
• Symbol structures in this sense are often referred to as mental representations or classical representations.
• For more on this classical AI theory of representation, see Newell and Simon (1976), Fodor (1975), and Pylyshyn (1984).
• Much of the debate between connectionism and classical AI is focused on the issue of mental representation. One of AI's major charges against connectionism is that connectionist networks can't model constituent structure. See the "Can connectionist networks exhibit systematicity?" arguments on Map 5.

The cat is on the mat.

75 John McCarthy, 1990a Lighthill's categories are irrelevant to AI research. The Lighthill Report implies that AI research is only useful insofar as it contributes to industrial applications (category A) and to neuroscience (category C). But AI has goals of its own. AI researchers study structures of information and problem solving independently of how these structures are realized in humans and animals. Note: An early variant of this argument was made by Professor D. Michie, in an appendix to the Lighthill Report.

80 John McCarthy, 1977 Formalized non-monotonic logic. Formalized non-monotonic logic is a development of formal logic that allows for the introduction of new axioms to invalidate old theorems. In this way non-monotonic logics account for the human ability to revise assumptions in light of new observations. Note: This definition is adapted from McDermott and Doyle (1980).
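As a rough illustration of the behavior Box 80 describes (new axioms invalidating old theorems), the following toy Python sketch applies one default rule and withdraws its conclusion when a defeating fact is added. The predicates and the single default are invented for this example; this is not McCarthy's circumscription or any other published non-monotonic formalism.

# Toy illustration of non-monotonic inference: a conclusion licensed by a
# default rule is withdrawn when a new fact defeats it.

def conclusions(facts):
    """Default: birds fly, unless they are known to be penguins."""
    concluded = set(facts)
    if "bird(tweety)" in facts and "penguin(tweety)" not in facts:
        concluded.add("flies(tweety)")        # default conclusion
    return concluded

print(conclusions({"bird(tweety)"}))
# contains 'flies(tweety)'

print(conclusions({"bird(tweety)", "penguin(tweety)"}))
# adding an axiom removes a theorem: 'flies(tweety)' no longer follows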

is disputed by

is disputed by

is disputed by

77 John McCarthy, 1996 What is the easiest thing computers can't do? Dreyfus makes some vague arguments about logic-based AI, but he never issues a precise, testable challenge.

John McCarthy

is supported by

is supported by

is supported by

100 Hubert Dreyfus, 1992 Madeleine has bodily and imaginative skills. Although Madeleine is blind and uses a wheelchair, she has a body with an inside and an outside and can be moved around in the world. She can also communicate with others and imagine how they encounter the world. The claim that Madeleine acquired common sense solely from books ignores these bodily and imaginative factors.

101 Harry Collins, 1996 If Madeleine can learn common sense, then so can a computer. If someone with as nonstandard a body as Madeleine can acquire commonsense social knowledge, then a computer with its own nonstandard body—a fixed metal box—could also acquire commonsense knowledge. If we can figure out what process Madeleine went through to become a socialized human, then we might apply that process to a computer as well. Note: For more on Collins's views about socialization, see the "Can computers reason scientifically?" arguments on Map 1.

73 AI programs are brittle. Because symbolic AI programs use rigid rules and data structures, they cannot adapt fluidly to changing environments and ambiguous circumstances. Symbol structures are brittle—they break apart under the pressure of a novel or ambiguous situation. Note: Versions of this claim are widely discussed in the literature and on these maps. For example, brittleness is discussed by Hofstadter (see "The Front-End Assumption is Dubious," Map 1, Box 74), Brooks (see sidebar, "Postulates of Subsumption Architecture," on this map), Dreyfus (see sidebar, "Postulates of Dreideggereanism," on this map), and the connectionists (see Map 5). It is also raised in the context of fuzzy logic.

99 Doug Lenat, 1992, as articulated by Hubert Dreyfus, 1992 Madeleine reasons without a body. Madeleine, a patient discussed by Oliver Sacks, used a wheelchair, was blind, and was unable to read Braille. In effect, she lacked a body. Yet she still managed to acquire commonsense knowledge from books that were read to her. Her experience shows that having a body is not essential to human reasoning.

The girl ran down the street.

26 Joseph Rychlak, 1991 Symbol systems can't exhibit agency. Symbol processors can never have agency. Agency requires teleology, dialectical reasoning, and free will. Computers lack these traits because they don't understand meanings, the relations between them, and their relation to the world. Note: For similar arguments, see the "Can computers have free will?" arguments on Map 1.

is supported by

is disputed by

51 Ludwig Wittgenstein, 1953 The infinite regress of rules. Assuming that all nonarbitrary human behavior is governed by rules, then rules must be specified in order to apply the original rules, and further rules must be specified for those rules, ad infinitum.

47 Anticipated by Alan Turing, 1950 Impossible to write every rule. It is impossible to provide rules for every eventuality that a computer might face.

Rule requires Rule requires ...

is disputed by

is disputed by

is disputed by

is supported by

is supported by

is supported by

is disputed by

is disputed by

is disputed by

56 Noam Chomsky, 1965, 1980 Grammars are rule-based systems. The grammar of a language is a system of rules for the production of sentences. These rules are part of the unconscious "deep structure" of language.
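A miniature example of the kind of rule system Box 56 has in mind: a phrase-structure grammar whose explicit rewrite rules produce sentences. The particular productions and vocabulary below are invented for illustration and are not Chomsky's.

# A miniature phrase-structure grammar: explicit rewrite rules that produce sentences.
import random

GRAMMAR = {
    "S":   [["NP", "VP"]],
    "NP":  [["Det", "N"]],
    "VP":  [["V", "NP"]],
    "Det": [["the"], ["a"]],
    "N":   [["cat"], ["mat"], ["carpenter"]],
    "V":   [["sees"], ["hammers"]],
}

def generate(symbol="S"):
    """Rewrite a symbol with a randomly chosen production until only words remain."""
    if symbol not in GRAMMAR:
        return [symbol]                       # terminal word
    production = random.choice(GRAMMAR[symbol])
    return [word for part in production for word in generate(part)]

print(" ".join(generate()))   # e.g. "the cat sees a mat"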

55 Immanuel Kant, 1781 All concepts are rules. Concepts are rules for combining the elements of perception into objective representations. This "synthesis" of experience presupposes rules of consciousness as well as a rule-governed world.

54 Graham Button, Jeff Coulter, John R. E. Lee, and Wes Sharrock, 1995 AI rules cannot explain ordinary language. The project of accounting for ordinary human language in terms of explicit AI rules runs into the following problems.
• Our linguistic practices are too open-ended to be captured by a set of explicit rules.
• Human language is essentially embedded in a context of use, and it continually adapts to this context of use.
• Ordinary language gets its meaning from practical involvement in "language games," not from its parsable grammatical structures.

52 David Rumelhart, James McClelland, and FARG, 1986 Explicit rules are unnecessary. Connectionist networks exhibit lawful behavior without following explicit rules. Regularities emerge from the interactions of low-level processing units, rather than from the application of high-level rules. Although it may be possible to characterize a network's behavior according to high-level rules, none are involved in its underlying mechanisms.
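To illustrate the contrast Box 52 draws, here is a minimal sketch (our own, not from Rumelhart and McClelland) of a single threshold unit trained on the AND function. After training, the unit behaves lawfully, yet no explicit if-then rule is stored or consulted anywhere; the regularity is carried by a few learned weights.

# A minimal perceptron trained on AND. The only stored "knowledge" is numbers.

data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w = [0.0, 0.0]
b = 0.0
rate = 0.1

def out(x):
    return 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0

for _ in range(25):                      # a few passes over the training pairs
    for x, target in data:
        error = target - out(x)
        w[0] += rate * error * x[0]
        w[1] += rate * error * x[1]
        b += rate * error

print(w, b)                              # learned weights, not an if-then rule
print([out(x) for x, _ in data])         # [0, 0, 0, 1]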

58 Combinatorial explosion of search. When the number of paths in a search space grows exponentially, a combinatorial explosion results: the search becomes too long to be carried out, given time and memory constraints. To make such searches more efficient, methods of estimation called "heuristics" have been invented. Note: Combinatorial explosion also affects the problem of representing commonsense knowledge. See "Combinatorial Explosion of Knowledge," Box 92.
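The arithmetic behind Box 58, as a sketch: with branching factor b and search depth d, an exhaustive search examines on the order of b**d paths. The branching factor of roughly 35 used below is a commonly cited estimate for chess, not a figure taken from this map.

# Exponential growth of a search space with branching factor b and depth d.

def positions(branching_factor, depth):
    return branching_factor ** depth

for depth in (2, 4, 6, 8):
    print(depth, f"{positions(35, depth):,}")
# 2 1,225
# 4 1,500,625
# 6 1,838,265,625
# 8 2,251,875,390,625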

59 Hubert Dreyfus, 1972 Trial and error is different from essential discrimination. Humans are able to intuitively grasp what is essential or inessential about a problem. Symbol systems lack the capacity for "essential discrimination," and proceed blindly by brute-force trial and error. Adding heuristic rules to the system is only a stopgap measure. Humans recognize what is necessary automatically, by zeroing in on the essential nature of a problem. Note: Dreyfus derives his notion of essential discrimination from the work of Gestalt psychologist Max Wertheimer.

is disputed by

is supported by

Dreideggereanism is Hubert Dreyfus's application of Heideggerean phenomenology to issues in AI and philosophy of mind.

1. Our basic way of being in the world is coping with equipment (rather than relating to the world by way of mental representations).

2. Coping skills can't be formalized.

3. Coping takes place against a background of general familiarity.

4. Familiarity is a kind of coping, but it is not directed at any particular task. Heidegger calls this background familiarity an understanding of being.

5. On the basis of familiarity, human beings are able to determine what is relevant to what, what to pay attention to, and what to do in any given situation.

6. Expertise consists of responding to a situation similar to one that has occurred in the past in a way similar to a response that has worked in the past.

7. Similarity is basic and cannot be analyzed in terms of shared features.

8. Expertise is arrived at in 5 stages, beginning with rule-like responses to specific features and ending with responses to whole situations.

9. Situations cannot be specified in terms of features.

Adapted from Dreyfus (1972, 1991) and Dreyfus and Dreyfus (1986).

Postulates of Dreideggereanism

is disputed by

49 Hubert Dreyfus, 1972 Humans behave in an orderly manner without recourse to rules. Human activity may be described by rules, but these rules are not necessarily followed in producing the activity. For example, if I wave my hand in the air, touch something accidentally, and move my hand back to that spot, I am performing a complex series of movements which can be described geometrically, but the only principle I follow is, "Do that again." Similarly, the planets are not solving differential equations as they revolve around the sun, even if their movements can be described by differential equations.

What do I need rules for?

is disputed by

116 James Greeno and Joyce Moore, 1993 Affordances cannot be redefined as symbols. Affordances aren't like symbols. They don't mediate between the organism and its environment. Affordances are directly "picked up" by the organism. So it is inappropriate to redefine affordances as symbols.

50 Hubert Dreyfus, 1972 Programmed behavior is either strictly rule-like or arbitrary. In confronting a new usage of language, machines face a dilemma.
Either: The machine must treat the new usage of language as a case that falls under existing rules, in which case rules covering all cases must be built in beforehand (see "The Infinite Regress of Rules," Box 51).
Or: The machine must take a "blind stab" at interpretation and then update its rule base, in which case the machine is behaving in an arbitrary, and hence nonhuman, fashion.
In either case: The machine is not like a human. A native speaker, by contrast, is embedded in a context of human life, which allows him or her to make sense of utterances in a non-rule-like yet nonarbitrary way.

41 Hubert Dreyfus, 1972 The context antinomy. For a machine to understand sentences in a natural language, it must place those sentences in a context. However, because machines use explicit bits of data, they run up against an antinomy.
Either: There is an ultimate context that requires no interpretation, in which case we are forced to postulate a set of facts that have fixed relevance, regardless of the situation. But no such set of facts exists.
Or: There is a broader context of facts that determines which facts are relevant to a given sentence, in which case that context must itself be interpreted, so that we face an infinite regress of broader and broader contexts.
In either case: Explicit data cannot account for the understanding of natural language. Humans avoid this antinomy because they recognize the present situation as a continuation of past situations, and on that basis determine what is relevant to understanding a sentence.

87 Douglas Lenat and Edward A. Feigenbaum, 1991 Computers will be able to demonstrate common sense with a large enough database. A large database containing more than 10 million statements of facts about everyday life, history, physics, and so forth will be able to exhibit common sense.

"We've got to 'bite the bullet' and enter in lots of information."

88 Hubert Dreyfus, 1992 CYC will not be able to demonstrate common sense. CYC is inconsistent with the phenomenology of skilled coping. The project of encoding human knowledge in a vast database is beset by the following problems.
• Human beings cope successfully without consulting facts in a knowledge base.
• Skills and know-how resist representation as propositional knowing-that.
• The more human beings know, the more quickly they bring the relevant knowledge to bear in a situation. In a computer, the reverse is true.

83 Søren Kierkegaard, 1844, as articulated by Hubert Dreyfus, 1972 The leap. There are times when a person makes a "leap" to a new sphere of existence, which infuses his or her being with a new order of significance. Such leaps are so radical that afterwards we cannot imagine how life could have ever been otherwise.

Unmapped Territory: Additional non-monotonic arguments

Unmapped Territory: Additional frame problem arguments

Unmapped Territory: Additional Chomsky arguments

is supported by

107 Leopold Lowenheim, 1915; Thoralf Skolem, 1922 The Lowenheim-Skolem theorem. If a countable collection of sentences in a first-order language describes a model (that is, some state of affairs in the world), then it describes more than one model. Note: Hilary Putnam's version (1981) of this argument employs a strengthened version of the Lowenheim-Skolem theorem and claims that the theorem shows that any statement in natural language has an unintended interpretation if it has any interpretation at all.
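For reference, the standard textbook form of the result behind Box 107 can be stated as follows (this wording is ours, not the map's); the combined downward and upward forms imply that such a theory cannot pin down a unique model:

\[
\text{If a countable first-order theory } T \text{ has an infinite model, then } T \text{ has a model of every infinite cardinality } \kappa \geq \aleph_0 ,
\]

and models of different cardinalities are non-isomorphic, so T "describes more than one model."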

is disputed by

is disputed by

16 David Rumelhart, James McClelland, and FARG, 1986 Graceful degradation. Because processing in the brain is distributed, its performance diminishes in proportion to the degree of neuronal damage or noisy input. The performance of the brain "gracefully degrades" in problematic circumstances. In von Neumann machines, by contrast, a single disruption or glitch will generally have catastrophic consequences for the system as a whole.

17 Keith Butler, 1993a Classical machines implemented in connectionist networks can gracefully degrade. If a classical symbol system were implemented in a connectionist network, then the symbol structures could have the kind of internal distribution that Chater and Oaksford require. In such a case, the symbol system could exhibit graceful degradation.

15 Nick Chater and Mike Oaksford, 1990 Spatial distribution is inadequate. Fodor and Pylyshyn claim that symbolic representations can be physically distributed in memory. But that kind of distribution does not afford the right kind of damage tolerance. Representations must be internally distributed in a connectionist activation pattern rather than merely spatially distributed in memory. Note: For more on connectionist representations, see the "Can connectionist networks exhibit systematicity?" arguments on Map 5.

"That's the wrong kind of distribution."
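A small sketch (ours, not the authors') of the contrast running through the boxes above: a representation spread over many units loses fidelity gradually as units are damaged, while a symbol stored at a single address is either retrieved intact or lost outright.

# Graceful vs. catastrophic degradation, illustrated with a random pattern
# and a single-entry symbolic memory.
import math, random

random.seed(0)
pattern = [random.gauss(0, 1) for _ in range(200)]   # a distributed representation

def similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))

for damage in (0.0, 0.1, 0.3, 0.5):
    damaged = [0.0 if random.random() < damage else x for x in pattern]
    print(f"{int(damage * 100):>2}% of units damaged -> similarity {similarity(pattern, damaged):.2f}")

# A symbol stored at one "address" has no middle ground:
memory = {"grandmother": "symbol structure"}
del memory["grandmother"]                 # one deletion, total loss
print(memory.get("grandmother", "retrieval fails completely"))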

is disputed by

97 Maurice Merleau-Ponty, 1962 The body is a synergic system. All parts of the body are interrelated in a "synergic system." Hands, feet, arms, and so forth are not juxtaposed in a coordinate space, but are enveloped in a frontier of interrelations: sights can be heard, sounds can be seen, and motor abilities can pass from one limb to another.

35 John Barnden, 1987 Patterns of activity may be "fuzzed" symbols. The "distinctive, stereotypic state of activity" (p. 174) that Skarda and Freeman correlate with the inhalation of a learned odor may correspond to the symbolic output of the olfactory system, which the brain then uses in classical symbol-manipulation style. It is consistent with AI to allow that these symbols "embody a certain amount of 'fuzz'" (p. 174). That is, the states of activity produced by the olfactory system may correspond to rough approximations of classical discrete symbols.

36 Christine A. Skarda and Walter J. Freeman, 1987 Fuzzed symbols are not classical symbols. The patterns of activity that Barnden correlates with symbols are not like classical symbols, which can be composed and manipulated by well-understood logical operations. Dynamic patterns of neural activity are context dependent and are only roughly correlated with events in the world. Note: For discussion of nonclassical symbolic representations, see the "Can connectionist networks exhibit systematicity?" arguments on Map 5.

37 Hubert Dreyfus, 1972 Explicit values cannot organize a field of experience. Human interest organizes a field of experience that cannot be captured by explicit goals and values.
• Specific goals must be checked at preset intervals, but human concerns pervade experience.
• When concerns are made explicit they lose their pervasive character.
• Human needs only become explicit after they have been fulfilled.

34 Christine A. Skarda and Walter J. Freeman, 1987 Nonsymbolic explanations of smelling. Rabbits discriminate odors by a process of chaotic self-organizing activity at the level of the neural tissue. Symbols and symbol structures don't explain this process as well as the mathematics of nonlinear dynamics and chaos does. Note: Skarda and Freeman use similar reasoning to dispute "The Rule-Following Assumption" (Box 46) (because neurons don't follow rules) and the central control aspect of "The Brain Has a von Neumann Architecture" (Box 12) (because neural activity is self-organizing). They also argue against representationalism in general, with its assumption of the existence of plans, goals, scripts, and so forth.

33 As articulated by Hubert Dreyfus, 1972 Reductionistic science paradigm. To model and understand the world, divide phenomena into simple elements and relations. Scientists, from Galileo onwards, have analyzed the world using this reductionist program and have made significant progress.

38 John Dewey, 1922 Goals pervade activity. Humans experience goals as pervasive elements of present activity, rather than as fixed ends to be worked toward.

The idea that the world can be understood as composed of discrete elements has a long history in the philosophical tradition.

Plato, for example, believed that all true knowledge (knowledge of piety, justice, the good, etc.) must be statable in explicit definitions that would act like rules telling us how to behave. René Descartes proposed that certainty in knowledge could only be obtained by relying on what he called "clear and distinct ideas," that is, ideas that are clear in themselves and distinct from other ideas.

Gottfried Leibniz, one of the originators of the idea that a machine could think, also introduced the notion of a monad, which, as a basic component of reality, "is nothing but a simple substance that enters into composites—simple, that is, without parts" (Leibniz, 1714, p. 1).

The British Empiricists (notably, John Locke, George Berkeley, and David Hume) held that all knowledge comes to us through experience in the form of discrete "simple ideas," which combine to form "complex ideas." The notion that ideas are discrete continued in the work of Hume, who claimed that "every distinct perception which enters into the composition of the mind, is a distinct existence, and is different, distinguishable, and separable from every other perception, either contemporary or successive" (1739, p. 259).

In the 19th century, Gottlob Frege invented the first complete formalization of predicate logic. Frege regarded thoughts as discrete entities composed of concepts and relations between them. In this century, Bertrand Russell founded logical atomism, in which the world is understood in terms of atomic propositions that are either true or false. Ludwig Wittgenstein articulated his early version of logical atomism as follows. "1. The world is all that is the case. 1.1. The world is the totality of facts, not of things. 1.11. The world is determined by the facts, and by their being all the facts" (1922, p. 5).

Much of this history was first recounted (in the context of artificial intelligence) by Dreyfus (1972).

History of the Symbolic Data Assumption

39 Martin Heidegger, 1927 Human beings are called to take a stand on who they are. Human beings take a stand on who they are by living "for the sake of" being teachers, doctors, and so forth. Such "for-the-sake-of-whiches" structure human activity in a pervasive and nondeterminate way.

32 The symbolic data assumption. The elements of thinking are represented in discrete symbolic form. Beliefs, desires, goals, values, plans, and so forth are represented by such data and by their combination into higher-order symbol structures like knowledge bases.

89 Doug Lenat and R. V. Guha, 1990 CYC. CYC is a massive database containing millions of statements and a complex inference engine that represents commonsense knowledge well enough to intelligently process data.

Implemented Model
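To illustrate the sort of inference a knowledge base is supposed to support (Boxes 81 and 89), here is a toy Python sketch. The three facts and the single rule are invented for this example; CYC's actual representation language and inference engine are far richer.

# A toy knowledge base with one forward-chaining rule: from "X isa Y" and
# "Y used_for Z", conclude "X used_for Z" (e.g., a chair is for sitting on).

facts = {
    ("isa", "chair", "furniture"),
    ("isa", "bean_bag", "furniture"),
    ("used_for", "furniture", "sitting_on"),
}

def infer(facts):
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for (r1, x, y) in list(derived):
            for (r2, y2, z) in list(derived):
                if r1 == "isa" and r2 == "used_for" and y == y2:
                    new = ("used_for", x, z)
                    if new not in derived:
                        derived.add(new)
                        changed = True
    return derived

print(("used_for", "chair", "sitting_on") in infer(facts))   # True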

92 Hubert Dreyfus, 1972; James Lighthill, 1973; and others Combinatorial explosion of knowledge. Representing all of the information relevant to an open-ended domain, or to human commonsense understanding in general, is an impossible task, because it results in a combinatorial explosion of relevant information. The number of facts that must be encoded to scale up from a series of small, independent domains to the totality of commonsense knowledge is insurmountably large.
Notes:
• Combinatorial explosion also affects the problem of searching databases. See "Combinatorial Explosion of Search," Box 58.
• This problem has been raised in the context of the Turing test (see "Combinatorial Explosion Makes the All-Possible-Conversations Machine Impossible," Map 2, Box 103).

90 James Lighthill, 1973 Past disappointments. The number of facts necessary to capture the entirety of commonsense knowledge is insurmountably large. Machine agents can only cope successfully in limited domains, such as the game of checkers, where a small number of facts exhaustively describe the agent's world. To scale up from such artificial worlds to the real world simply by adding more and more facts is not possible, because this leads to a combinatorial explosion of the number of ways in which elements can be grouped in a knowledge base. Because of these problems, AI workers have little more than "past disappointments" to their credit.
Supported by "The Lighthill Report," Box 74.
Note: This argument summarizes a number of Lighthill's opinions, and it represents in early form versions of the problem of toy worlds, brittleness, combinatorial explosion, and commonsense knowledge, all of which are well-known problems today.

1. A subsumption architecture machine (SAM) has sensors and actuators for interaction with the real world.

2. A SAM is comprised of "layers," that is, "activity-producing subsystems" (p. 146). These layers interact with each other and with the world.

3. Each layer is a complete functional unit; that is, it functions autonomously of other layers, without depending on any central control unit.

4. Layers can be added gradually to SAMs, which can thus develop incrementally. "We must incrementally build up the capabilities of intelligent systems, having complete systems at each step of the way and thus automatically ensure that the pieces and their interfaces are valid" (p. 140).

5. Each layer enables a specific behavior (e.g., obstacle avoidance, exploration, aluminum-can retrieval, etc.).

6. Any given layer can "subsume" the action of attached layers in order to further its own goals, without completely wresting control from the attached layers.

7. A SAM interacts with the "real world," that is, the dynamic world that humans act in; it does not act in a refined, abstracted, or limited "toy world." "At each step we should build complete intelligent systems that we let loose in the real world with real sensing and real action. Anything less provides a candidate with which we can delude ourselves" (p. 140).

From Brooks (1991). "Subsumption architecture machine" (SAM) is our own term.

Postulates of Subsumption Architecture
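The layered-control idea in the postulates above can be sketched in a few lines of code. The sketch below is only an illustration under assumed behaviors and names (a "wander" layer, an "avoid" layer, a made-up sensor reading); Brooks's actual robots implement layers as networks of augmented finite-state machines, not Python functions.

```python
# A minimal sketch of subsumption-style layered control (illustration only;
# the layer names, sensor format, and commands are assumptions, not Brooks's code).

def wander_layer(sensors):
    # Lowest layer: always proposes moving forward.
    return "move-forward"

def avoid_layer(sensors):
    # Higher layer: when an obstacle is near, it subsumes the lower layer's
    # output by substituting its own command; otherwise it stays silent.
    if sensors["obstacle_distance"] < 0.5:
        return "turn-left"
    return None  # no output; the lower layer's command passes through

def control(sensors):
    command = wander_layer(sensors)      # each layer runs on its own
    override = avoid_layer(sensors)      # a higher layer may subsume a lower one
    return override if override is not None else command

print(control({"obstacle_distance": 2.0}))   # move-forward
print(control({"obstacle_distance": 0.2}))   # turn-left
```

Each layer remains a complete behavior on its own, which is the point of postulates 3 and 6: adding the avoidance layer does not require rewriting the wander layer.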

109 Cog Shop, 1998 COG. Led by Rodney Brooks, the Cog Shop has developed a humanoid robot composed of a series of interconnected (but independently functional) systems controlled by a set of parallel processors. COG currently consists of a trunk, head, arms, and a sophisticated visual system. Planned additions include hands, vocalization ability, a vestibular system, and skin. COG is intended to be as fully a part of the real world as possible, both by having a body and by interacting in a real (as opposed to a "toy") environment.

Implemented Model


108 Rodney Brooks, 1991 The world is its own best representation. Modeling intelligence requires that a system be directly interfaced with the world through sensors and actuators. These "layers" of interaction with the environment approximate the sensorimotor dynamics of the human body and allow the world to act as its own representation. This removes the necessity of using the inefficient global representations favored by AI and has the advantages of:
• better response time, because no global representation has to be changed whenever the environment changes;
• increased robustness, because the system is less likely to crash due to some unpredictable change in the world.

105 George Lakoff, 1987 Objectivism is in conflict with empirical studies of natural human categories. The objectivist paradigm—which influences most contemporary cognitive science—rests on a classical theory of categories that is disproved by a wide body of empirical evidence concerning basic-level concepts, kinesthetic image schemas, metaphorical concepts, metonymic models, and others.

"On" is anembodiedconcept.

The cat ison the mat. To see the contradiction, notice the following: Changing the meaning

of the parts of a sentence should change the meaning of the sentenceas a whole (by 2). This implies that the truth value of this sentence willalso change for some possible world (by 1). But, we can construct asentence in which the meaning of its parts changes but its truth valueremains the same in all possible worlds. So, we have a contradiction.Note: In unpacking the claim, Lakoff draws on an argument by Putnam(1981), which extends Lowenheim-Skolem's theorem from first-orderlogic to higher-order logic.

106 George Lakoff, 1987The objectivist account of cognition is inconsistent. The objectivist account makes inconsistent assumptions.1. The meaning of a sentence is a function that assigns a truth value to the sentence for each possible world. (This is a standard definition of meaning in objectivist semantics.)

2. The meaning of the parts cannot be changed without changing the meaning of the whole. (This is a requirement of any theory of meaning.)


104 As articulated by George Lakoff, 1987 The objectivist account of cognition. Symbol structures are meaningful by virtue of their correspondence with entities and categories in the world. They are mirrors of nature, which "re-present" or "make present again" what exists in external reality. Without their connection to the world, symbol structures are meaningless.

115 Alonso Vera and Herbert Simon, 1993a Affordances are just symbolic representations. Affordances are nothing but internal representations whose symbolic character is concealed from consciousness. Acquiring affordances is a matter of encoding sensory stimuli as symbolic representations.

121 Alonso Vera and Herbert Simon, 1993b That the symbol systems approach is a worldview is not crucial. We do present the symbol systems approach as a worldview. But our criticisms of situated action are specific. We address several principal theses propounded by the situated action school.

123 Alonso Vera and Herbert Simon, 1993c Clancey's account of symbols is too limited. Clancey errs in his criticism of the symbol system hypothesis and in his claims for situated action.
• The symbol systems hypothesis does not limit itself to linguistic symbols: anything is a symbol that is both patterned and denotative.
• Clancey's ideas about the creation of new categories are too radical and are not supported by the research. Some relatively constant categories are necessary and their existence has been demonstrated in animals.
• The situated action paradigm is untestable, whereas there is much experimental evidence in support of the symbol systems hypothesis.

120 Phillip E. Agre, 1993 The symbol systems approach is a worldview, not a theory. What is central to the symbol systems approach is a set of metaphors, such as "inside" and "outside." These metaphors constitute a worldview rather than a theory, because they provide a framework for explaining the facts but don't provide explanations themselves. This worldview can't explain interactional entities that don't, strictly speaking, belong to the external world or to the internal mind.

112 J. Y. Lettvin, Humberto Maturana, Warren McCulloch, and Walter Pitts, 1959 Frog retinas provide information without processing representations. Instead of sending a detailed representation of the environment to its brain, the frog retina sends only the information that is most relevant to the frog's needs.

111 Humberto Maturana, 1970, as articulated by Terry Winograd and Fernando Flores, 1986 Biological action is situated. Cognition and language need to be interpreted biologically. The notion of representation is inadequate for this task. Cognition and thinking are best described as a history of "structural coupling" between an organism and its environmental niche.

113 The situated action paradigm. This approach to artificial intelligence emphasizes the situated and embodied character of cognition, and the role of affordances in the acquisition of perceptual information and the control of behavior.

117 Alonso Vera and Herbert Simon, 1993c Affordances need to be redefined. Explaining the mechanisms of perception and cognition requires understanding how information is encoded in the brain. Because affordances are not representations, they provide no help in understanding that coding process. On the other hand, those models that have been helpful in explaining the coding process have been symbolic. So it is best to redefine affordances in terms of symbols.

is supported by

119 Alonso Vera and Herbert Simon, 1993a The mechanisms described by situated action are symbol systems. All of the mechanisms suggested by the situated action program can be interpreted as physical symbol systems. The claims of situated action have been achieved in existing symbolic models.

114 James Gibson, 1977, 1979 Affordances are features of the environment. Affordances are directly perceived higher-order properties of things in the environment. For example, water affords swimming and floating; hills afford climbing; plains afford walking and running; foods afford eating; and so on. Affordances aren't representations used as guides. They are environmental factors that an organism directly "picks up" in order to maintain itself.

[Illustration labels: sit-on-able, drinkable, climb-on-able]

Unmapped Territory

Additional Gibson AI arguments

102 Hubert Dreyfus, 1996 Collins's understanding of Madeleine is science fiction. Madeleine was nothing like an immobile box. In crawling, kicking, balancing, overcoming obstacles, finding optimal distances for listening, and so forth, the baby Madeleine had enough of a body structure to allow her to be socialized into our human world.

67 Feng-hsiung Hsu, Thomas Anantharaman, Murray Campbell, and Andreas Nowatzyk, 1990; IBM, 1998 Deep Blue. A chess-playing computer that beat world champion Garry Kasparov in 2 out of 6 games at a match in 1996, Deep Blue is an IBM parallel computer running 256 chess processors, which allow it to consider 50–100 billion moves in the 3 minutes allotted to a player. In addition to deep search of possible moves, Deep Blue consults a database of opening games and endgames played by chess masters over the last 100 years. Deep Blue's success is based on engineering rather than on emulation. Even if it doesn't think like a human, it is relevant to business applications that require rapid management of large amounts of data. Deep Blue is the descendant of earlier work by graduate students at Carnegie Mellon University.

Unmapped Territory

Additional chess and other game-playing arguments

Implemented Model

61 Computers play expert-level chess using heuristic search. Chess programs play expert-level chess using advanced search techniques and heuristics.

64 Hubert Dreyfus, 1972 Heuristic search is inconsistent with human phenomenology. Human experts play chess by "zeroing in" on relevant moves in fringe consciousness, rather than by iterating through a list of possibilities.
Note: Dreyfus uses similar considerations to dispute machine translation, natural language understanding, and pattern recognition, which also require fringe consciousness and zeroing in.

66 William James, 1890, as articulated by Hubert Dreyfus, 1972 Humans zero in on information in fringe consciousness. The fringes of consciousness provide marginal awareness of "background" information. For example, the experience of the front of a house is fringed by an awareness of the back of the house. In chess, "cues from all over the board, while remaining on the fringes of consciousness, draw attention to certain sectors by making them appear promising, dangerous, or simply worth looking into" (p. 104).
Note: In applying James's theory of the fringe to chess, Dreyfus is drawing on the work of Michael Polanyi (1962).

62 John Strom and Lindley Darden, 1996 Deep heuristic search has been used to achieve expert performance. Deep Thought (the precursor to Deep Blue) plays chess at the grandmaster level by considering more moves than any previous system. Its success demonstrates the effectiveness of deep heuristic search, and shows that AI is not, as Dreyfus thinks it is, a degenerating research program. In fact, given the success of AI, it may be Dreyfus's criticisms that are degenerating.

65 Hubert Dreyfus, 1972 Humans see the chess board as a Gestalt whole. Humans see a chess board as an organized pattern or Gestalt. Any move is part of the unfolding Gestalt pattern. Past experiences and the history of the current game work together to build up an integrated awareness of "lines of force, the loci of strength and weaknesses, as well as specific positions" (p. 105). In this way, chess masters zero in on unprotected pieces and promising areas for attack or defense.
Note: Also, see the "Can computers recognize Gestalts?" arguments on Map 5.

76 Hubert Dreyfus, 1972, 1979, 1992 The critique of artificial reason. Artificial intelligence is the culmination of a flawed tradition in philosophy that tries to explain human reason in terms of explicit rules, symbols, and calculating procedures. The assumptions of AI are implicitly critiqued by a range of 20th-century thinkers (Maurice Merleau-Ponty, Martin Heidegger, Michael Polanyi, Thomas Kuhn, etc.), who show how the nature of human activity and being differ from that of calculating machines like computers.
Note: This is a general statement of Dreyfus's critique. Specific instances of his critique are spread throughout this map.

What's the least complex intellectual behavior that you think humans can do and computers can't? (p. 149)

74 James Lighthill, 1973 The Lighthill Report. There are 3 kinds of work in AI.

A: Advanced automation. The use of computers to replace human beings in various military, industrial, and scientific tasks.

B: Bridge activity. The use of computers to study phenomena that fall between categories A and C—in particular, the building of robots as a way of studying general intelligence.

C: Computer-based study of the central nervous system. The use of computers in the study of the brain, as a way of testing hypotheses about the cerebellum, visual cortex, and so forth.

Work in categories A and C is legitimate. But work in category B is plagued by a variety of problems (including combinatorial explosion, past failures, limited worlds, and poor performance) and therefore seems unlikely to succeed.
Notes:
• Also, see "Past Disappointments," Box 90.
• The Lighthill Report was commissioned by the Science Research Council in Britain to help the council make funding decisions for work in AI. Lighthill was chosen as a member of the scientific community who could make an unbiased assessment of the field. His report is said to have had devastating effects on AI funding in Britain during the '70s.

45 Joseph Rychlak, 1991 Learning is a process of interpretation. Most AI theorists model learning on a Lockean paradigm that takes repetition and contiguity of perceptions as the primary way in which new concepts are learned. For example, we learn how to spell by repeatedly seeing how words are spelled and which letters are contiguous to each other. But such a model does not pay sufficient attention to the role of meaning in learning. Learning occurs when a mind that reasons dialectically and predicationally interprets what it perceives in terms of the meaning of what it perceives. Because computers don't work with meanings, interpretation and the relevant kind of learning are impossible for them.
Note: Rychlak's further arguments about artificial intelligence can be found in the "Can physical symbol systems think dialectically?" arguments on this map.

Unmapped Territory

Other approaches to AI learning

1. The purpose of philosophy is the resolution of conceptual confusion through the analysis of ordinary language.

2. Ordinary language is our primary medium of thought.

3. Problems of philosophy arise through the misuse or incomplete understanding of ordinary language.

4. The proper method for analyzing language is through the explication of commonsense everyday use of language.

5. Language can be studied in terms of "language games," in which rules of usage are analyzed as if they were the rules of a game.

6. Language can only be understood semantically from within ordinary language and language games. This means there is no formal metalanguage relevant to philosophy.

7. The rules of ordinary language are adapted to the various practical purposes of human conduct, and as such they must be open to examination and modification.

8. Rules for the use of language are like rules for playing games; they are not like the parsing rules posited by AI researchers. Ordinary language rules provide guides and signposts for the use of language rather than a description of how language is generated.

Proponents include Ludwig Wittgenstein; Graham Button, Jeff Coulter, John R. E. Lee, and Wes Sharrock; Charles Karelis (Map 2); Hans Obermeier (Map 4); and Gilbert Ryle (Maps 2 and 6). Other notable proponents include John L. Austin, John Wisdom, and Stanley Cavell.

Postulates of Ordinary Language

53 Jerry Fodor and Zenon Pylyshyn, 1988 Classicists are not committed to explicit rules. The possibility of implicit rules does not argue against the classical symbolic framework, because there is a wide body of work within the classicist camp that shows how implicit rules can be modeled. In fact, most classicists agree that at least some rules must be implicit. However, the possibility of explicit rule-based systems does argue against connectionism, because connectionist networks cannot encode such rules.
Note: For an example of an argument (cited by Fodor and Pylyshyn) that says that explicit rules can't be encoded by connectionist networks, see "The Past-Tense Model Does Not Argue Against Rule-Based Explanation," Map 5, Box 24.

48 Alan Turing, 1950 Being regulated by natural laws implies being a rule-governed machine. The rules objection confuses rules of conduct with laws of behavior. It is true that we cannot formulate a complete set of rules of conduct for human performance. But human behavior is still governed by natural laws. And because these laws of behavior can, in principle, be given a mechanical description, it is also possible to build a machine to fit this description. Thus, humans are a kind of machine governed by rules.

laws of be • hav • ior: "Laws of nature as applied to man's body, such as 'If you pinch him he will squeak'" (Turing, 1950, p. 452).

rules of con • duct: "Precepts, such as 'Stop if you see red lights,' on which one can act, and of which one can be conscious" (Turing, 1950, p. 452).

is supported by

25 Joseph Rychlak, 1991 Symbol systems cannot think dialectically. Computers can't reason dialectically because they can't synthesize opposing meanings into a new meaning. All they can do is shuffle symbols that are given meaning by a programmer.
Note: Rychlak never claims that machines can't think. He just claims that they can't think dialectically or predicationally. He allows that they have demonstrative reasoning.

[Diagram: an assertion is met by a denial, yielding a new synthesis, which is again met by denial and a further synthesis.]

1. Dialectical reasoning involves oppositional and predicational thinking, and is only possible for teleological beings who can "see" from another's point of view.

2. Oppositional thinking proceeds by assertion, denial, and new synthesis. An assertion is made and is opposed by a denial of that assertion. Out of this opposition, a new synthesis arises and transforms the original assertion by changing its context and/or scope.

3. "Predication involves the cognitive act of affirming, denying, or qualifying broader patterns of meaning in relation to narrower or targeted patterns of meaning" (Rychlak, 1991, p. 7).

4. Teleological beings can engage in various goal-directed activities that place them in a meaningful relation to the world.

5. Computers reason mediationally. In other words, they work with symbols in abstraction from their relation to meanings and the world.

6. Computers reason demonstratively. They can draw conclusions based on valid patterns of inference between symbols, but without working with the meanings of those symbols.

7. Computers cannot reason dialectically because they are nonteleological and nonpredicational. They do not understand meanings or the relations between them.

Postulates of the Dialectical Paradigm

31 John Locke, 1690 Delaying action provides us with free will. Free will stems from the power to refrain from action. Delaying the impulse to act provides room for other actions to be considered. This time in which alternatives are weighed provides the appearance of free will. The choices that are made during the delay need not arise from some metaphysical freedom of the subject; free will is just the consideration of alternatives.

is disputed by

28 Joseph Rychlak, 1991 Negative feedback can't explain teleology. Goal-directedness and feedback activity cannot explain agency. An explanation of agency must address the formation of goals and beliefs and the ability to change those goals and beliefs. Current machines only seek the goals they are programmed to seek.

29 Marvin Minsky, 1986, as articulated by Joseph Rychlak, 1991 Delayed mediational processes in machines can generate agency. By telling ourselves that we have made choices, we make ourselves agents. What we know is that free will occurs when decision-making processes are delayed to allow more alternatives to be tested. Systems capable of that kind of delayed decision making have as much free will as humans do, and thus, any computer system with those processes can be said to have agency.

is disputed by

27 Arturo Rosenblueth, Norbert Wiener, and Julian Bigelow, 1943 Some machines behave teleologically. Machines with servomechanisms are purposive. In fact, any machine that can respond to negative feedback for guidance behaves teleologically. What separates such machines from humans is that humans have an added ability to make higher-order predictions about the behavior of other objects.

24 Nick Chater and Mike Oaksford, 1990 The 100-step constraint is relevant to the cognitive level. It is not true that the 100-step constraint is an irrelevant implementation detail. An algorithm that classical systems execute in millions of time-steps may not be executable in 100 time-steps. The 100-step constraint poses a nontrivial challenge to classical AI, because it "severely [limits] the class of cognitively plausible algorithms" (p. 95).

18 David Rumelhart, James McClelland, and FARG, 1986 The brain processes information in parallel. Von Neumann machines process information sequentially, one bit at a time. The brain receives and manipulates massive amounts of information at the same time, in parallel.

is disputed by

20 Herbert Simon, 1995 Thought is serial despite being implemented in a parallel architecture. At the symbolic level, human thinking is a serial process, despite the parallelism of its neural implementation. For example, to multiply 5 by 12 requires taking several serial steps, in which 5 is multiplied by 2, then by 10, and then the results are added together.

19 Jerry Fodor and Zenon Pylyshyn, 1988 Symbol processing can take place in parallel. It is possible to implement a classical system in a parallel architecture—for example, by executing multiple symbolic processes at the same time. So parallel processing systems like connectionist networks don't have any principled advantage over classical symbol systems.

23 Jerry Fodor and Zenon Pylyshyn, 1988 The 100-step constraint is directed at the implementation level. All the 100-step constraint demonstrates is the obvious fact that symbolic thought processes are implemented differently in the brain than they are on a digital computer. The 100-step constraint has to do with implementation details rather than with real cognitive processes.

The 100-step constraint is a mere implementation detail.

22 Jerome Feldman, 1985 The 100-step constraint. Algorithms that model cognitive processes must meet a 100-step constraint imposed by the timescale of the brain, which performs complex tasks in about 100 time-steps. Classical sequential algorithms currently run in millions of time-steps, so it seems unlikely that they can meet this 100-step constraint.
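The arithmetic behind Feldman's constraint can be made explicit. The figures below are representative values commonly cited in this literature (neural events taking a few milliseconds, complex perceptual tasks taking a few hundred milliseconds); they are assumptions for illustration, not numbers taken from this map.

```python
# Rough arithmetic behind the 100-step constraint (representative figures only).
neural_step_ms = 5      # assumed duration of one neural "step" (a few milliseconds)
task_time_ms = 500      # assumed duration of a complex perceptual or cognitive task
serial_steps = task_time_ms / neural_step_ms
print(serial_steps)     # ~100 serial steps available to a cognitively plausible algorithm
```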

21 Geoffrey Hinton, James McClelland, and David Rumelhart, 1986 The brain accesses information by content rather than by memory address. Humans rapidly access memories by way of their contents. For example, memories about the president are accessed by information about the president (his or her name, face, etc.), not by way of an explicit address. Connectionist networks and the brain have "content-addressable" memories of this type.
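The contrast with address-based lookup can be illustrated with a toy retrieval-by-content routine: recall whichever stored record best matches a partial cue. This is only a sketch of the idea of content addressability, not a connectionist network, and the stored records are invented for illustration.

```python
# Toy content-addressable retrieval: recall the stored record that best matches
# a partial cue, rather than looking it up at an explicit memory address.
memories = [
    {"role": "president", "name": "Lincoln", "trait": "bearded"},
    {"role": "scientist", "name": "Curie", "trait": "physicist"},
]

def recall(cue):
    # Score each memory by how many cue features it shares; return the best match.
    overlap = lambda memory: sum(1 for k, v in cue.items() if memory.get(k) == v)
    return max(memories, key=overlap)

print(recall({"role": "president", "trait": "bearded"}))  # retrieves the Lincoln record
```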


is disputed by

14 Jerry Fodor and Zenon Pylyshyn, 1988 Symbol structures can be distributed. A classical symbol processor can be physically distributed in memory, and can thereby exhibit graceful degradation. So distributed systems like connectionist networks don't have any principled advantage over physical symbol systems.

7 David Rumelhart, James McClelland, and FARG, 1986 Neurons receive thousands of times more input than logic gates. Neurons are connected to 1,000–100,000 other neurons. Logic gates are connected to only a handful of other logic gates. The difference indicates that the brain does not use the kind of logical circuitry found in digital computers.

8 Jack Copeland, 1993 Neurons are diversely structured. Computer logic gates consist of variations on a single structure. The brain, by contrast, consists of many different kinds of neurons.

[Illustration: Purkinje cell, pyramidal cell, retinal bipolar cell]

9 John von Neumann, 1958, as articulated by Hubert Dreyfus, 1972 The brain is an analogue device. Even if neuron firings are all-or-none, the message pulses that carry neural information are analogue. They involve complex graded and nonlinear factors. So the brain seems to be an analogue device.
Note: Von Neumann subjects the relationship between brain and computer to extensive analysis, and his classic lectures on the subject are still relevant today.

is supported by

13 David Rumelhart, James McClelland, and FARG, 1986 Processing in the brain is distributed. Processing in the brain is not mediated by some central control. Neural processing is distributed—in other words, many regions contribute to the performance of any particular task.

is disputed by

is disputed by

is disputed by

5 Neurons operate like logic gates. The neurons of the brain are similar to the logic gates of a digital computer. Their all-or-none firing potential gives them a discrete character and allows them to be combined into the AND gates, OR gates, and other logic gates that underlie digital computation.

3 The biological assumption. The brain is the hardware (or "wetware") on which the software of the mind is run. Thinking is a symbolic process that is implemented in the neurons of the brain and that can also be implemented in the circuits of a digital computer.
Note: Also, see the "Is the brain a computer?" arguments on Map 1, the "Is biological naturalism valid?" arguments on Map 4, the "Are connectionist networks like human neural networks?" arguments on Map 5, and sidebar, "Formal Systems: An Overview," on Map 7.

is similar to

57 The heuristic search assumption. Symbolic data is searched using various methods of estimation or "heuristics," which make the search more efficient.

Newell and Simon

Heuristic search hypothesis: The solutions to problems are represented as symbol structures. A physical symbol system exercises its intelligence in problem solving by search—that is, by generating and progressively modifying symbol structures until it produces a solution structure (1976, p. 120).

1. A physical symbol system • is physical (that is, made up of some physical matter) • is a specific kind of system (that is, a set of components functioning through time in some definable manner) that manipulates instances of symbols.

2. Symbols can be thought of as elements that are connected and governed by a set of relations called a symbol structure. The physical instances of the elements (or tokens) are manipulated in the system.

3. An information process is any process that has symbol structures for at least some of its inputs or outputs.

4. An information processing system is a physical symbol system that consists of information processes.

5. Symbol structures are classified into • data structures • programs.

6. A program is a symbol structure that designates the sequence of information processes (including inputs and outputs) that will be executed by the elementary information processes of the processor.

7. Memory is the component of an information processing system that stores symbol structures.

8. Elementary information processes are transformations that a processor can perform upon symbol structures (e.g., comparing and determining equality, deleting, placing in memory, retrieving from memory, etc.).

9. A processor is a component of an information processing system that consists of: • a fixed set of elementary information processes, • a short-term memory that stores the input and output symbol structures of the elementary information processes, and • an interpreter that determines the sequence of elementary information processes to be executed as a function of the symbol structures in short-term memory.

10. The external environment of the system consists of "readable" stimuli. Reading consists of creating internal symbol structures in memory that designate external stimuli. Writing is the operation of emitting the responses to the external environment that are commanded by the internal symbol structures.

Adapted from Newell and Simon (1972, chap. 2).

Proponents include Jerry Fodor, Allen Newell, Herbert Simon, John McCarthy, Zenon Pylyshyn, early Marvin Minsky, Doug Lenat, Edward Feigenbaum, and Pat Hayes.

Postulates of the Physical Symbol Systems Hypothesis
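As a deliberately tiny illustration of postulates 7 through 9 above, the sketch below models a memory of symbol structures, a few elementary information processes, and an interpreter that sequences them. The names and the miniature program format are assumptions made for illustration; this is not Newell and Simon's formalism.

```python
# Toy "information processing system": a memory of symbol structures, a few
# elementary information processes, and an interpreter that sequences them.
memory = {}

def place(name, structure):            # elementary process: place in memory
    memory[name] = structure

def retrieve(name):                    # elementary process: retrieve from memory
    return memory[name]

def equal(a, b):                       # elementary process: compare and determine equality
    return a == b

def interpret(program):
    # The interpreter determines which elementary process to execute at each step.
    result = None
    for step in program:
        if step[0] == "place":
            place(step[1], step[2])
        elif step[0] == "equal":
            result = equal(retrieve(step[1]), retrieve(step[2]))
    return result

program = [
    ("place", "x", ("CHAIR", "IS-A", "FURNITURE")),
    ("place", "y", ("CHAIR", "IS-A", "FURNITURE")),
    ("equal", "x", "y"),
]
print(interpret(program))  # True: the two stored symbol structures are identical
```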

10 Zenon Pylyshyn, 1974 Analogue systems cannot represent general concepts. Analogue devices only capture particular sensory patterns. They cannot (by themselves) be used to recognize and process universal concepts. For example, an analogue retinal image of a chair is not by itself adequate to represent the universal concept of a chair. The retinal image must be recognized as a typical chair pattern and must be associated with a verbal label by some sort of discrete digital mechanism.
Note: Pylyshyn allows that some analogue computation may be important in practice, and he thinks it is likely that practical AI systems will use hybrid analogue–digital mechanisms. He is arguing against the claim that all mental computation is analogue.

11 Zenon Pylyshyn, 1974 Purely analogue machines lack the flexibility of digital machines. A purely analogue device cannot make contingent if-then branches. That is, analogue devices cannot do "one thing under one set of circumstances and a completely different thing under a discretely different set of circumstances" (p. 68). For that reason, purely analogue machines inherently lack the flexibility of universal digital machines. This limitation can be overcome by adding a discrete threshold element, but that still does not make an analogue device the best way to represent cognitive processes.

12 The brain has a von Neumann architecture. The following features of the von Neumann architecture also characterize processing in the brain.
• Processing is sequential.
• Symbol strings are stored and accessed at specific memory addresses.
• There is a central processing unit that controls processing.
Note: For more on the assumption of a central control, see "Searle Assumes a Central Locus of Control," Map 4, Box 75.

[Diagram: von Neumann architecture. Input and output pass through a central processing unit (processing and control), which exchanges data with memory.]

6 Warren McCulloch and Walter Pitts, 1943 The logical calculus of neural activity. Because of their all-or-none threshold, the activity of neurons can be completely described in terms of logical operations. The firing of a neuron is like the assertion of a proposition, and relations between neural firings are like logical relations between propositions.
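McCulloch and Pitts's point can be illustrated with the standard threshold unit: a unit fires when the number of active excitatory inputs reaches its threshold, which is already enough to realize AND and OR. The weights and thresholds below are the usual textbook choices, offered only as a sketch.

```python
# McCulloch-Pitts style threshold units realizing logic gates (textbook sketch).
def neuron(inputs, threshold):
    # Fires (1) if and only if enough excitatory inputs are active.
    return 1 if sum(inputs) >= threshold else 0

def AND(a, b):
    return neuron([a, b], threshold=2)   # both inputs must fire

def OR(a, b):
    return neuron([a, b], threshold=1)   # either input suffices

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "AND:", AND(a, b), "OR:", OR(a, b))
```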


is supported by

60 Zenon Pylyshyn, 1974 The best heuristics aren't just trial and error. Heuristic searches don't necessarily rely on trial and error. Programs can be structured so that a heuristic method moves the search ever closer to a problem solution without the need for redundant backtracking. Such programs approximate human "zeroing in."

63 Hubert Dreyfus, 1996 Brute-force search is not how humans play chess. I never predicted failure for brute-force techniques in chess playing. All I argue is that brute-force techniques such as heuristic search are not psychologically realistic. Strom and Darden blur the distinction between AI as a form of psychology and AI as any sort of technique that uses symbolic representation.

96 Hubert Dreyfus, 1972 The body is essential to human intelligence. Possession of a body is essential to human reasoning, pattern recognition, and interaction. Understanding what a chair is, for example, presupposes knowledge of how the body sits, bends, and fatigues.

93 The disembodied mind assumption. Thinking is an abstract process that does not require the presence of a body.
Notes:
• The notion of embodiment is used by many situated action theorists. See, for example, "COG," Box 109.
• On the notion that the mind is disembodied, see the "Is the brain a computer?" arguments on Map 1 and the "Can functional states generate consciousness?" arguments on Map 6.

does not require

eth • no • meth • od • ol • o • gy: An approach to social science, deriving from Garfinkel (1957), that focuses on everyday cultural practices and activities in the social world. Ethnomethodology provides useful models for how plans are enacted and designed.

125 Martin Heidegger, 1927 Representations are not involved in concernful activity. In ongoing concernful activity no representations are necessary. A carpenter hammers nails without having any explicit representation of the hammer. It is only in cases of "breakdown" that the hammer emerges from the background of equipment and is represented as an object with properties. For example, if the hammer slips from the carpenter's grasp it might then be seen as an object with the property of being too light, too slippery, and so forth. But prior to the breakdown, the carpenter had no representation of the hammer.
Note: See "Computers Never Move Beyond Explicit Rules," Box 44, which clarifies the role representations play in the development of skill.

is supported by

110 Terry Winograd and Fernando Flores, 1986 The representational tradition is flawed. Classical symbolic AI places too much emphasis on the role of representations in human intelligence. Actual human thinking only becomes symbolic and representational when normal modes of cognition break down. Future work in AI should concern itself with interactive design, task-specific projects, and the role of computers as machines for the facilitation of communication, rather than with representational symbol systems.
Note: Winograd and Flores are influenced by Dreyfus's approach. See sidebar, "Postulates of Dreideggereanism," on this map and "The Critique of Artificial Reason," Box 76.

is disputed by

46 The rule-following assumption. Humans, like machines, behave intelligently by following rules, which can, in principle, be spelled out as explicit if-then statements.
Notes:
• Also, see the "Do connectionist networks follow rules?" arguments on Map 5.
• Because heuristic searches are sometimes described in terms of "heuristic rules," the "Does mental processing rely on heuristic search?" arguments on this map are relevant to this region.


is disputed by

is supported by

is supported by

heu • ris • tic search: A search that uses special knowledge about a problem domain to find solutions more efficiently. For example, a search of possible moves in a chess game could be aided by a set of heuristics that tell the computer to avoid useless lines of attack, to maintain center control, and so forth.
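A minimal sketch of what heuristic search amounts to in practice: instead of exploring every possibility, the search always expands whichever candidate a heuristic scores as most promising. The toy problem (reach a target number from 1 by adding 1 or doubling) and its heuristic are invented for illustration; chess programs apply the same idea with evaluation functions over board positions and far deeper game-tree search.

```python
import heapq

# Best-first ("heuristic") search sketch on an invented numeric puzzle.
def successors(n):
    return [n + 1, n * 2]

def heuristic(n, target):
    return abs(target - n)               # estimate of remaining distance to the goal

def best_first_search(start, target):
    frontier = [(heuristic(start, target), start, [start])]
    visited = set()
    while frontier:
        _, n, path = heapq.heappop(frontier)   # expand the most promising state first
        if n == target:
            return path
        if n in visited or n > 2 * target:     # prune repeats and hopeless states
            continue
        visited.add(n)
        for s in successors(n):
            heapq.heappush(frontier, (heuristic(s, target), s, path + [s]))
    return None

print(best_first_search(1, 37))  # e.g. [1, 2, 4, 8, 16, 32, 33, 34, 35, 36, 37]
```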

2 Allen Newell and Herbert Simon, 1976 Physical symbol systems can think. Thinking in a physical symbol system is a formal computational process characterized by
• rule-governed symbol manipulation
• the drawing of inferences from large knowledge bases
• heuristic search of data structures
• operations on representational structures
• planning and goal-directed activity
The symbolic processes that constitute thinking are formal in that they are independent of any particular physical instantiation. In humans, thinking is instantiated in the neurons of the brain. In computers, thinking is realized in silicon circuits.
Notes:
• This is a standard interpretation of the field but it is by no means shared by everyone. This summary is meant to emphasize those aspects of artificial intelligence research that are relevant to this map.
• The physical symbol systems hypothesis and functionalism are closely related. The physical symbol systems hypothesis proposes an architecture for simulating and studying intelligence. Functionalism is a philosophical position that is used to justify this architecture. Functionalism was developed in part as a response to behaviorism (see the "Is the test, behaviorally or operationally construed, a legitimate intelligence test?" arguments on Map 2). For more on functionalism, see the "Can functional states generate consciousness?" arguments on Map 6.

[Diagram: Thinking = rule-governed manipulation of symbolic representational structures. In humans, symbol systems are instantiated in the brain; the same symbol systems can also be instantiated in a computer.]

72 George Lakoff, 1987 Some conceptual organizations can't be translated into a universal conceptual framework. Different conceptual systems have different conceptual organizations, and those differences are cognitively significant. If the different organizations are translated into a universal conceptual framework, significant organizational differences are eliminated. Humans can shift between frameworks without eliminating these differences.
Note: Lakoff cites Mixtec (a native Mexican language), as well as the work of Benjamin Whorf (1956) in making this argument.

[Illustration: a Mixtec speaker says, "Yuu wa hiyaa cu-mesa" (Stone the be-located belly-table).]

79 John McCarthy, 1996 Logic-based AI is making steady progress. Dreyfus ignores slow but definite progress in logic-based AI. For example, formalized non-monotonic logics have been used successfully to address the problem of understanding ambiguity. Dreyfus gives no compelling reason to suppose that such developments will end in failure.

71 Articulated by George Lakoff, 1987 The universal conceptual framework assumption. There is a neutral and completely general conceptual structure in which all knowledge can be represented. For a machine to understand natural language, it must translate sentences into this universal conceptual framework.

70 John Laird, Allen Newell, and Paul Rosenbloom, 1987 SOAR. SOAR is a general intelligence system that learns by "chunking," that is, by collapsing the work of satisfying a subgoal into a single condition-action rule, or "production." It searches its explicit representations heuristically by means-ends analysis using goals and subgoals.

Implemented Model
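The "chunking" named in Box 70 can be caricatured in a few lines: once a subgoal has been worked out deliberately, its result is cached as a single new condition-action rule that fires directly the next time the same situation arises. The sketch below is only that caricature, under invented names and an invented task; it is not SOAR's production system, working memory, or impasse mechanism.

```python
# Toy illustration of chunking: cache the result of subgoal work as a new rule.
chunks = {}   # learned condition -> action rules

def deliberate(state):
    # Stands in for multi-step subgoal processing: here, just pick the largest element.
    action = ("pick", max(state))
    chunks[state] = action            # the chunk collapses the subgoal work into one rule
    return action

def decide(state):
    if state in chunks:               # a learned chunk fires in a single step
        return chunks[state]
    return deliberate(state)          # otherwise fall into a subgoal

s = (3, 1, 2)
print(decide(s))   # first call: subgoal work is done and a chunk is learned
print(decide(s))   # second call: the chunk applies directly
```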

69 John McCarthy, 1979 Thermostats can have beliefs. Beliefs can reasonably be ascribed to an entity when its actions are: consistent, the result of observation, and in accordance with goals. Because the behavior of thermostats meets those criteria, it is reasonable to say they have beliefs.
Note: These are the criteria for belief ascription that McCarthy mentions in connection with the thermostat example. He discusses further conditions elsewhere.

68 Jerry Fodor, 1975 The language of thought. The language of thought (also known as mentalese) is a formal language that mental processes operate on. Like spoken language, the language of thought has a combinatorial syntax and semantics. Just as complex sentences are generated from combinations of words, complex mental representations are generated from combinations of simpler representations. Although no formal theory of the language of thought has yet been fully successful, the search for one is the goal of computational psychology.
Note: For more arguments about this aspect of AI, see "The Representationalist Assumption," Box 103, and the "Can connectionist networks exhibit systematicity?" arguments on Map 5.

Unmapped Territory

Additional language of thought arguments

Start Here

1 Alan Turing, 1950 Yes, machines can (or will be able to) think. A computational system can possess all important elements of human thinking or understanding.

Alan Turing

I believe that at the end of the century ... one will be able to speak of machines thinking without expecting to be contradicted.

82 Hubert Dreyfus, 1972 Creative discoveries radically restructure human knowledge. Large knowledge bases organize and process data in a relatively fixed manner. Human knowledge, by contrast, is subject to radical restructuring on the basis of "creative discoveries," which can alter a person's entire understanding of the world. Such fundamental shifts can take place at personal, conceptual, and cultural levels.
Note: In this context Dreyfus also discusses Thomas Kuhn's notion of paradigm shift.

78 Hubert Dreyfus, 1996 Get an AI system to understand, "Mary saw a dog in the window. She wanted it." When faced with the sentences, "Mary saw a dog in the window. She wanted it," it is difficult for an AI system to know whether "it" refers to the dog or the window. John McCarthy says this problem is "within the capacity of some current parsers" (1996, p. 190). But interpreting the sentence requires bodily know-how and empathetic imagination. Figuring out what pronouns refer to in such sentences is the next problem logic-based AI should try to deal with.
Note: The example, "Mary saw a dog in the window. She wanted it," comes from Doug Lenat (quoted in Dreyfus, 1992, pp. xix–xx).


118 Jerry Fodor and Zenon Pylyshyn, 1981 Affordances are trivial. Affordances are just another name for whatever it is in the environment that makes an organism respond as it does. But such a notion can't provide a substantial explanation of perception. For example, to say we recognize a shoe by perceiving its property of being a shoe doesn't explain anything. Affordances add nothing new to our knowledge of the mechanisms behind perception.

Focus Box: The lowest-numbered box in each issue area is an introductory focus box. The focus box introduces and summarizes the core dispute of each issue area, sometimes as an assumption and sometimes as a general claim with no particular author.

Arguments with No Authors: Arguments that are not attributable to a particular source (e.g., general philosophical positions, broad concepts, common tests in artificial intelligence) are listed with no accompanying author.

Citations: Complete bibliographic citations can be found in the booklet that accompanies this map.

Methodology: A further discussion of argumentation analysis methodology can be found in the booklet that accompanies this map.

© 1998 R. E. Horn. All rights reserved. Version 1.0

Legend

The arguments on these maps are organized by links that carry a range of meanings:

is interpreted as: A distinctive reconfiguration of an earlier claim.

is disputed by: A charge made against another claim. Examples include: logical negations, counterexamples, attacks on an argument's emphasis, potential dangers an argument might raise, thought experiments, and implemented models.

is supported by: Arguments that uphold or defend another claim. Examples include: supporting evidence, further argumentation, thought experiments, extensions or qualifications, and implemented models.

As articulated by: Where this phrase appears in a box, it identifies a reformulation of another author's argument. The reformulation is different enough from the original author's wording to warrant the use of the tag. This phrase is also used when the original argument is impossible to locate other than in its articulation by a later author (e.g., word of mouth), or to denote a general philosophical position that is given a special articulation by a particular author.

Anticipated by: Where this phrase appears in a box, it identifies a potential attack on a previous argument that is raised by the author so that it can be disputed.

This icon indicates areas of argument that lie on or near the boundaries of the central issue areas mapped on these maps. It marks regions of potential interest for future mapmakers and explorers.

Unmapped Territory

Additional arguments

The Issue Mapping™ series is published by MacroVU Press, a division of MacroVU, Inc. MacroVU is a registered trademark of MacroVU, Inc. Issue Map and Issue Mapping are trademarks of Robert E. Horn.

The remaining 6 maps in this Issue Mapping™ series can be ordered with MasterCard, VISA, check, or money order. Order from MacroVU, Inc. by phone (206–780–9612), by fax (206–842–0296), or through the mail (Box 366, 321 High School Rd. NE, Bainbridge Island, WA 98110).

One of 7 in this Issue Mapping™ series—Get the rest!

[Illustration labels: Novice, Expert]

81 The knowledge base assumption. Symbolic data can be organized into a knowledge base that represents the entirety of human understanding. However, when AI researchers develop knowledge bases, they generally focus on some particular domain, such as the domain of furniture, animals, restaurants, and so forth. A knowledge base is written in a "representation language" (such as the predicate calculus or LISP). A good knowledge base supports inference, allowing the computer to draw conclusions from available information. For example, a knowledge base representing the domain of furniture should support the inference that a chair is something people sit on.
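The kind of inference Box 81 describes can be sketched with a handful of facts in a made-up triple notation plus one inheritance rule, enough to conclude that a chair is something people sit on. Real knowledge bases use the predicate calculus, LISP, or frame languages rather than this toy format.

```python
# Toy knowledge base: facts as (subject, relation, object) triples, with one
# inference rule, namely that properties are inherited along "is-a" links.
facts = {
    ("chair", "is-a", "furniture"),
    ("chair", "is-a", "furniture-for-sitting"),
    ("furniture-for-sitting", "used-for", "sitting"),
}

def query(subject, relation):
    # Direct lookup first, then inheritance through "is-a" links.
    for s, r, o in facts:
        if s == subject and r == relation:
            return o
    for s, r, o in facts:
        if s == subject and r == "is-a":
            inherited = query(o, relation)
            if inherited is not None:
                return inherited
    return None

print(query("chair", "used-for"))  # sitting
```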

General Structure of an Information Processing System

Redrawn from Newell and Simon (1976).

[Diagram: the environment supplies information to receptors; a processor and memory mediate between receptors and effectors; effectors act upon the environment.]

[Portrait captions: George Lakoff, Rodney Brooks, Hubert Dreyfus, John McCarthy, Zenon Pylyshyn, Allen Newell, Herbert Simon, Immanuel Kant, John von Neumann. A thermostat cartoon says, "I believe it's too hot in here."]

Allen Newell and Herbert Simon: A physical symbol system has the necessary and sufficient means for general intelligent action. By "necessary" we mean that any system that exhibits general intelligence will prove upon analysis to be a physical symbol system. By "sufficient" we mean that any physical symbol system of sufficient size can be organized further to exhibit general intelligence (p. 16).


is supported by

[Cartoon speech bubbles: "Wow! I'm not the same person I was yesterday! I've made a creative discovery about myself." "I'm in love! That changes everything!" "I built this without a database!"]

Postulates of Situated Action

1. "Situated actions ... [are] actions taken inthe context of particular concretecircumstances" (p. viii–ix).

2. "All activity, even the most analytic, isfundamentally concrete and embodied" (p.viii).

3. Even "purposeful actions are inevitablysituated action" (p. viii).

4. No action is ever "fully anticipated" by plansbecause the "circumstances of our actions... are continuously changing around us" (p.ix).

5. All "our actions, while systematic, are neverplanned in the strong sense" (p. ix).

6. All or most of life is "primarily ad hocactivity (p. ix).

From Suchman (1987).

Martin Heidegger

• Prototype effects: Differences among category members such that some members are more central than others.

• Basic level categorization: Categories that are cognitively basic are "'in the middle' of a general-to-specific hierarchy" (p. 13).

• Kinesthetic image schemas: Recurrent structures of ordinary bodily experience.

• Metaphorical concepts: Cross-domain mappings where knowledge from one domain of the conceptual system is projected onto knowledge in another domain.

Postulates of Experiential Realism

1. Experiential realism is "experiential" in that it focuses on: • actual and potential experiences • genetically acquired makeup of the organism • the organism's interactions in the social and physical environment (p. xv).

2. Experiential realism is "realist" in postulating that: • there is a real world • reality places constraints on concepts • truth goes beyond mere internal coherence • there is stable knowledge of the world (p. xv).

3. There is more to thought than just representation. Thought is also: • embodied, in that "the structures used to put together our conceptual systems grow out of bodily experience and make sense in terms of it" (p. xiv); • imaginative, in that concepts not grounded directly in experience (e.g., metaphorical concepts) employ conceptual structures that go beyond the literal representation of reality.

4. Classical categories are inadequate. They are like containers: their members are either in or out. Cognitive models, on the other hand, obey nonclassical "fuzzy" logics, exhibiting degrees of membership.

[Diagram: classical categories are like containers, with members either in or out; cognitive models have central and peripheral members.]

5. It is important to focus on conceptual structures and cognitive models, which involve a variety of phenomena.

Adapted from George Lakoff (1987). Lakoff's theory draws on the work of a wide range of thinkers, including Mark Johnson, Eleanor Rosch, Ludwig Wittgenstein, and Lotfi Zadeh.

• Metonymic concepts: Taking one aspect or part of something and using it to stand for the thing as a whole or for some other part of it.

[Illustration labels: Central: four-legged chair; Peripheral: bean bag. "The ham sandwich at table 10 ..." "She ran like the wind ..." Basic level: chair; specific: striped; general: furniture.]

What is this?

This info-mural is one of seven "argumentation maps" in a series that explores Turing's question: "Can computers think and/or will they ever be able to?" Argumentation mapping provides:
- a method for portraying major philosophical, political, and pragmatic debates
- a summary of an ongoing, major philosophical debate of the 20th century
- a new way of doing intellectual history.

What does it contain?

Altogether the seven maps:
- summarize over 800 major moves in the debates threaded into claims, rebuttals, and counterrebuttals
- 97-130 arguments and rebuttals per map
- 70 issue areas in the 7 maps
- 32 sidebars with history and further background

The argumentation maps:
- arrange debate so that the current stopping point of each debate thread is easily seen
- identify original arguments by over 380 protagonists worldwide over 40 years
- make the current frontier of debate easily identifiable
- provide summaries of eleven major philosophical camps of the protagonists (or schools of thought).

How do I get a printed copy?

You can order artist/researcher signed copies of all seven maps from www.macrovu.com for $500.00 plus shipping and handling.

1998