
Knowledge processing and commonsense¹

R. Narasimhan

C/o CMC Ltd, Bangalore 560 001, India

Received 15 May 1997; accepted 29 May 1997

¹ Revised version of the article originally published in Knowledge-Based Computer Systems: Research and Applications (Eds K.S.R. Anjaneyulu, M. Sasikumar and S. Ramani), Narosa Publishing House, New Delhi, 1997.

Abstract

Symbolic expert systems have been developed during the last three decades to model knowledge-based human intelligent behaviour. A general criticism of such expert systems is that they lack commonsense. But there is little consensus among AI workers on what constitutes commonsense. In this paper a characterization of commonsense is attempted. The limitations and open problems relating to current approaches to expert systems design are discussed. In addition, open problems that need to be studied to adequately model commonsense behaviour are discussed. Our basic arguments hinge on the distinctions between tacit knowledge and propositionizable knowledge. The thesis is that commonsense behaviour is essentially underpinned by tacit knowledge. © 1997 Elsevier Science B.V.

Keywords: Symbolic expert systems; Commonsense intelligence; Tacit knowledge; Skill-based expertise

1. Rule-based knowledge processing and commonsense

The core received view of artificial intelligence (AI) has recently been succinctly stated by Kirsh [1] as follows:

One’s commitment is that knowledge and conceptualization lie at the heart of AI; that a major goal of the field is to discover the basic knowledge units of cognition... The basic idea that knowledge and conceptualization lie at the heart of AI stems from the seductive view that cognition is inference. Intelligence skills...are composed of two parts: a declarative knowledge base and an inference engine.

It is further believed that while the declarative knowledge base has to be necessarily domain-specific, it is possible to construct the inference engine to be universal (i.e., domain independent).

In conformity with the above belief, symbolic expert systems methodology has been developed and brought to a high level of competence during the last two decades. Symbolic expert systems are engineered systems that try to exploit the strength of AI techniques through the use of the following strategy (a minimal sketch follows the list):

1. choose a well-delimited application area (i.e., task domain) of practical value;
2. propositionize the knowledge of this area;
3. organize this knowledge in a usable form;
4. work out strategies to use the knowledge to go from typical problem statements to plausible solutions.
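To make the division of labour concrete, here is a minimal sketch in Python of the strategy above, using an invented toy rule base for a hypothetical fault-diagnosis domain (the rules claim no real diagnostic knowledge). The rule base is domain-specific and fully propositionized; the forward-chaining engine beneath it is entirely generic, in line with the belief that the inference engine can be universal.

```python
# A domain-specific, propositionized rule base (toy, invented rules)
# plus a domain-independent forward-chaining inference engine.

RULES = [
    # (conditions, conclusion): condition-action rules
    ({"engine_cranks", "no_spark"}, "ignition_fault"),
    ({"ignition_fault", "old_plugs"}, "replace_plugs"),
    ({"engine_cranks", "no_fuel_at_rail"}, "fuel_fault"),
]

def forward_chain(facts, rules):
    """Generic engine: fire any rule whose conditions all hold,
    until no new conclusions can be drawn."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

print(forward_chain({"engine_cranks", "no_spark", "old_plugs"}, RULES))
# -> includes 'ignition_fault' and 'replace_plugs'
```

Note that everything such a system "knows" sits in RULES as explicit propositions; whatever has not been articulated there simply does not exist for the engine. This is exactly where the commonsense criticism, discussed next, takes hold.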

Many critics—both from within the AI community and from outside—point out that symbolic expert systems as presently constituted lack commonsense. However, there is no general agreement on what constitutes commonsense. Sometimes it has been interpreted very narrowly as the ability to reason from insufficient knowledge of the problem situation [2]. Further discussion has been concentrated on determining whether suitable augmentations to logical reasoning could model this activity.

McCarthy [3] identifies a much longer list of issues relating to commonsense. Some of the more significant ones—apart from issues such as reasoning from insufficient knowledge, representation and use of meta-knowledge, etc.—are:

• dealing with situations that change in time;
• dealing with simultaneous occurrences of several events with mutual interactions;
• dealing with physical events of the everyday world (i.e., implementing aspects of the naive physical knowledge about the world);
• dealing with the agentive aspects of oneself and of others (i.e., intentions, beliefs, wants, abilities, etc.);
• and so on.

Analogously there are issues relating to commonsense reasoning. The basic issue again is whether commonsense knowledge and reasoning, interpreted in this manner, can be accommodated in some suitable extension to traditional first-order logic. Davis [4] gives an excellent account of the extensive work that has been, and is being done, to extend formal logic and reasoning suitably to cope with the representation and use of commonsense knowledge.
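One representative line of such work is default (non-monotonic) reasoning, in which conclusions drawn from insufficient knowledge are withdrawn when fuller knowledge arrives. The following sketch is a deliberately naive illustration of that behaviour, not an implementation of circumscription or of any particular published formalism; the bird/penguin rule is the standard textbook example.

```python
# Naive default reasoning: a default fires unless an explicit fact
# blocks it, so adding knowledge can remove earlier conclusions --
# the non-monotonicity that classical first-order logic lacks.

DEFAULTS = [
    # (prerequisite, blocker, conclusion)
    ("bird", "penguin", "can_fly"),
]

def conclusions(facts):
    facts = set(facts)
    for prereq, blocker, concl in DEFAULTS:
        if prereq in facts and blocker not in facts:
            facts.add(concl)
    return facts

print(conclusions({"bird"}))             # {'bird', 'can_fly'}
print(conclusions({"bird", "penguin"}))  # {'bird', 'penguin'}: can_fly withdrawn
```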

However, to address the issues involved here more systematically, it is useful to group the knowledge underpinning human behaviour in two broad categories, as shown in Table 1.

In terms of the two varieties of knowledge illustrated in Table 1, to say that symbolic expert systems lack commonsense is to say the following:

• Tacit knowledge underpins our behaviour in the perceptual-motor domain, and plausibly also much of our communication competence in natural language (see Ref. [5] for further elaborations).
• All this knowledge is taken for granted when human beings plan and solve problems in given problem domains.
• This tacit knowledge base is normally missing in mainstream AI systems; and
• if any part of it is needed for planning and problem-solving, it has to be fully articulated and explicitly represented in the (declarative) knowledge base that underpins the AI system’s performance.

Two serious problems arise at this stage. First, we do not know the nature of this tacit knowledge and, therefore, we are unable to articulate it successfully. Second, at the representational level, we are unable to unify this tacit knowledge effectively with the problem-domain knowledge (usually expressed in predicate logic formalism and/or as condition–action rules). However, CYC [6] is an ambitious long-term research project attempting to accomplish precisely this. It remains to be seen how natural and successful the outcome will be for general-purpose use in AI.

Referring to Table 1 again, the division of knowledge into two categories has significance and value in accounting for the pragmatics of behaviour. Literacy (i.e., writing, reading, the use of symbols, notations, etc.), in general, is a prerequisite to the acquisition of knowledge that is propositionizable. This is not the case for the acquisition of tacit knowledge. These two kinds of knowledge are, however, interdependent in the sense that one’s perceptual-motor competence may be modified by one’s professional knowledge. An expert may, and quite often does, see the world differently from the way an ordinary person sees it.

On the basis of the above differentiation, commonsense—in so far as it is generally available to all human beings—must underpin an individual’s perceptual-motor competence and his/her ability to acquire and use skills. Articulation of knowledge and reasoning in the commonsense mode is accomplished through the use of natural language (in most cases, through the use of one’s own mother tongue). As indicated in Table 1, increase in one’s competence to function successfully in the commonsense mode must be accounted for in terms of the following aspects:

1. developmental factors (in the early stages);
2. exposure to examples;
3. rehearsal and practice;
4. apprenticeship (in the case of complex skill acquisition).

Notice that all the above features apply not only to the acquisition of sensori-motor competence, but also to the acquisition of competence in informal natural language use (see Ref. [5] for further elaboration of this viewpoint).

For computational modelling, the open problems concerning commonsense behaviour, then, are accounting for the competence of ordinary (i.e., non-professional) human beings in coping with their day-to-day interactions with the physical world and with other human beings in the perceptual-motor and natural language modalities. These, clearly, are the fundamental issues that need to be addressed if we are to be able to deploy information processing systems to function successfully as free agents navigating, exploring, manipulating objects, and interacting with other similar agents or with human beings.

Table 1
Two kinds of knowledge

                            Knowledge that is tacit               Knowledge that is propositionizable
Underpins what behaviour?   Perceptual-motor competence;          Puzzles, games defined through explicit rules;
                            skill acquisition and use;            problem-solving in an articulated task domain
                            naive natural language behaviour
How acquired?               Through informal means:               Through formal means:
                            exposure to examples,                 systematic formal tuition,
                            apprenticeship,                       learning based on theories,
                            rehearsal and practice                text books
What is it called?          Commonsense knowledge;                Professional or expert knowledge
                            craft knowledge
Who has it?                 Everybody (artisans and craftsmen     Professionals (experts)
                            when skill-based)


What is the nature of commonsense knowledge (i.e., what is the nature of its internal representation) and what is the mode of commonsense reasoning? In other words, how are knowledge and control deployed to support behaviour in the commonsense mode? One way of approaching these issues is to clarify our notion of what constitutes intelligent behaviour in the tacit or commonsense mode.

2. Commonsense intelligence: vision and language behaviour

What is “intelligence”? Here is one definition of it by McCarthy and Hayes [7]:

We shall say an entity is intelligent if it has an adequate model of the world (including the intellectual world of mathematics), understanding of its own goals and other mental processes; if it is clever enough to answer a wide variety of questions on the basis of this model; if it can get additional information from the external world when required and can perform such tasks in the external world as its goals demand and its physical activities permit.

If this is meant to be a minimal definition of intelligence, most of us in the world are unlikely to be classified as intelligent! Surely, as mentioned earlier, our primary concern must be with the ordinary activities of ordinary people and not with the deployment of professional skills by experts. Everything that an ordinary person does in his/her normal course of living in this world, in so far as it succeeds at all, must be underpinned by intelligence. Behaviour mediated by vision and natural language plays a key role here. Understanding commonsense must start with understanding the nature of the mediating roles involved in these two modalities and how they come about. What are the relevant questions to ask to this end?

It is significant to note that behaviour in precisely these two modalities—vision and natural language—has been extremely difficult to model computationally. How would one account for this? I would identify at least two major reasons:

1. our attempts to base modelling in these two modalities on propositionized knowledge; in other words, opting for formal rule-based approaches;
2. our attempts to process information in these two modalities in a wholly de-contextualized fashion.

Consider visually-mediated behaviour first. The received view, stemming from the forcefully argued theses by Marr [8], is that the primary task of perception is generating descriptions. According to him: “...perception is construction of description... That is the core of the thing and a really important point to come to terms with”. The crucial question, even granting this viewpoint, is: “What kind of a description?” Surely, this cannot be answered unless one takes into account the end-use of the description. Marr’s whole work is preoccupied with a totally de-contextualized task: “How can the visual system generate a complete description of a visual scene in terms of its constituent objects and their layout?” The implicit assumption is that once such a (task-independent) complete description has been generated, the information compiled could be put to a variety of task-dependent uses as needed. Many vision specialists, however, are beginning to move away from this wholly de-contextualized approach. See, for instance, the relevant position statements included in Ref. [9].

Second, visually-mediated behaviour spans a very wide spectrum. We can list some of these as follows:

1. recognizing, identifying: resulting in naming, categorizing, etc.;
2. discriminating: resulting in perception of similarity/difference;
3. describing, interpreting: resulting in verbal statements (formal/informal) or in non-verbal representations (pictorial/discrete symbolic);
4. exploring, searching;
5. navigating.

There would seem to be little reason to assume that vision mediates in all these cases in a standard way.

In contrast to Marr, Ramachandran [10] has recently argued that vision is really opportunistic, rather than rule-based and highly systematic. According to him:

It may not be too far-fetched to suggest that the visual system uses a bewildering array of special purpose tailor-made tricks and rules of thumb to solve its problems. If this pessimistic view of perception is correct, then the task of vision researchers ought to be to uncover these rules rather than to attribute to the system a degree of sophistication that it simply does not possess. Seeking over-arching principles may be an exercise in futility.

Crick [11] has endorsed this viewpoint by noting that

This approach is at least compatible with what we know of the organization of the cortex in monkeys, and with François Jacob’s idea that Evolution is a tinkerer.

We find an analogous state of affairs when we come to studying natural language behaviour. In almost all current computational approaches dealing with natural language, the strategy employed is to delink the language-processing stage from the stage where the processed output is put to some well-defined use (e.g., to carry out some action). Language processing is thus dealt with as a wholly decontextualized (i.e., unsituated) autonomous activity.

On the other hand, our focus must always be on the end-behaviour and how this could be mediated through the language input. Language processing should, clearly, be determined by the end-use to be addressed. The manner of processing the input, as well as the extent of processing, should be determined by the end-use of the processed output.

3. Knowledge and control in commonsense behaviour

So, what can we say about knowledge and control in commonsense behaviour? Some indication of an answer may be found in the fact that, in human beings and other animals, sensori-motor behaviour is underpinned by dedicated mechanisms—in the visual, auditory, tactile and manipulatory modalities. In all these modalities, hard-wired programs carry out low-level (i.e., initial-stage) processing of the sensori-motor information. It has been persuasively argued that low-level processing is data-driven and not knowledge-based [8]. We can perhaps argue that knowledge is tacit at this level because it is implicit in the mechanisms that support performance. Knowledge and control are not separated in perceptual-motor behaviour. Knowledge is implicit in the control structure. The mechanism is itself ultimately the representation of the expertise in these modalities.

Looked at from this perspective, computational issues like the “frame problem” [12] for a navigating, manipulating robot assume quite different formulations from the ones usually presented in the AI literature. Motor actions are controlled by—among other factors—perceptual inputs. And, as the world changes due to the actions performed, the perceptual inputs change accordingly and the control for subsequent actions is suitably modified. So, while action is being performed, changes in the world need not be reflected in a fully articulated knowledge base which is assumed to control and drive action. The resulting changes in the perceptual inputs should be able to provide the needed monitoring and control information for performing further action.
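A minimal sketch of this closed-loop view follows (the world, sensor and actuator here are invented stand-ins, not any real robot interface). The controller keeps no articulated world model and performs no model-update step after acting; each cycle simply re-senses the world and lets the fresh percept determine the next action.

```python
# "The world as its own model": rather than propagating every effect
# of an action through a knowledge base (the frame problem), re-sense
# the world each cycle and act on the fresh percept.

def sense_distance(world):
    # Hypothetical range sensor: reads the simulated world directly.
    return world["obstacle_distance"]

def act(world, forward, turn):
    # Hypothetical actuator: the action changes the world itself;
    # nothing is written back into any internal model.
    world["obstacle_distance"] += turn * 0.5 - forward * 0.1

def control_loop(world, steps=20):
    for _ in range(steps):
        if sense_distance(world) < 0.3:    # obstacle close: turn away
            act(world, forward=0.0, turn=1.0)
        else:                              # clear: move ahead
            act(world, forward=1.0, turn=0.0)

control_loop({"obstacle_distance": 1.0})
```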

These observations are similar in spirit to Brooks’ theses arguing for “intelligence without representation” [13], “intelligence without reason” [14], and so on. To quote him:

When we examine very simple level intelligence, we find that explicit representations and models of the world simply get in the way. It turns out to be better to use the world as its own model... It may be the case that our introspective descriptions of our internal representations are completely different from what we really use... [13]

Brooks has been able to demonstrate successfully the deployment of reasonably complex behaviour by insect-like robotic creatures in their interaction with the real world. The subsumption architecture used in the design of these creatures is made up of several layers, each layer consisting of a fixed-topology network of simple finite state machines (FSMs). There is no central locus of control. The FSMs are data-driven by the messages they receive from the real world, as well as from within and between layers. The real unanswered question is how complex the behaviour of such creatures can be without the use of a central representation. More specifically, it is unclear how a subsumption architecture of this sort can come to grips with the language modality of behaviour, which is a prerequisite to planning, ratiocination, instruction-based learning, and so on.
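The following toy sketch conveys the flavour of such a design, though it is emphatically not Brooks’ actual implementation: two layers of trivial state machines, each driven directly by incoming percepts, with conflicts settled by a fixed priority (a simplification of the suppression/inhibition wiring of the real subsumption architecture) and no central world model or locus of control anywhere.

```python
# Toy, subsumption-flavoured controller: layered, data-driven
# behaviours with no central representation. An illustration of the
# organizing idea only, not Brooks's architecture.

class Avoid:
    """Reflex layer: turn away from nearby obstacles."""
    def react(self, percept):
        if percept["distance"] < 0.3:
            return ("turn", 1.0)   # claims control
        return None                # stays silent

class Wander:
    """Higher layer: drift with a slowly alternating heading."""
    def __init__(self):
        self.heading = 0.1         # the FSM's only state
    def react(self, percept):
        self.heading = -self.heading
        return ("forward", self.heading)

def arbitrate(layers, percept):
    # Fixed-priority arbitration: the first layer that speaks wins.
    # No central model is consulted at any point.
    for layer in layers:
        command = layer.react(percept)
        if command is not None:
            return command

layers = [Avoid(), Wander()]                  # most reflexive first
print(arbitrate(layers, {"distance": 0.2}))   # ('turn', 1.0)
print(arbitrate(layers, {"distance": 2.0}))   # ('forward', -0.1)
```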

Connectionist architecture has been proposed and extensively studied as another alternative to mainstream symbolic AI to come to grips with tacit knowledge and learning without representation. Despite the large variety of connectionist systems that have been built (see, for example, Refs. [15,16]), there is as yet no clear indication how higher-level intelligent activities could be coherently modelled in terms of connectionism. For instance, how does one link high-level vision (e.g., recognition of a scene, or even a complex object) to the low-level information-processing layers? How does one relate articulation in the language modality to tacit abstractions in the perceptual-motor modalities? These are crucial issues that need to be addressed before acceptable computational models of intelligent behaviour can be worked out.
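For contrast with the propositionized rule base sketched in Section 1, here is a minimal connectionist fragment: a two-layer network trained by standard backpropagation on XOR (plain numpy; layer sizes, learning rate and seed are arbitrary choices). After training, everything the network “knows” is distributed across its weight matrices; there is no proposition anywhere to point to, which is both the attraction of the approach for tacit knowledge and the root of the linking problems just raised.

```python
# Two-layer network learning XOR by backpropagation. The acquired
# "knowledge" lives entirely in W1, b1, W2, b2: distributed,
# unarticulated, and not inspectable as rules.
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)   # hidden layer
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)   # output layer
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

for _ in range(5000):
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # Gradients of squared error through both sigmoid layers.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= 0.5 * h.T @ d_out
    b2 -= 0.5 * d_out.sum(axis=0)
    W1 -= 0.5 * X.T @ d_h
    b1 -= 0.5 * d_h.sum(axis=0)

print(out.round().ravel())   # typically [0. 1. 1. 0.] for this seed
```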

Referring to Table 1, at the knowledge level what is needed to distinguish commonsense behaviour from expert (or professional) behaviour is to understand better the difference between skill-based knowledge and theory-based knowledge. It must be emphasized that this distinction is not exactly equivalent to that between procedural knowledge and declarative knowledge. Deployment of skill involves both unarticulated know-how and articulation of aspects of this know-how and related situational details in ordinary natural language. Analogously, as we shall presently see, deployment of theory-based knowledge by professional experts is not equivalent to operating strictly (or exclusively) within a deductive formalism. Skill-based expertise is more akin to commonsense behaviour than to explicit, theory-based, symbol-manipulation behaviour. Skill-based expertise should, therefore, be a valuable domain of study for a computational-level understanding of commonsense behaviour.

Finally, what about commonsense reasoning? Since, according to our definition, the language of commonsense is natural language, the mode of reasoning must be the mode that people use in their normal interactions with the world (including other people). There are persuasive arguments to show that the actual reasoning process of commonsense is not the inferential process as the logicians define it. In other words, it is not deduction in a well-defined logic [17]. Computationally describing the nature of this process is still very much an open problem. It is worth noting that commonsense underpins creativity and the generation of new knowledge. This is sufficient indication that reasoning strictly within standard logical formalisms would not help.

Dreyfus [18] argues that expertise is based more on recognition than on laborious inferential reasoning. This is certainly true of skill-based expertise and may equally well be true of the intuitive grasp of problem situations that experienced professionals exhibit. In other words, as experience builds up, situational aspects are recognized at sufficiently high global levels and not laboriously built out of condition–action primitives from the bottom up. Such globally recognized situational aspects directly suggest appropriate (i.e., plausible) strategies, plans or actions. The “expertise” of an expert would, then, seem to consist in being able, at the perceptual level, to narrow the problem situation to its salient aspects and, based on this and prior experience, to delimit the solution space (i.e., decision–action space) to a small, potentially profitable one. It is well established that experienced chess players function in this mode [19].

4. Concluding remarks

We set out by discussing the symbolic expert systems methodology that has been developed within AI during the last two or three decades to model knowledge-based human intelligent behaviour. We saw that in the design of these expert systems, typically, knowledge is explicitly propositionized and reasoning for purposes of monitoring and control is handled through making inferences in a well-defined logic. A general criticism of such expert systems is that they lack commonsense. We then tried to characterize the nature of commonsense and argued that the knowledge that underpins commonsense is tacit and not propositionized. Reasoning in the commonsense mode, moreover, is based on the use of natural language and does not, for the most part, conform to any strictly formalized logical inference. Computational modelling of commonsense behaviour cannot, therefore, be handled through extensions to current symbolic expert systems methodologies. Connectionism and subsumption architectures are being promoted as alternative frameworks for modelling behaviour based on tacit knowledge. In these models knowledge and control are intertwined, and there is no attempt to base behaviour on a propositionized knowledge base. But we saw that such modelling attempts are at a very rudimentary stage, and several fundamental open problems remain to be formulated and solved before these models can be deployed to handle significant aspects of commonsense behaviour.

Mainstream symbolic-level AI has so far been exclusively preoccupied with attempts to simulate human performance based on propositionizable knowledge. However, we would be approaching behaviour modelling from the wrong end if we started out by assuming that “propositionizing” is a central principle of behaviour. Scientists and logicians are intensely preoccupied with propositionizing and propositionizable knowledge, and tend to forget that no other animal propositionizes. But all animals, one would suppose, are able to build up a tacit knowledge of their worlds. Much of everyday, informal human behaviour, we have argued in this paper, is also based on tacit knowledge.

Phylogenetic continuity (i.e., continuity at the level of evolution) of knowledge-based behaviour, hence, must be sought in the domains of tacit knowledge. Human commonsense behaviour differs qualitatively from the tacit-knowledge-based behaviour of other animals through the availability and use of the (natural) language modality, which is exclusive to humans. Understanding, at the computational level, these similarities and differences between human behaviour and the behaviour of other animals is really the most significant challenge to behaviour modelling and, by extension, to AI.

References

[1] D. Kirsh, Foundations of AI: the big issues, Artificial Intelligence 47 (1991) 3–30.
[2] R.C. Moore, The role of logic in knowledge representation and commonsense reasoning, Proceedings of AAAI-82, 1982, pp. 428–433.
[3] J. McCarthy, Some expert systems need common sense, in: V. Lifschitz (Ed.), Formalizing Common Sense: Papers by John McCarthy, Ablex, Norwood, NJ, 1990, pp. 189–197.
[4] E. Davis, Representations of Commonsense Knowledge, Morgan Kaufmann, San Mateo, CA, 1990.
[5] R. Narasimhan, Language Behaviour: Acquisition and Evolutionary History, National Centre for Software Technology (NCST), Bombay, 1996.
[6] D.B. Lenat, R.V. Guha, Building Large Knowledge-Based Systems: Representation and Inference in the CYC Project, Addison-Wesley, Reading, MA, 1989.
[7] J. McCarthy, P. Hayes, Some philosophical problems from the standpoint of artificial intelligence, in: B. Meltzer, D. Michie (Eds.), Machine Intelligence 4, Edinburgh University Press, Edinburgh, UK, 1969, pp. 463–502.
[8] D. Marr, Vision, Freeman, San Francisco, CA, 1982.
[9] AI Symposium: Position Statements, ACM Computing Surveys, September 1995.
[10] V.S. Ramachandran, Interactions between motion, depth, color and form: the utilitarian theory of perception, in: C. Blakemore (Ed.), Vision: Coding and Efficiency, Cambridge University Press, Cambridge, UK, 1990.
[11] F. Crick, What Mad Pursuit, Penguin Books, London, 1990, p. 156.
[12] P. Hayes, The frame problem and related problems in AI, in: A. Elithorn, D. Jones (Eds.), Artificial and Human Thinking, Elsevier, Amsterdam, 1973, pp. 45–59.
[13] R.A. Brooks, Intelligence without representation, Artificial Intelligence 47 (1991) 139–159.
[14] R.A. Brooks, Intelligence without reason, Proceedings of IJCAI-91, 1991, pp. 569–595.
[15] D.E. Rumelhart, J.L. McClelland (Eds.), Parallel Distributed Processing 1: Foundations, MIT Press/Bradford Books, Cambridge, MA, 1986.
[16] J.L. McClelland, D.E. Rumelhart (Eds.), Parallel Distributed Processing 2: Psychological and Biological Models, MIT Press/Bradford Books, Cambridge, MA, 1986.
[17] P.N. Johnson-Laird, Reasoning without logic, in: T. Myers, K. Brown, B. McGonigle (Eds.), Reasoning and Discourse Processes, Academic Press, New York, 1986, pp. 13–50.
[18] S.E. Dreyfus, The nature of expertise, Panel Discussion, Proceedings of IJCAI-85, vol. 2, Morgan Kaufmann, San Mateo, CA, 1985, pp. 1306–1309.
[19] W.G. Chase, H.A. Simon, The mind’s eye in chess, in: W.G. Chase (Ed.), Visual Information Processing, Academic Press, New York, 1973, pp. 251–281.
