
Residual Dualism in Computational Theories of Mind

Paul TIBBETTS*

At length I am forced to admit that there is nothing, among the things I once believed to be true, which it is not permissible to doubt. . . Thus I must carefully withhold assent no less from these things than from the patently false. . . But it is not enough simply to have made a note of this; I must take care to keep it before my mind. For long-standing opinions keep coming back again and again, almost against my will; they seize upon my credulity, as if it were bound over to them by long use and the claims of intimacy.

Descartes, Meditation I

What is the ground of the relation of that in us which we call “representation” to the object? . . . [On one reading,] if the object itself were created by the representation. . . the conformity of these representations to their objects could be understood.

Kant (Letter to Marcus Herz, 1772)

Almost everyone regards the statement that there is no mind-independent reality, that there are just the ‘versions’, or there is just the ‘discourse’, or whatever, as itself intensely paradoxical. Because one cannot talk about the transcendent or even deny its existence without paradox, one’s attitude to it must, perhaps, be the concern of religion rather than of rational philosophy.

Hilary Putnam (1983:226)

Summary

This paper argues that an epistemological duality between mind/brain and an external world is an uncritically held working assumption in recent computational models of cognition. In fact, epistemological dualism (ED) largely drives computational models of mentality and representation: An assumption regarding an external world of perceptual objects and distal stimuli requires the sort of mind/brain capable of representing and inferring true accounts of such objects. Hence we have two distinct ontologies, one denoting external world objects, the other cognitive events and neural transformations. A basic question is then raised and explored: If the elements of these two different ontologies are so radically different, how can the one access the other?

The last part of the paper sketches an alternative to ED and the associated two ontologies account. Instead of external world objects being construed as independent givens, initiating perceptual processes, they are rather interpreted as ‘computational constructs’. It is argued at length that this account is far more consistent with recent computational models of mind/brain than the two ontologies model associated with ED. It is further claimed that an ontology of cognitive and neural states, as well as an external world ontology, have no other epistemic status than as contingent working hypotheses. As such, both ontologies are in principle amenable to revision as evidence, explanatory models, and scientific practices mutually readjust and redefine one another over time.

* University of Dayton, U.S.A.

Dialectica Vol. 50, No 1 (1996)

A number of theories of mind in contemporary cognitive science explicitly adopt an epistemological dualism (hereafter, ‘ED’) between mind and an external world. In this respect, such theories continue a dualist paradigm dating back to 17th century philosophies regarding empirical knowledge. On the other hand, proponents of computational and connectionist models of mind claim to have extricated themselves from the mind-body metaphysical dualism of the Cartesian legacy. Even if this might be the case, I claim that an epistemological duality of mind (viz., cognitive states/processes) and an external world (viz., material objects and distal stimuli) continues as a working assumption in much of the cognitive science literature today. This distinction between the agent’s point of view and “the way things are in themselves” is, for Putnam (1987:70), ‘the double set of books’ philosophers have traditionally kept for themselves.

It will be claimed that once assumptions concerning ED are introduced then they will in large part drive one’s concept of mentality. That is, if we assume an external world of objects and events then we require a mind with the appropriate capabilities to construct true (or at least highly plausible) accounts of this external world. Could it be, then, that to avoid epistemological scepticism, we construe and constrain our accounts of ‘external world,’ ‘mentality’ and ‘cognitive/neural processes’ to negate or at least neutralize the spectre of scepticism? That is, is a conceptual sleight-of-hand, transcendental argument at work here? Namely, given the claim that empirical knowledge is (or at least ought to be) possible, what must a mind - and an external world - be like in order to account for such knowledge?

Nor is this transcendental argument voided when the concepts ‘brain’ or ‘neural states’ are substituted for ‘mentality’ or ‘cognitive states.’ For now these postulated neural processes must also be conceptually construed and constrained to render possible empirical knowledge and effective action. In this respect, then, the non-reductionist proposals of Searle (1992; 1984) and the eliminativist, neurocomputational model of Paul Churchland (1989) are equally programmatic and share at least one overriding concern: To construct

1 For gestalt psychologists, the concept of a ‘distal stimulus’ referred to the external object producing the ‘proximal stimulus.’ The proximal stimulus, in turn, at least in the case of visual perception, is synonymous with the retinal image. (See Pastore, 1971:272.)


an account(s) which makes possible knowledge acquisition. Obviously, different theorists have adopted different strategies to achieve this end, including: Marr’s (1970) processing subsystems; Kosslyn’s (1983) retinotopically-organized visual buffer; Kosslyn and Koenig’s (1992) three-layer feed-forward neural networks; distributed representation; the PDP model; and so forth.
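To fix ideas, here is a minimal sketch in Python (not Kosslyn and Koenig’s actual architecture; the layer sizes and the untrained random weights are invented for illustration) of the kind of three-layer feed-forward network the last of these strategies appeals to: ‘transducer’ activations are propagated through a hidden layer onto output units standing in for perceptual categories.

```python
import numpy as np

def sigmoid(x):
    """Standard logistic squashing function used in simple feed-forward nets."""
    return 1.0 / (1.0 + np.exp(-x))

rng = np.random.default_rng(0)

# Toy dimensions (purely illustrative): 8 'transducer' inputs,
# 5 hidden units, 3 output units standing in for perceptual categories.
n_in, n_hidden, n_out = 8, 5, 3

# Randomly initialized weights and biases; in a trained network these would be
# adjusted (e.g., by backpropagation) so that input patterns are mapped onto
# the intended output categories.
W1, b1 = rng.normal(size=(n_hidden, n_in)), np.zeros(n_hidden)
W2, b2 = rng.normal(size=(n_out, n_hidden)), np.zeros(n_out)

def forward(stimulus):
    """Propagate a 'transducer output' vector through the three layers."""
    hidden = sigmoid(W1 @ stimulus + b1)   # distributed hidden representation
    output = sigmoid(W2 @ hidden + b2)     # activation over output categories
    return output

stimulus = rng.random(n_in)                # stand-in for sensory input
print(forward(stimulus))
```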

Is one’s account of mind or brain perhaps less driven by assumptions of ED if one rejects the project of epistemically-foundational knowledge? No, for the issue is not foundationalism. For example, Goldman (1986:55; also, 1991) defends a non-foundational, ‘reliable-process account’ of belief acquisition and justification. Still, he also refers to ‘real-world regularities’ that may have been incorporated into perceptual systems in the course of evolution (1986:191). Now for Goldman’s sleight-of-hand. To make possible empirical knowledge, the mind must somehow close this posited epistemic gap to the external world. To achieve this, Goldman (1986:43-51) proposes the notion of a reliable cognitive process to close a gap he has himself opened! This is evident in the following statement, along with what might be construed as question-begging (defining what is justified in terms of what is reasonably reliable):

a person’s belief [regarding the external world] is justified just in case it is produced by a process that is reasonably reliable relative to the resources of the human cognitive system. . . and the native endowments of human cognizers. (1986:251)

So, Goldman’s position regarding human cognition and the possibility of knowledge can be read as a response to a question he raised earlier. Notice the explicit ED in the way Goldman (1988:31, 36) formulates the question:

If the mind has direct acquaintance only with its own contents, how can beliefs be reliably formed about physical objects? . . . [Then there is] the fear that physical objects may be inaccessible to the mind.

Accordingly, one can reasonably ask whether Goldman’s reliable-process account of belief-formation and -justification is driven by his prior dualistic assumptions regarding the relation between external objects, evidence, cognitive faculties, and ‘real-world regularities.’2

2 Regarding knowledge and propositional content of the external world, Churchland and Churchland (1983:12) shift from Goldman’s reliable-processes account to the causal relations that obtain between representations and states of the world: For example, it may be that a specific representation R occurs in a creature’s perceptual belief-register only when something in its environment is F. We could thus ascribe “(x) (Fx)” as R’s propositional content. For perceptually-sensitive representations, one can indeed ascribe propositional content in this way, and on the basis of real causal connections with the world. . . The states of living creatures do indeed carry systematic information about the environment, in virtue of their law-governed connections with it.

Ironically, this is not unlike a position Goldman (1992:69-83) has himself defended in “A Causal Theory of Knowing.”


To further illustrate the presence of ED in the recent literature, I turn to another example, Churchland and Sejnowski’s The Computational Brain (1992). Here the authors explicitly endorse a dualism of brain/external world. In a chapter with the auspicious title, “Representing the World” (1992:142-143), they write:

Sensory transducers (i.e., receptor cells) are the interface between the brain and the world. . . Constrained by transducer output the brain builds a model of the world it inhabits. That is, brains are world-modelers, and the verifying threads - the minute feed-in points for the brain’s voracious intake of world-information - are the neural transducers in the various sensory systems.

Now that this dualistic assumption is firmly in place, is it really any surprise, then, that Churchland and Sejnowski ‘discover’ the kind of brain and neural processes that make possible brain-world interface, namely, a computational brain? As they remark (1992:144), from the transducer output the brain constructs:

a visual world, full of enduring objects, locateable in space-time coordinates, replete with color, motion and shape, [which] is what we perceive.

Nor is it any wonder that the computational model of mind is so appealing. For, as its advocates claim, top-down computational processes transform a discrete, informationally-impoverished, constantly changing stimulus input (transducer output) into the coordinated, stable perceptual world of visual experience. Accordingly, the computational brain bridges the epistemic gap between “what’s out there” and empirical knowledge of such.

To address suspicions that only philosophers are susceptible to dualist assumptions, let us briefly look at the work of the neuroscientist Rodolfo Llinás (1987). (The following proposals regarding perception, sensorimotor coordination and vector analysis are also pursued by Paul Churchland (1989:77-110) in, “Some Reductive Strategies in Cognitive Neurobiology.”)

In a provocative paper, “‘Mindness’ as a Functional State of the Brain” (1987:343), Llinás begins, as did Churchland and Sejnowski, and Goldman, with an epistemological question:

How can a system such as the developing brain, initially devoid of any knowledge of the properties of the external world, acquire such information by evolutionary means so that it may, in turn, predict?

Now notice the question-begging in Llinás’ (1987:343) reply:

Natural selection has favoured those living organisms having cell-biological rules by which neurons (through their connectivity) may incorporate sensory referred properties of the external world into the internal functional states. (Emphasis added)


Accordingly (1987:344),

The internalization of the properties of the external world into an internal functional space is at this stage the central problem regarding brain function.

Later in this article, Llinás (1987:351) relates a story of how one of his medical school students, in the context of a discussion of the nervous system, raised this question:

‘But, now that I have learned neuroscience, I find that I still do not understand, for instance, how I see. . . I can follow the whole system and its properties but I still have no conception at all of what it is to see.’

Llinás’ response to this student’s inquiries is to shift from ‘seeing’ as a response to a process initiated by an external object, to ‘seeing’ as a complex series of neurological events. In lieu of the student’s naive realism regarding our perception of external objects, Llinás recasts the entire issue of visual cognition into constructivist/computational terms. Accordingly, for Llinás (1987:351-352) the supposed problem arose because

we forget to tell our students that seeing is reconstructing the external world, based not on the reflecting properties of light on external objects but, rather, on the transformation of such visual sensory input (a vector) into perception vectors in other sets of coordinate systems [elsewhere in our body] . . . [It] is only through the ability that our brain has to transform measurements in one set of coordinates (the visual system) into comparable sets of measurements (visually guided motor execution) provided by other sensory inputs (for example, touch from fingertips) that one can truly develop the necessary semantics to be able to understand what one sees.

Consequently, the student’s way of raising the ‘seeing’ issue presupposed: (i) a naive realist account of seeing and (ii) that the computations in the visual coordinate system were independent of computations in tactile and motor systems. As Llinás (1987:352) remarks,

The point is that understanding the functional connectivity of the visual systems is not sufficient to understand vision. Rather, putting vision into the context of coordinates that are intrinsic to the body is the essential step needed to ‘make sense’ of the visual information.
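Llinás’ talk of transforming a sensory vector from one coordinate system into another can be given a deliberately toy illustration. The sketch below is not his formalism; the 30-degree rotation and the numbers are invented for the example. A direction encoded in an eye-centered (‘visual’) frame is re-expressed in a body-centered (‘motor’) frame by a single matrix multiplication.

```python
import numpy as np

def rotation_2d(theta):
    """2-D rotation matrix: one simple example of a coordinate transformation."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s],
                     [s,  c]])

# A target's direction encoded in eye-centered ('visual') coordinates.
visual_vector = np.array([1.0, 0.5])

# Suppose the eyes are rotated 30 degrees relative to the body; mapping the
# visual vector into body-centered coordinates is then a matrix-vector product.
eye_to_body = rotation_2d(np.deg2rad(30))
motor_vector = eye_to_body @ visual_vector

print("visual frame:", visual_vector)
print("body frame:  ", motor_vector)
```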

Still, in spite of the sophistication of Llinás’ views (including, by implication, his disavowal of the mind-as-mirror-of-nature metaphor (Rorty, 1979)), even a Llinás is capable of slipping back into a naive realism. For example, in the following quote, ask yourself whether Llinás is talking about the “orange” as something constructed from coordinating neural systems or whether “orange” denotes something initially given to perception? It is a legitimate and provocative question whether the following remarks by Llinás (1987:353) are in fact consistent with his constructivist model of mind and visual cognition:


Thus all inputs produced by sensory messages relating to an external object send messages that, having a common origin, are co-variant vector components of such stimuli (I see, feel, smell, and ‘measure’ the orange with my surrounding fingers as I bring it close to my mouth) . . . Following this view, the firing in each nerve fibre in a nerve bundle represents a vector component of the sensory vector (the message), each component being totally independent of the presence of other vector components. . ., despite the fact that the different vector components are covariant components of the same sensory stimulus.

Let us pursue this issue of ED and the computational theory of mind in a somewhat different direction. Any theory of the brain and cognition presupposes an ontology of neural processes and structures over which the theory’s laws and law-like generalizations can be instantiated. E.g., ‘transducer inputs and outputs,’ ‘top-down processes,’ ‘vector transformations,’ ‘informationally-encapsulated modules,’ etc. constitute the (empirically contingent) Ontology (I) for computational models. However, given the ED documented above, an Ontology (II), one denoting the so-called ‘external world’ and its properties, must also be introduced. The question arises whether the Ontology (I) associated with the computational model can provide any intelligible account of the properties of Ontology (II), that is, of the concept ‘external object’ and particularly that of ‘external world’? As we have seen, on this model there is no obvious way to conceptually disentangle the referents of such concepts from the computational/constructivist activities of “densely interactive neural networks.”

So just where is this visual world of Churchland and Sejnowski’s “full-blown enduring objects”? The brain, or rather its neural nets, trades only in the coin of voltage drops across cell membranes, action potentials, synaptic transmissions, vector transformations, etc. Isn’t this what Ontology (I) finally comes down to? It is far from obvious to me how Churchland and Sejnowski’s (1992:144) earlier reference to “enduring objects. . . replete with color, motion, and shape,” can find room and board in Ontology (I). For it is precisely the externality of these “enduring objects” that supposedly differentiates Ontology (II) from Ontology (I)! That is, if the computational model were consistently deployed then it would not be empirically possible to know anything at all except as it crunches/mulches through the brain’s computational algorithms and vector transformations. (Regarding the term ‘mulches,’ see reference 3.) After all, it is not as though we have any epistemically-independent access to this so-called “external world” once ED is set in place!

3 In an interview, Patricia Churchland (1990:38) suggested that cognition may be understood by sort of a mulch of activity on the part of neurons. I suspect that activity won’t look anything like logic. It may also turn out to be the case that some of what we think of as reasoning involves only a very little bit of logic. It’s sort of mulch, mulch, mulch, mulch - whatever that turns out to be.


Relevant here is Goldman’s (1983:36) distinction between access and causal theories of empirical knowledge:

Whereas the access metaphor conveys the impression that the mind must somehow make its way ‘to’ the external world, the spirit of the causal theory is that it suffices for the objects to ‘transmit information’ to the mind, via energy propagation, sensory transduction, and the like. In short, the direction of epistemic access is not from the mind to the object, but from the object to the mind. (Emphasis added)

Churchland and Churchland (1983:14) also discuss a causal account, though of the relation between the external world and brain states:

The backbone of what we are calling calibrational content is the observation that there are reliable, regular, standardized relations obtaining between specific neural responses on the one hand, and types of states in the world.

However, after discussing three problems with the causal analysis of knowledge, Churchland and Churchland (1983:13) add the caveat that:

a causal analysis must disappoint some of our original expectations regarding a general account of how epistemic, representing creatures ‘hook up’ to the world.

However, Goldman, no more than computational theorists, would endorse any simplistic account of access such as naive empiricism.4 Recall G. E. Moore’s (1939:183) supposed proof for an external world:

I can prove now. . . that two human hands exist. How? By holding up my hands, and saying, as I make a certain gesture with the right hand, ‘Here is one hand’, and adding, as I make a certain gesture with the left, ‘and here is another’. And if, by doing this, I have proved ipso facto the existence of external things, you will all see that I can also do it now in numbers of other ways. . .

Unfortunately, the eyes-as-windows-on-the-world is only a metaphor, and not a very good one at that (Tibbetts, 1990b). And just why are we rightfully suspect of Moore’s so-called proof of an external world? Because it presupposes the very common-sense realist ontology it sets out to establish. Half a century later, have we really extricated ourselves from the common-sense objects of Moore’s ontology? I think not. On my reading of the cognitive science literature, references to ‘enduring objects replete with color and shape’ (i.e., Ontology (II) expressions) bear a remarkable family resemblance to Moore’s naive realism.

4 Regarding an empiricist’s defense of an access theory of knowledge, Russell (1954; quoted in Hanson, 1958:50) even went so far as to claim that, The chain of causation can be traced by the inquiring mind from any given point backward to the creation of the world.


So just what is the difference between the common-sense ontology of a Moore, on the one hand, and the full-blown-enduring-objects ontology of a Churchland and Sejnowski, or the sensory-referred-properties-of-the-external-world of a Llinás, on the other? Is it that Moore could have known nothing of neural computations and networks? The computational theorists discussed above are obviously knowledgeable of what Moore was largely ignorant. Still, this does not stop such theorists from employing an Ontology (II)! (An interesting question I cannot pursue here is in what ways, if any, computational theorists’ Ontology (II) differs from Moore’s common-sense ontology?)

Now for a big question: Is it possible, even in principle, to formulate a computational model of mind and cognition with no reference whatsoever to Ontology (II) referents? Remember, I argued earlier that not only common-sense objects (hands, trees, etc. or, to use Moore’s phrase, “objects to be met with in space”) but even the external world (as in “enduring objects”) belong to Ontology (II), not to Ontology (I). By implication, then, the concept of ‘externality’ as deployed in the language of ED is not available to computational theorists. We could let computational theorists have the concept of ‘externality’ but only if they promise not to couple it with a naive realist ontology (as in ‘my surrounding fingers are external to the orange’ and ‘the orange is external to my sensory receptors’ - to use Llinás’ language). By implication, these prohibitions we set for computational theorists also extend to the concept of ‘internal.’ If seeing, for example, is going to be said to occur ‘within’ a brain, it cannot be ‘inside’ in the sense in which a Searlean translator is in a Chinese room or the orange is inside its skin.

As you are probably beginning to suspect, the concepts of ‘external’ and ‘internal’ as we commonly use them are embedded in an Ontology (II) that was around long before the Ontology (I) of computational theorists. Consequently, if we jettison Ontology (II) just how would we fix the semantics of ‘external’ and ‘internal’? Unfortunately, by posing the question in this way, we are doing what Bishop Berkeley once admonished philosophers for: ‘They raise a cloud of dust then complain they can’t see!’ The cloud of dust in our case may be the naive realism of Ontology (II). So perhaps we have it all backward concerning reality, cognition and representation. Hacking (1983:136) suspected as much:

First there is the human thing, the making of representations. Then there was the judging of representations as real or unreal, true or false, faithful or unfaithful. Finally comes the world, not first but second, third or fourth.

So we appear to have a dilemma: To mix Ontologies (I) and (II) only results in the conceptual confusion I documented in the first section of this


paper. On the other hand, it is indeed problematic whether Ontology (II) is reducible to the neurobiological language of Ontology (I). But without Ontology (II) where are we left regarding the ‘enduring objects’ and ‘external world’ that continue to figure so prominently in the accounts of computational theorists?

(II)

If we now readjust the magnification we see that what we have been discussing all along is the problem of representation. If we misstate the problem as, “How does a brain come to represent an external world?” we throw the entire issue back into the arena of ED. Churchland and Churchland (1983:13), for example, state the problem in just this way:

How. . . does neuroscience expect to deal with the question of how representational systems hook up to the world? For if it sees the brain as syntactic, then it does seem miraculous that a sequence of events in a herring gull’s brain results in its asking for food, or a sequence in a bee’s brain results in its taking a particular flight path to nectar-heavy blossoms.

Now the interesting question is: Can the representation issue even be addressed without presupposing some sort of ED? On one interpretation, unless representations (whether linguistic, pictorial or the pattern of neural weights in hidden units) represent something, then our representations would seemingly have no reference and therefore no semantic content. The methodological solipsism of a Fodor is the price to pay for this move! To quote Fodor (1980:486, 488) on this:

If mental processes are formal [i.e., ‘they apply to representations, in virtue of the syntax of the representation’] then they have access only to the formal properties of such representations as the senses provide. Hence, they have no access to the semantic properties of such representations, including the property of being true, of having referents, or, indeed, the property of being representations of the environment.

In response to this dilemma of two distinct ontologies, some theorists vacillate between some version of solipsism, on the one hand, and ED on the other. E.g., this is evident in Sterelny’s The Representational Theory of Mind (1990:34). First the solipsistic move:

In now current terminology, the processes that operate on mental representations are sensitive to the individualist or narrow properties of those representations. For instance, we probably have brain functions specialized for face recognition. That reader has no direct access to the distal causes of the stimulated states of my visual system. It must be honed to the relevant intrinsic features of my visual apparatus. For that’s its only input. (Emphasis added)


Now for Sterelny’s (1990:35,40) ED and the reintroduction of the ‘external world’ which causally initiates mental representations:

The causal origins of A [i.e., states of my visual cortex] play a direct causal role in mental processes only to the extent that they leave traces in A itself.. . [Accordingly,] a computational theory of cognition plus an account of the causal relations between mind and world explain how we can have representing minds. (Emphasis added)

Conversely, if representation is largely construed as an intra-cranial computational process, with no direct access to the causal ancestry of those representational states, then one is seemingly driven to a ‘neurological solipsism.’ However, a way out is suggested by Sterelny (1990:35), though it is underdeveloped.

Cognitive psychologists think that many cognitive processes are inference-like. Perception, for example, is often seen as a process in which hypotheses about the three-dimensional world are confirmed or disconfirmed . . . (Emphasis added)

Let us briefly pursue this cognition-as-hypothesis-generator model. The proposal that the brain/mind is a sort of inference engine is certainly not a new proposal (Helmholtz, 1867). It is a central feature of the computational model. For example, one representative text in cognitive psychology (Best, 1989:96) states that on a constructivist account, perception is possible only because of extensive neurological computations:

stimuli become informative only after the central nervous system has added its own processing into the recipe. In that sense, the categories we perceive as being out there in the world aren’t necessarily out there. . . [Stimuli] are inherently ambiguous and could be organized in any number of possible ways by the brain, with the result that how we look at the world and what we recognize would be markedly different.

Accordingly, our perception of an orange, say, is the result of a number of computational processes distributed over a number of areas in the visual and associative cortex. E.g., perception of the edge of the orange involves lateral inhibition or center-surround inhibition, which enhances contrast. For Fischler and Firschein (1987:219), this lateral inhibition is made possible by “neural circuits mathematically differentiating discontinuity in illumination.”
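The contrast-enhancing role of such inhibition can be seen in a toy one-dimensional sketch (the kernel and the illumination values below are invented for illustration and are not Fischler and Firschein’s circuit): each ‘cell’ is excited by its own input and inhibited by its neighbors, so the response is flat over uniform regions and swings sharply at the discontinuity in illumination, i.e., at the edge.

```python
import numpy as np

# A step in illumination: dim on the left, bright on the right.
illumination = np.array([1, 1, 1, 1, 5, 5, 5, 5], dtype=float)

# A toy center-surround (difference-of-neighbors) kernel: each 'cell' is
# excited by its center and inhibited by its two neighbors.
kernel = np.array([-0.5, 1.0, -0.5])

response = np.convolve(illumination, kernel, mode="same")
print(response)
# Away from the array boundaries the response is flat (0) in the uniform
# regions and swings to -2 and +2 at indices 3 and 4, where the illumination
# steps: the discontinuity, i.e., the edge, is what gets enhanced.
```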

In effect, then, this distributed account of cognition rejects the ‘unifying executive’ model of mind. As Paul Churchland (1988:95) remarks, on the computational approach, conscious intelligence does not emerge as having a single unifying essence, or a simple unique nature. Rather, intelligent creatures are represented as a loosely interconnected grab bag of highly computational procedures. . . Long-term natural selection makes it likely that surviving creatures enjoy a smooth interaction with the environment, but the internal mechanisms that sustain that interaction may well be arbitrary, opportunistic, and jury-rigged.


That perception of our orange is spread over distinct neural sites further reinforces the claim that objects are computationally constructed as (non-conscious) working inferences. Continuing with our orange example: V5 of the visual cortex is specialized for visual motion; cells in V4 are selective for specific wavelengths of light; cell assemblies in V3 are selective for form recognition but not for color; and so on (Zeki, 1993). The overall resulting computational inference is but one of a number of “best guesses” or abductive inferences based on past experience. Fischler and Firschein (1987:233) conclude that:

No finite organism can completely model the infinite universe, . . . the senses can only provide a subset of the needed information; the organism must correct the measured values and guess at the needed missing ones. In most organisms these guesses are made automatically by algorithms embedded in their neural circuitry, and are the best bet the organism can make based on the past experience of its species. . . Indeed, even the best guesses can only be an approximation to reality - perception is a creative process.

This conclusion nicely coincides with the remark made earlier by Churchland and Sejnowski (1992:142-143) that “the brain builds a model of the world it inhabits. That is, brains are world-modelers. . .” However, I would entirely disengage the notion of brains as world-modelers from any realist or dualist language-game assumptions!
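Fischler and Firschein’s ‘best bet’ talk can be given a concrete, if toy, form. The sketch below is only illustrative; the hypotheses, priors and likelihoods are invented numbers standing in for ‘the past experience of the species’: given an ambiguous sensory measurement, the system settles on the object hypothesis with the highest posterior probability.

```python
# Toy 'best guess' inference: priors and likelihoods are invented numbers
# standing in for accumulated past experience; they are not empirical values.
priors = {"orange": 0.6, "tennis_ball": 0.3, "sun_at_dusk": 0.1}

# Probability of the ambiguous sensory evidence ('roundish, orange-hued patch')
# under each object hypothesis.
likelihoods = {"orange": 0.7, "tennis_ball": 0.4, "sun_at_dusk": 0.9}

# Unnormalized posteriors, then normalization (Bayes' rule).
unnormalized = {h: priors[h] * likelihoods[h] for h in priors}
total = sum(unnormalized.values())
posteriors = {h: p / total for h, p in unnormalized.items()}

best_guess = max(posteriors, key=posteriors.get)
print(posteriors)   # orange ~0.67, tennis_ball ~0.19, sun_at_dusk ~0.14
print(best_guess)   # 'orange' -- the 'best bet' given these made-up numbers
```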

So far, my criticism of the computational theorists is that they have been far too conservative regarding the implications of the cognition-as-computational-inference (or CCI) thesis. If the brain is an inference maker, and object-talk such as Llinás’ orange is a computational inference, a best interpretation, then why stop here? Why should ‘stimuli,’ say, be taken as ontological givens rather than as inferential constructs? Surely the concepts of ‘distal’ or

In criticism of a unified theory of cognition and representation, Churchland and Churchland (1983:16) suggest that: More likely, we possess an integrated hierarchy of quite different computational/representational systems, facing very different problems and pursuing quite different strategies of solution.

Concerning whether cognition and representation are quasi-linguistically coded, Churchland and Churchland (1983:16) add that: The bulk of cognition may take place in other sub-systems, and follow principles inapplicable in the linguistic domain.

I am aware that this claim regarding perceptual beliefs as computational states has not gone unchallenged. E.g., Putnam (1987:15) argues that [there] are good arguments to show that mental states are not only compositionally plastic but also computationally plastic. . . I do not believe that even all humans who have the same belief. . . have in common a physical cum computational feature which could be ‘identified with’ that belief. The ‘intentional level’ is simply not reducible to the ‘computational level’ any more than it is to the ‘physical level.’


‘proximal stimuli’ are, at the very least, low-level theoretical terms. And what of action potentials, distributed processing, long-term potentiation, neuronal motility, and ontogenetic columns? Who would claim there is no inferential/theoretical basis to such explanatory concepts?

What does all of this suggest? That even Ontology (I) is a best interpretation, an “inference to the best explanation,” to use Harman’s (1973) expression. In other words, Ontology (I) is as much an inferential construct as Ontology (II), unless one wants to argue for inference-free, theory-neutral observation claims! (Tibbetts, 1990a).

I further claim that the concepts of ‘external world,’ ‘internal representations,’ etc. belong to one mapping schema, and the language of neural processes and vector transformations belongs to another. They are simply different explanatory accounts. The only thing in common between ‘world’ in the Cartesian dual-substance framework, and ‘world’ in the connectionist model, is the phonetics of the word. They are as different as ‘matter’ was for Aristotle and for Heisenberg.

Actually, my claim is even stronger than this example of ‘matter’ suggests. No one is an Aristotelian today. In our physics, we all inherited and internalized the paradigm shift initiated by Copernicus and Galileo. But have computational theorists achieved the same paradigm shift regarding the concept of the ‘external world’? Do they still retain some of the conceptual coordinates associated with ED? Perhaps the computational theorists discussed in Part (I) are not unlike those mariners who simultaneously employed a Ptolemaic model of a starry firmament revolving around a fixed earth, while rejecting the beliefs of the ancients.

Let me provide one example of sailing with two different sets of coordinates. In the following quotation from Jackendoff’s Consciousness and the Computational Mind (1987:132-133), notice how realist coordinates are slipped in:

Harman’s proposal is based on the following reasoning: From the empirical claim by Harman (1973:181) that there does not seem to be any basic level of experience not itself the product of inference, and used itself as data for inference to how things look, Harman (1973:184-185) draws the conclusion that, the data [of perception] cannot be provided by sensory experience, since that experience is constituted by representations which are themselves the products of inference of a more or less automatic sort.


How then can a mind relate its internal symbol systems, including conceptual structure, to the “world of non-symbols” . . .? Presumably, through the information about the world garnered by the senses, encoded in symbols specialized to particular sense modalities, then translated into expressions of conceptual structure. In other words, the connection our language has to Real Reality is explicated by psychological theories. . . [regarding how an observer] is constrained to internally represent [truth-conditions].

Conversely, on a consistent reading of the CCI (cognition-as-computational-inference) thesis, this ‘world of non-symbols’ or ‘external world of enduring objects’ (to use Churchland and Sejnowski’s language) is a best interpretation of what comes to be contingently labeled ‘sensory stimuli.’ As Best (1989:96) remarked above, the categories we employ to describe perception “tell us far more about how our brains work than they do about the factual nature of the world.” This claim is particularly obvious in cases where our empirical knowledge claims are underdetermined by sensory input alone. As Spelke (1990:121) notes, the CCI thesis is particularly evident in visual perception:

perceiving objects may be more akin to thinking about the physical world than to sensing the immediate environment. That suggestion, in turn, echoes suggestions from philosophers and historians of science that theories of the world determine the objects one takes to inhabit the world (Quine 1960; Kuhn 1962). Just as scientists may be led by their conceptions of biological activities and processes to divide living beings into organs, cells, and molecules, so infants [and adults] may divide perceived surfaces into objects in accord with implicit conceptions that physical bodies move as wholes, separately from one another, on connected paths.10

In spite of this ‘grammatical’ account of truth, Jackendoff is still reluctant to disengage this account from the notion of internal representations of an external world. As Jackendoff (1987:131-133) remarks, if both the “symbols” and the “world of non-symbols” are constrained by the nature of the mind and its construal of reality, it is hard to see the relation between them as not being similarly constrained. Truth too must be regarded as a characteristic of the world as construed. The notion ‘true’ [is] itself an element of conceptual structure. . . The argument here is that

‘true’ is a predicate entirely on a par with ‘grammatical’. . . But even Jackendoff (1987:133) is not prepared to relinquish the metaphor of the Cartesian theater:

[Accordingly,] we are replacing the notion of absolute satisfaction of truth-conditions with the judgement of satisfaction of truth-conditions as the observer is constrained to internally represent them.

10 I realize that Gibson (1979; 1967) would totally disagree with the conclusions drawn here. For Gibson there is no need to draw upon the unconscious inferences and algorithmic calculations associated with Helmholtz, Marr, Spelke or the CCI model. Empirical knowledge can be directly extracted from the ‘stimulus array’ without resorting to a priori principles, mental models, interpretive schemata, etc. Depth perception; three-dimensionality; size, shape and object constancy while we are in motion; all can be directly derived from ‘texture gradients’ in the stimulus array. Gardner (1985:309-310) observes that Gibson arrived at extreme skepticism about the whole computational approach. He objected to the notion of mental representations, mental operations, the processing (as opposed to the direct “pickup”) of information, and other cognate concepts. Inferences were completely unnecessary. Gardner (1985:311) goes on to conclude that, It is here that cognitive scientists of almost every stripe have locked horns with Gibson. As Marr [1982] phrased it, the detection of invariants is exactly and precisely an information-processing problem. . . The only way to understand how the detection works is to treat it as an information-processing problem.


Are we denying there is a so-called ‘external world’ which is the causal ancestry of sensory experience? In response, and rather than play the realist’s language game, I propose a shift from questions of ontology (or “What must the external world be like in order to account for our knowledge?”) to questions of cognition (or “How, on the CCI model, do we construct and reconstruct Ontologies (I) and (II)?”). To be fully consistent here, we could even go so far as to retreat from assigning ontological priority to computational neural states! This conclusion is clearly at odds with the one drawn by Paul Churchland (1988:34) concerning the ontological independence of neurological states:

The difference between a person who knows all about the visual cortex but has never enjoyed the sensation-of-red, and a person who knows no neuroscience but knows well the sensation-of-red, may reside not in what is respectively known by each (brain states by the former, nonphysical qualia by the latter), but rather in the different type, or medium, or level of representation each has of exactly the same thing: brain states.

Concerning this claim, we can return to the question we raised earlier: “Are propositions regarding ‘brain states’ based on direct observation or are they too theory-laden?” To be consistent with what he has claimed elsewhere, the former option is not available to Churchland (1988:47):

The fact is, all observation occurs within some system of concepts, and our observation judgments are only as good as the conceptual framework in which they are expressed.

And where do we move from here? Don’t we have to presuppose something? No, not if this entails an a priori, Archimedean fixed point.11 The expressions, “ontologically grounded referents,” and “the World” or, alternatively, “Reality,” sound very profound but exactly what do they denote? And who is to say just what these referents consist of? And just when in the history of neuroscientific theorizing are we to say, “Here is a framework-independent account of neural processes and what is entailed regarding Ontology (II)!”

11 Putnam (1987:20) once remarked: the idea that there is an Archimedean point, or a use of ‘exist’ inherent in the world itself, from which the question ‘How many [and what sort of] objects really exist?’ makes sense, is an illusion.


Given the CCI model of mind, answers to these questions are exercises in constructed ontologies.

Is this to say that propositions regarding these respective ontologies are merely exercises in fiction? Given the line of argument developed in this paper, a metaphysical realist would draw this conclusion. Alternatively, an epistemological consequence of the CCI thesis is that even the ‘real’-‘fictional’ distinction is only drawn within the context of cognition and inference. Given the CCI thesis, where else could this or any epistemological distinction be drawn? (For a very provocative discussion on this see Putnam’s (1981) paper, “Why there isn’t a ready made world.”)

Regarding the dichotomy between cognition and an external reality posited by ED, and the ‘objective reality’ associated with realism, Hacking (1983:139) proposes an account convergent with my own:

with the growth of knowledge we may, from revolution to revolution, come to inhabit different worlds. New theories are new representations. They represent in different ways and so there are new kinds of reality. So much is simply a consequence of my account of reality as an attribute of representation.

The way around ED, then, is not to give up talking about ‘external objects’ or an ‘external world’ but to recognize that such expressions are contingent constructs from very complex computational and inferential processes. There is no reason to assume that such constructs have any other epistemic status than as working hypotheses. They are in principle amenable to revision as evidence, explanatory models and one’s ontological commitments and practices mutually readjust and redefine one another over time.

In conclusion, then, I have argued that some proponents of the computational model of mind/brain continue to navigate with two sets of charts. Where the coordinates of one set of charts are, ostensively, aligned with a scientific, computational model of mind/brain, the other set is oriented towards a two-worlds dualist account associated with a questionable epistemology and metaphysics. It is not obvious how effectively a journey can proceed which employs such conflicting navigational bearings.

So, the admonition at the beginning of this paper from Descartes to suspend judgment concerning what he had previously taken for granted, clearly applies to our own cognitive science dualist and realist biases.

But it is not enough simply to have made a note of this; I must take care to keep it before my mind. For long-standing opinions keep coming back again and again, almost against my will; they seize upon my credulity, as if it were bound over to them by long use and the claims of intimacy.


REFERENCES

BEST, John B. 1989. Cognitive Psychology. St. Paul, MN: West Publishing Company.
CHURCHLAND, Paul M. 1989. A Neurocomputational Perspective. Cambridge: The MIT Press.
CHURCHLAND, Paul M. 1988. Matter and Consciousness. Cambridge: The MIT Press.
CHURCHLAND, Patricia S. and SEJNOWSKI, Terrence. 1992. The Computational Brain. Cambridge: The MIT Press.
CHURCHLAND, Patricia S. 1990. “Interview,” in Bill Moyers, A World of Ideas, vol. II. New York: Doubleday.
CHURCHLAND, Patricia S. 1989. Neurophilosophy. Cambridge: The MIT Press.
CHURCHLAND, Patricia S. and CHURCHLAND, Paul M. 1983. “Stalking the Wild Epistemic Engine.” Nous 17, 5-22.
FISCHLER, Martin and FIRSCHEIN, Oscar. 1987. Intelligence: The Eye, the Brain, and the Computer. Reading, MA: Addison-Wesley Publishers.
FLANAGAN, Owen. 1991. The Science of the Mind. 2/e. Cambridge: The MIT Press.
FODOR, Jerry. 1980. “Methodological Solipsism Considered as a Research Strategy in Cognitive Psychology.” Reprinted in D. Rosenthal, ed., The Nature of Mind. New York: Cambridge University Press.
GIBSON, James J. 1979. The Ecological Approach to Visual Perception. Boston: Houghton Mifflin.
GIBSON, James J. 1967. “New Reasons for Realism.” Synthese 17, 162-172.
GOLDMAN, Alvin I. 1992. Liaisons: Philosophy Meets the Cognitive Sciences. Cambridge: The MIT Press.
GOLDMAN, Alvin I. 1986. Epistemology and Cognition. Cambridge: Harvard University Press.
HACKING, Ian. 1983. Representing and Intervening. New York: Cambridge University Press.
HARMAN, Gilbert. 1973. Thought. Princeton: Princeton University Press.
HELMHOLTZ, Hermann von. 1867 (1962). Treatise on Physiological Optics. New York: Dover.
JACKENDOFF, Ray. 1987. Consciousness and the Computational Mind. Cambridge: The MIT Press.
KOSSLYN, Stephen and KOENIG, Oliver. 1992. Wet Mind. New York: The Free Press.
KOSSLYN, Stephen. 1983. Ghosts in the Mind’s Machine. New York: W. W. Norton.
LLINÁS, Rodolfo. 1987. “‘Mindness’ as a Functional State of the Brain,” in C. Blakemore and S. Greenfield, eds., Mindwaves. Cambridge, MA: Blackwell.
MARR, David. 1982. Vision. San Francisco: W. H. Freeman.
MOORE, George E. 1939. “Proof of an External World,” in W. Barrett and H. Aiken, eds., Philosophy in the Twentieth Century, vol. 2. New York: Harper and Row.
NEISSER, Ulrich. 1976. Cognition and Reality. San Francisco: W. H. Freeman.
PASTORE, Nicholas. 1971. Selective History of Theories of Visual Perception: 1650-1950. New York: Oxford University Press.
PUTNAM, Hilary. 1988. Representation and Reality. Cambridge: The MIT Press.
PUTNAM, Hilary. 1987. The Many Faces of Realism. LaSalle, IL: Open Court.
PUTNAM, Hilary. 1981. “Why there isn’t a ready made world,” reprinted in his Realism and Reason. Philosophical Papers, vol. 3. Cambridge, MA: Cambridge University Press.
SEARLE, John. 1992. The Rediscovery of the Mind. Cambridge, MA: The MIT Press.
SEARLE, John. 1984. Minds, Brains, and Science. Cambridge, MA: Harvard University Press.
SPELKE, Elizabeth S. 1990. “Origins of Visual Knowledge,” in D. Osherson, S. Kosslyn and J. Hollerbach, eds., Visual Cognition and Action. Cambridge, MA: The MIT Press.
TIBBETTS, Paul. 1990a. “Representation and the Realist-Constructivist Controversy,” in M. Lynch and S. Woolgar, eds., Representation in Scientific Practice. Cambridge, MA: The MIT Press.
TIBBETTS, Paul. 1990b. “Threading-the-Needle: The Case For and Against Common-Sense Realism.” Human Studies 13, 309-322.
ZEKI, Semir. 1993. A Vision of the Brain. London: Blackwell Scientific Publications.
