HOW TO THINK ABOUT MENTAL CONTENT

Frances Egan
Department of Philosophy
Center for Cognitive Science
Rutgers University
New Brunswick, NJ

AIMS OF TALK

To explicate the role that representational content plays in the cognitive sciences that propose to explain our representational capacities – in particular, computational cognitive psychology and computational neuroscience.

To situate this explanatory project with respect to so-called ‘intrinsic intentionality.’

THE PLAN FOR THE TALK

- Introduction: Representationalism
- The 'received view': hyper representationalism
- The Chomskian challenge: ersatz representationalism
- Two examples
- A third view (two kinds of content)
- Computational models and intrinsic intentionality

INTRODUCTION: REPRESENTATIONALISM

Most theorists of cognition endorse some version of representationalism, which I will understand as the view that the mind is an information-using system, and that human cognitive capacities are to be understood as representational capacities.

As a first pass, mental representations are “mediating states of an intelligent system that carry information.” (Markman & Dietrich, 2000)

They have two important features:

- Physically realized, and so have causal powers
- Intentional; they have meaning/content

Assumed here is a distinction between a representational vehicle, which is the causally efficacious physical state/structure (e.g. a string of symbols, a spike train), and its content.

EXAMPLE: ADDITION

Content level:  n, m → n + m
Vehicle level:  p1, p2 → p3
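To make the vehicle/content distinction concrete, here is a minimal Python sketch (my illustration, not part of the talk; the function names are hypothetical). The device traffics in uninterpreted bit-strings; only under an interpretation that maps those vehicles to numbers does its state transition count as computing addition.

```python
# Vehicles: causally efficacious physical states, modeled here as bit-strings.
# Content: what those states mean under an interpretation function.

def device_step(p1: str, p2: str) -> str:
    """A purely causal transition p1, p2 -> p3 at the vehicle level.
    (Here the 'physics' happens to realize binary addition.)"""
    return format(int(p1, 2) + int(p2, 2), "b")

def interpret(p: str) -> int:
    """An interpretation mapping vehicles (bit-strings) to numbers."""
    return int(p, 2)

p1, p2 = "101", "011"          # two vehicle tokens
p3 = device_step(p1, p2)       # the causal process yields a third
n, m = interpret(p1), interpret(p2)
assert interpret(p3) == n + m  # under the mapping: n, m -> n + m
print(f"{p1}, {p2} -> {p3}   interpreted as   {n}, {m} -> {n + m}")
```

The same physical transition carries the content n + m only relative to the interpretation.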

There is significant controversy about what counts as a representation. This issue concerns both vehicle and content, but I will focus on content.

Recall Markman & Dietrich's definition: representations are "mediating states of an intelligent system that carry information."

Question: how is the notion of carrying information to be understood?

THE 'RECEIVED VIEW': HYPER REPRESENTATIONALISM

Most philosophers of mind (e.g. Fodor) take a particularly strong view of the nature of the representations that figure in cognitive scientific theorizing. On this view, which I shall dub 'hyper representationalism', these representations are not only essentially intentional, but they have their particular intentional contents essentially.

To say that representations have their contents essentially is to say that if a particular cognitive representation had a different content, or no content at all, it would be a different (type of) representation altogether.

The hyper representationalist construal of cognitive scientific representations allows for the possibility of misrepresentation.

E.g., a frog can misrepresent a BB as a fly precisely because the representation has a specific content – it's a fly-representation and not a moving-black-dot-representation.

Hyper representationalism also requires that some naturalistic relation hold between the representing mental state/structure and the represented object or property, though just what relation has been a matter of dispute:

The relation might be causal (‘information-theoretic’); it might be teleological (i.e. evolutionary function), or some other relation that is specifiable in non-semantic and non-intentional terms.

Why must the content-determining relation be naturalistic? Only if the relation is specifiable in naturalistic (i.e. non-semantic and non-intentional) terms will computational cognitive science deliver on its promise to provide a fully mechanical account of the mind, and provide the basis for a naturalistic account, not only of cognitive capacities, but also of intentionality.

The idea is that intentionality is not a fundamental property of the natural world.

It is this promise that accounts for much of the interest among philosophers of mind in computational cognitive science: it seems to offer a naturalistic reduction of intentionality.

To summarize, Hyper Representationalism requires:

1) That mental representations have their contents essentially
2) That misrepresentation is possible
3) That content is determined by a privileged naturalistic property or relation.

THE CHOMSKIAN CHALLENGE: ERSATZ REPRESENTATIONALISM

Noam Chomsky has argued that the so-called ‘representational’ states invoked in accounts of our cognitive capacities should not be construed as about some represented objects or properties.

The notion of ‘representation’ when used in computational contexts, Chomsky claims, should not be understood relationally, as in “representation of x”, but rather as specifying a monadic property, as in “x-type representation.”

So understood, the individuating condition of a given internal structure is not its relation to an ‘intentional object’, there being no such thing according to Chomsky, but rather its (causal) role in cognitive processing.

Reference to what looks to be an intentional object is simply a convenient way of type-identifying structures with the same role in cognitive processing.

Chomsky rejects categorically the idea that intentional attribution plays any explanatory role in cognitive science. Characterizing a structure as ‘representing an edge’ is just loose talk, at best a convenient way of sorting structures into kinds determined by their role in processing. We shouldn’t conclude that the structure is a representation of anything, Chomsky cautions; to do so would be to conflate the theory proper with its informal presentation.

One of Chomsky’s motivations in promoting non-relational representation is to dispel talk of cognitive systems ‘solving problems’, and related talk of ‘misperception’, ‘misrepresentation’, and ‘error’.

Chomsky claims such intentional and normative notions reflect our parochial interests and have no place in scientific theorizing about cognition.

I will argue that Chomsky needs to retain a genuinely relational notion of representation if he is to preserve the idea that cognitive theories explain some cognitive capacity or competence, since these, the explananda of such theories, are described pre-theoretically in intentional terms.

I will claim that his ersatz notion of representation does not allow him to do this.

In the rest of the talk I will sketch the notion of representation that computational cognitive science both needs and actually uses.

It is neither Hyper nor Ersatz Representationalism, but it captures what is right about Chomsky's claim that representationalist talk is "informal presentation, intended for general motivation."


EXAMPLE #1: MOTOR CONTROL OF HAND MOVEMENT

[Figure courtesy of Reza Shadmehr]

HOW DOES THE BRAIN COMPUTE THE LOCATION OF THE HAND?

Forward kinematics: computing location of the hand in visual coordinates from proprioceptive information from the arm, neck, and eye muscles

f(θ) = Xee
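As a concrete sketch (my illustration: a two-joint planar arm with assumed link lengths, not the model presented in the talk), forward kinematics f(θ) = Xee might be implemented as:

```python
import numpy as np

# Hypothetical forward kinematics f(θ) = Xee for a two-joint planar arm:
# joint angles θ = (shoulder, elbow) -> hand (end-effector) location Xee.
L1, L2 = 0.30, 0.33   # assumed upper-arm and forearm lengths (meters)

def forward_kinematics(theta):
    s, e = theta      # shoulder and elbow angles, in radians
    return np.array([L1 * np.cos(s) + L2 * np.cos(s + e),
                     L1 * np.sin(s) + L2 * np.sin(s + e)])

print(forward_kinematics(np.radians([45.0, 60.0])))  # Xee = [x, y]
```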

COMPUTING A HIGH LEVEL PLAN OF MOVEMENT

Computing the difference vector – the displacement of the hand from its current location to the target’s location

Xt – Xee = Xdv

HOW IS THE PLAN TRANSFORMED INTO MOTOR COMMANDS?

Inverse kinematics/dynamics: The high-level motor plan, corresponding to a difference vector, is transformed into joint angle changes and force motor commands.

f(Xdv) = Δθ (inverse kinematics)

SUMMARY OF NECESSARY COMPUTATIONS

f(θ) = Xee (forward kinematics)
Xt – Xee = Xdv (difference vector)
f(Xdv) = Δθ (inverse kinematics)
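Putting the three computations together, here is a hedged end-to-end sketch for the same hypothetical two-joint arm: forward kinematics yields Xee, subtraction yields the difference vector Xdv, and repeated Jacobian-pseudoinverse steps approximate the inverse kinematics f(Xdv) = Δθ. The geometry, the gain, and the pseudoinverse method are illustrative choices, not the Shadmehr/Wise model itself.

```python
import numpy as np

L1, L2 = 0.30, 0.33   # assumed link lengths (meters)

def forward_kinematics(theta):
    """f(θ) = Xee: joint angles -> hand location."""
    s, e = theta
    return np.array([L1 * np.cos(s) + L2 * np.cos(s + e),
                     L1 * np.sin(s) + L2 * np.sin(s + e)])

def jacobian(theta):
    """Sensitivity of hand location to small joint-angle changes."""
    s, e = theta
    return np.array([[-L1 * np.sin(s) - L2 * np.sin(s + e), -L2 * np.sin(s + e)],
                     [ L1 * np.cos(s) + L2 * np.cos(s + e),  L2 * np.cos(s + e)]])

theta = np.radians([45.0, 60.0])   # current joint configuration
Xt = np.array([0.35, 0.40])        # target location (assumed reachable)

for _ in range(100):
    Xee = forward_kinematics(theta)                 # f(θ) = Xee
    Xdv = Xt - Xee                                  # Xt - Xee = Xdv
    dtheta = np.linalg.pinv(jacobian(theta)) @ Xdv  # f(Xdv) = Δθ
    theta = theta + 0.5 * dtheta                    # small corrective step

print(np.round(forward_kinematics(theta), 4), "≈", Xt)
```

Note that nothing in the code mentions hands or targets as such; under a different embedding the same vector arithmetic could subserve a different capacity, a point pressed below.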

EXAMPLE #2: MARRIAN FILTER


SOME MORALS

The Shadmehr/Wise model explains our capacity to grasp nearby objects. The Marrian filter helps explain our ability to see 'what is where' in the nearby environment.

It is assumed from the outset that we are successful at these tasks; crucially, this success is the explanandum of the theory.

The question for the theorist is how we manage to do it.

In both examples, the theory specifies the function computed by the mechanism. This mathematical characterization – what I call a function-theoretic characterization – gives us a deeper understanding of the device.

We already understand such mathematical functions as vector subtraction, Laplacean of Gaussian filters, integration, etc.

And we already have some idea of how such functions can be executed.

A function-theoretic characterization is 'environment-neutral': the task is characterized in terms that prescind from the environment in which the mechanism is normally deployed.

E.g., the Marrian mechanism would compute the same function even if it were to appear (per mirabile) in an environment where light behaves differently than on earth, or in a brain in a vat.

It would also compute the same function if it were part of a different cognitive system – say, the auditory system. (Environment-independence includes the internal environment in which it is normally embedded.)

It is not implausible to suppose that each sensory modality has one of these Marrian filters, since it just computes a particular curve-smoothing function, a computation that may be put to a variety of different uses in different contexts.

The function-theoretic characterization thus provides a domain-general characterization of the mechanism underlying the cognitive capacity or competence.

There is, of course, a kind of content that is essential in the two accounts: mathematical content.

Inputs to the visual filter represent numerical values over a matrix; outputs represent rate of change over the matrix. Inputs to the motor control mechanism represent vectors and outputs represent their difference. More generally, inputs represent the arguments and outputs the values of the mathematical function that canonically specifies the task executed by the device in the course of exercising the competence or capacity.

It is not clear how this mathematical content could be naturalized, or why this would be desirable.

Certainly, the legitimacy of computational theorizing does not depend on it.

A CRUCIAL QUESTION

But how does computing the specified mathematical function explain how the mechanism manages to perform its cognitive task, e.g. enabling the subject to grasp an object in nearby space, or to see 'what is where' in the nearby environment?

ANSWER

The function-theoretic description provides an environment-neutral, domain-general characterization of the mechanism. But the theorist must still explain how computing this function, in the subject's normal environment, contributes to the exercise of the particular cognitive capacity that is the explanatory target of the theory. For only in some environments would computing the Laplacean of a Gaussian help us to see:

In our environment, this computation produces a smoothed output that facilitates the detection of sharp intensity gradients across the retina; when such gradients co-occur at different scales, they correspond to physically significant boundaries in the visual scene.
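For concreteness, a minimal sketch of the computation (my illustration; the synthetic image and the scale parameter sigma are assumptions) using SciPy's Laplacian-of-Gaussian filter. The response is a smoothed second derivative of the intensity matrix, and its zero-crossings flag sharp intensity gradients:

```python
import numpy as np
from scipy.ndimage import gaussian_laplace

# Synthetic 'retinal' input: a matrix of intensity values with a step edge.
image = np.zeros((64, 64))
image[:, 32:] = 1.0              # dark on the left, bright on the right

response = gaussian_laplace(image, sigma=2.0)   # Laplacian-of-Gaussian output

# Zero-crossings of the response mark sharp intensity gradients; when they
# coincide across several scales (values of sigma), they are likely to mark
# physically significant boundaries in the scene.
zero_crossings = np.sign(response[:, :-1]) * np.sign(response[:, 1:]) < 0
print(np.where(zero_crossings[32])[0])  # columns flanking the edge near x = 32
```

Note that the code is 'environment-neutral' in exactly the talk's sense: it computes the same curve-smoothing function whatever the matrix happens to measure.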

One way to make this explanation perspicuous is to talk of inputs and outputs of the mechanism as representing light intensities and discontinuities of light intensity respectively; in other words, to attribute contents that are appropriate to the relevant cognitive domain, in this case, vision.

At some point the theorist needs to show that the computational/mathematical characterization addresses the explanandum with which she began.

And so theorists of vision will construe the posited mechanisms as representing properties of the light, e.g. light intensity values, changes in light intensity, and, further downstream, as representing changes in depth and surface orientation.

Similarly, theorists of motor control will construe the mechanisms they posit as representing positions of objects in nearby space and changes in body joint angles.

We will call these contents that are specific to the cognitive task being explained cognitive contents.

We will call the specification of these contents the cognitive interpretation.

Cognitive contents are assigned primarily for explicative/elucidatory purposes. They show how, in context, the mathematically characterized device contributes to the exercise of the cognitive capacity to be explained.

COGNITIVE CONTENTS

(1) The explanatory context fixes the domain of cognitive interpretation. Content assignment is constrained primarily by the pre-theoretic explanandum, that is, by the cognitive capacity that the theory aims to explain. Thus, a vision theorist assigns visual contents to explain the organism's capacity to see 'what is where' in the scene, and so the theorist must look to properties that can structure the light in appropriate ways.

(2) No naturalistic relation is likely to pick out a single, determinate content. Any number of relations may hold between the representing state or structure and the object or property to which it is mapped in the cognitive interpretation. But it is unlikely that a naturalistic relation determines a unique representational content, as Hyper Representationalism requires.

(3) Use is crucial. Even if some naturalistic relation were to uniquely hold between the posited structures and elements of the target domain, the relation would not be sufficient to determine their cognitive contents. The structures have their cognitive contents only because they are used in certain ways by the device, ways that facilitate the cognitive task in question.

The fact that tokenings of the structure are regularly caused by some distal property tokening – and so can be said to 'track' that property – is part of the explanation of how a device that uses the posited structure can accomplish its cognitive task, but the causal relation (or an appropriate homomorphism) would not justify the content ascription in the absence of the appropriate use.

(4) In addition to the explanatory context of explaining a particular cognitive capacity, other pragmatic considerations play a role in determining cognitive contents. Given their role in explanation, candidates for cognitive content must be salient or tractable. In general, contents are assigned to internal structures constructed in the course of processing mainly as a way of helping us keep track of the flow of information in the system.

Put more neutrally, cognitive contents help us keep track of changes in the system caused by both environmental events and internal processes, all the while with an eye on the cognitive capacity that is the explanatory target of the theory.

Cognitive contents thus play mainly an expository role. The assignment of these contents will be responsive to such considerations as ease of explanation, and so may involve considerable idealization and abstraction.

(5) The assignment of cognitive content allows for misrepresentation, but only relative to a particular cognitive task or capacity. The cognitive interpretation which assigns visual contents may assign a content – say, edge – to a structure which is occasionally tokened in response to a shadow or some other distal feature. The mechanism misrepresents a shadow as an edge.

In this sort of case, the mechanism computes the same mathematical function it always does, but in an 'abnormal' environment computing this mathematical function may not be useful for executing the cognitive capacity. Misrepresentation is something we attribute to the device when, in the course of doing its usual mathematical task (given by the function-theoretic description), it fails to accomplish the cognitive task that is the explanatory target of the theory.

(6) The representational structures posited by the computational theory do not have their cognitive contents essentially. If the mechanism which is characterized in mathematical terms by the theory were differently embedded in the organism, perhaps subserving a different cognitive capacity, then the structures would be assigned a different cognitive content.

By the same token, if the subject's normal environment were different, so that the use of these representational structures would not in this environment facilitate the specified cognitive task, then these structures might be assigned no cognitive content at all. And the various pragmatic considerations mentioned earlier might motivate the assignment of different cognitive contents to the structures.

(7) Cognitive contents are not part of the computational theory proper – they are part of the 'intentional gloss'. The theory proper comprises a specification of (i) the mathematical function(s) computed by the device (what I am calling the function-theoretic characterization), and (ii) the specific algorithms involved in the computation of this function, including (iii) the representational structures that the algorithm maintains, and (iv) the computational processes defined over these structures.

These four elements provide an environment-independent characterization of the device. They have considerable counterfactual power: they provide the basis for predicting and explaining the behavior of the device in any environment, including environments where the device would fail miserably at computing the cognitive function it computes in its normal environment.

But since the theory must explain the organism's success at computing the cognitive function in its normal environment, it must also advert to general facts about that environment that explain why computing the mathematical function specified by the theory suffices, in context, for the exercise of the cognitive capacity.

Thus the 'theory proper' will also include (v) such environment-specific facts as that a coincidence of sharp intensity gradients at different scales is likely to be physically significant, corresponding to object boundaries or reflectance or illumination changes in the world.

Cognitive contents, on the other hand, are not part of the essential characterization of the device, and are not fruitfully regarded as part of the computational theory proper. They are ascribed to facilitate the explanation of the target cognitive capacity and are sensitive to a host of pragmatic considerations. Hence, they form what I am calling an intentional gloss, a gloss that shows, in a perspicuous way, how the computational/mathematical theory manages to explain the intentionally described explanandum with which we began, and which it is the job of the theory to explain.

COMPARISON WITH HYPER REPRESENTATIONALISM

Computationally characterized cognitive states do not have their representational contents (what I am calling their cognitive contents) essentially.

They do not stand in a robust, naturalistic relation to what they are about.

But they do satisfy the second HR constraint: they can misrepresent, but only relative to 'success criteria' provided by the cognitive capacity to be explained (i.e. by the pre-theoretic explanandum).

COMPARISON WITH CHOMSKY'S ERSATZ REPRESENTATIONALISM

Computational states are construed relationally in the theory proper as representations of: they are representations of mathematical objects, and they have their mathematical contents essentially.

This is just to say that a computational characterization is a function-theoretic characterization. To characterize a mechanism as computing a function (in the mathematical sense) just is to construe its inputs and outputs as representing (respectively) the arguments and values of that function.

I mentioned earlier that Chomsky needs to retain some notion of representation to preserve the idea that the theory describes a cognitive capacity or competence. The characterization of a phenomenon as a competence or a capacity is itself normative, given pre-theoretically, prior to the theory's characterization of the computational mechanism underlying the competence. The mechanisms must bear some fairly transparent relation to these capacities.

The assignment of appropriate representational contents to the structures posited by the theory – taking them to be representations of edges or joint angles or noun phrases – secures that connection between the mechanisms described in the theory in mathematical (and physical) terms, and the cognitive capacity (competence) that is the explanatory target.

Chomsky is right that such notions as misrepresentation and error are not part of the theory proper. But it doesn't follow that these notions are dispensable. Maybe it is rather parochial of us to want to see the processes described in non-intentional terms in the theory as constituting the exercise of a capacity or competence, and the exercise of that capacity in certain circumstances as mistakes. But there is no reason why science shouldn't aim to explain the features of our experience that interest us, even if it tells us that these features do not go very deep.


COMPUTATIONAL MODELS AND INTRINSIC INTENTIONALITY

Mental states are said to have their meaning intrinsically, in their own right. Other things that have meaning – utterances, marks on the page – have their meanings only derivatively, in virtue of conventions governing their use. Much of the interest among philosophers of mind in computational cognitive science is based in the hope that it will go some way toward explaining the intrinsic intentionality of mental states.

I have argued that computational states do not have the earmarks of intrinsic intentionality.

States with intrinsic intentionality have determinate content. Computational states have determinate content only by appeal to various pragmatic considerations such as ease of explanation and connections to our commonsense practices. We do not find the grounds of determinacy in the computational theory itself.

States with intrinsic intentionality are said to have their contents essentially. But at most, computational states have their mathematical content essentially. We might be tempted to conclude that they really represent not distal objects and properties, but mathematical objects. But I don’t think this is the right way to think of things. The canonical characterization of a mechanism given by a computational theory is function-theoretic, and to characterize a mechanism in these terms just is to construe its inputs and outputs as representing the arguments and values of a function.

But even this ‘essential’ mathematical content has a pragmatic rationale – it reflects the commitment of computational practice to providing a domain-general characterization of a cognitive competence in terms of a ‘toolbox’ of mathematical functions that are already well understood. The fact that the structures are assigned mathematical content in the theory doesn’t preclude them from being assigned different cognitive contents in the gloss. Multiple content ascriptions serve different explanatory purposes.

The idea that computational cognitive science is looking for, or fixing on, the content of mental states finds no support in actual theorizing.

Given that computational cognitive science aims to provide a foundation for the study of cognition, it should not be surprising that we don't find full-blooded intrinsic intentionality in the theories themselves. How could they advert to it, and yet purport to be explanatory of it?

We should not look for intrinsic intentionality in computational models, any more than we should look for representations that have their cognitive contents essentially – structures that have their representational roles independently of how they are used in specific environments.

Computational cognitive science has reductive ambitions – it aims to explain the representational capacities of minds, without simply assuming these representational capacities. It would be an explanatory failure if computational models posited states and structures that have all the marks of intrinsic intentionality. The ultimate goal is to explain how meaning and intentionality fit into the natural world. Explanatory progress is not made by positing more of the same mysterious phenomenon.

I am not suggesting that computational cognitive science has succeeded in reducing content and intentionality. Computational theories appeal to unreduced mathematical content. But we can see how a well-confirmed computational theory that included an account of how a mechanism is realized in neural structures would make some progress toward a reductive explanation of intentionality in the cognitive domain in question.

What we normally think of as representational contents – contents defined on distal objects and properties – are not in the theory. They are in the explanatory gloss that accompanies the theory, where they are used to show that the theory addresses the phenomena for which we sought an explanation. The gloss allows us to see ourselves as solving problems, exercising rational capacities, occasionally making mistakes, and so on. It characterizes the computational process in ways congruent with our commonsense understanding of ourselves, ways that the theory itself eschews.

A parallel: If we ever succeed in solving the so-called ‘hard problem’ of consciousness – providing a reductive explanation of phenomenal experience – we will undoubtedly need a phenomenal gloss that employs phenomenal concepts – concepts eschewed by the theory itself – to show that the theory addresses the phenomenon that is its explanatory target (i.e. to bridge the explanatory gap).

THANK YOU!