The problem of consciousness

To be conscious is to be able to have some kind of subjective experience or awareness of something.1 We can only experience something if we are conscious, and if we are conscious it means we can have experiences. Conscious creatures can experience something external in the environment or something internal to the body. It can be the experience of a feeling or of a thought of any type. An experience is positive when the subject enjoys it, is satisfied with it, or is pleased by it. It is negative when it involves some form of suffering. To suffer is to have a negative experience.

All emotions and feelings that we have are experiences, and we can also have experiences that are caused only by our thoughts. We can have these experiences insofar as we are conscious; indeed, having experiences is itself, as noted above, what it is to be conscious.

The word “sentience” is sometimes used instead of “consciousness.” Sentience refers to the ability to have positive and negative experiences caused by stimuli affecting our body from outside or by sensations arising within it. The difference in meaning between sentience and consciousness is slight. All sentient beings are conscious beings, though a conscious being may not be sentient if, through some damage, she has become unable to receive any sensation from her body or from the external world and can only have experiences of her own thoughts.

When a creature has an experience, there exists in that creature what we can call a subject, that is, a “someone” who is having the experience, an “I” who is conscious. The word “subjective,” which describes inner, or personal, experiences, refers to this subject. A subject is a someone, one who experiences their world, as an animal does. An object is a thing that does not experience its world. A chicken is a subject of experience, whereas a rock is not. If you pet a chicken, she will feel pleasure. If you pet a rock, there is no one there to feel anything.

The question we have to answer is: what sorts of creatures are sentient (and, therefore, conscious)? Or, put another way, what kind of physical structure and arrangement of nerve cells does a creature have to have so that it isn’t merely a collection of cells, but a conscious being?2


What is the problem of consciousness?

The problem of consciousness can be formulated as follows: how is it that, from a purely material basis (a brain or a centralized nervous system), consciousness emerges?3 This is what the problem of consciousness really boils down to. Answering this requires answering the question: what structures must be present in an organism, and how would they function, for consciousness to be possible? In other words, of all the different ways that the bodies of animals are arranged, which ones contain structures and arrangements that give rise to consciousness? There is no reason to suppose that only a human-like central nervous system will give rise to consciousness, and there is a great deal of evidence that very different types of animals are conscious. Bird brains, for example, have many structural similarities to mammalian brains but different arrangements of neurons; yet their circuits seem to be wired in a way that creates a similar effect in terms of consciousness and cognition. An octopus is an invertebrate with a very different type of nervous system, but an octopus exhibits behavior and responds to her environment like a conscious creature.

Why is it that only beings with a centralized nervous system are sentient?

We don’t yet know what causes consciousness to arise, and until we know this, we can’t know which creatures will be sentient. But we do know that, in the absence of at least a centralized nervous system, consciousness will not arise in an animal. By this we must understand a nervous system that not only transmits information, but also has a brain or ganglia that process it. We know that creatures lacking a centralized nervous system cannot be conscious. Non-centralized nervous systems do transmit information about damage in some part of the organism, but this information does not result in a conscious experience, because there is no bodily structure in which a sufficiently large aggregate of nerve cells interacts to process an experience, as opposed to merely transmitting the information. It is the processing of information that produces the experience. Processing or computing information is not by itself an indication of consciousness, but consciousness seems to be impossible if no processing occurs.

Reflex arcs: How a nervous system operates without giving rise to an experience

In our bodies, if our knee is lightly tapped, our leg moves automatically (with no intention on our part) and independently of the experience of the tap that we sense. The information that originates in our knee with the tap splits up and moves through two separate pathways: one path goes to our brain through the spinal cord, where it is processed to produce the corresponding experience; the other path involves a different circuit, going through the spinal cord to the muscles that operate the leg, without ever reaching the brain. In this second path, the information takes a much shorter, direct route (the ‘reflex arc’) to enable our body to react quickly to the stimulus. There is a good reason why this dual mechanism exists. There are cases where some part of the body will be endangered by a slow reaction to an external threat. If we had to think about moving because of pain, rather than responding automatically, we might not act quickly enough to avoid harm.


What is relevant here is that the information transmitted through this ‘reflex arc’ is never experienced, because it is never processed by a central nervous system. The non-centralized nervous systems of some animals operate just as reflex arcs do. Information is transmitted from the cells receiving certain stimuli to other cells which must be activated, without any involvement of subjective experience. In these cases there is a merely mechanical transmission of information. Such reactions are not an indication of sentience.

For this reason, we can rule out the hypothesis that beings without a centralized nervous system are sentient, just as we can for organisms lacking a nervous system altogether (see Which beings are not conscious).

What is known about the emergence of consciousness?

How do the structures and arrangements of different centralized nervous systems operate to give rise to consciousness? We don’t know.

Currently, researchers are trying to identify the neural correlates of consciousness in humans. The neural correlates of consciousness are the “neural events,” i.e., the ways that sets of neurons operate when a given mental operation occurs.4 In connection with this, researchers are studying human subjects who have suffered brain lesions and who have, as a consequence, lost some aspects of consciousness. These studies are in their infancy, and it will take a very long time before we have a solid understanding of the neural correlates of consciousness.

Knowing what operations take place in a nervous system when some experience occurs does not explain how those operations create the experience. And the neural correlates of a certain type of experience might be different in different types of animals, like birds, cetaceans, and cephalopods. We just don’t know. Such early research can provide only limited knowledge, and while the problem of what consciousness is and how it arises remains unsolved, speculations about how centralized nervous systems produce experiences will remain open to revision.

Due to the difficulty of solving the problem of consciousness, those who study it agree that it is unlikely to be solved in the near future. Given what we know today, we can only make rough estimates about which beings are more or less likely to be sentient, and we can confidently assert that certain creatures are not sentient. Given current information, it is impossible to know with certainty which beings with centralized nervous systems are conscious. We know that without such a system there cannot be consciousness, but we do not know what degree of complexity such a system must possess for consciousness to emerge. We cannot know exactly which beings can have experiences until we know exactly what physical basis is necessary for consciousness, and therefore for experiences. And we cannot answer this question until we solve the problem of how consciousness arises.


The significance of having experiences that are positive or negative

In order to determine which beings to give moral consideration to, we must consider that beings who have experiences as a result of the evolutionary process can have both positive and negative experiences.5 If there were beings who had only positive or only negative experiences, these beings would also deserve moral consideration.

There could also be entities that have experiences that are neither positive nor negative. There is a difference between the capacity to have experiences in general and the capacity to have positive or negative experiences specifically. It may be possible to create a computer that can have experiences yet is indifferent to those experiences. Its experiences would be neither positive nor negative; the computer wouldn’t care whether it has them or not. Such a computer would also be indifferent towards its own continued existence. Because it would lack positive and negative experiences altogether, the computer wouldn’t care how we treated it. Regardless of what we did to the computer, it would be impossible for us to harm or help it. If it were in any way pleased about the prospect of continuing to exist, or upset at the thought of its own death, the computer would then count as having positive or negative experiences, and would have to be considered a different kind of entity, one that is sentient.

We know that sentient animals, human and nonhuman, have experiences that are positive or negative. Since the problem of consciousness will likely remain unsolved for many decades, we should act on the assumption that any animal with a centralized nervous system may be sentient. Given the likelihood that they are sentient, and that we can affect them through our actions, we should give them moral consideration.

Further readings

Barron, A. B. & Klein, C. (2016) “What insects can tell us about the origins of consciousness”, PNAS, 113, pp. 4900-4908 [accessed on 24 December 2016].

Chalmers, D. J. (1996) The conscious mind: In search of a fundamental theory, Oxford: Oxford University Press.

Chalmers, D. J. (2003) “Consciousness and its place in nature”, in Stich, S. P. & Warfield, T. A. (eds.) Blackwell guide to philosophy of mind, Oxford: Blackwell, pp. 102-142.

Feinberg, T. E. & Mallatt, J. M. (2016) The ancient origins of consciousness: How the brain created experience, Oxford: Oxford University Press.

Gennaro, R. J. (2005) “Consciousness”, Internet Encyclopedia of Philosophy [accessed on 13 November 2013].

Godfrey-Smith, P. (2016) Other minds: The octopus, the sea, and the deep origins of consciousness, New York: Farrar, Straus and Giroux.

Gray, R. (2003) “Recent work on consciousness”, International Journal of Philosophical Studies, 11, pp. 101-107.

Gregory, R. L. (ed.) (2001) Oxford companion to the mind, Oxford: Oxford University Press.

Honderich, T. (2004) On consciousness, Oxford: Oxford University Press.

Hurley, S. L. (1998) Consciousness in action, Oxford: Oxford University Press.

Ito, M.; Miyashita, Y. & Rolls, E. T. (1997) Cognition, computation, and consciousness, Oxford: Oxford University Press.

Jackendoff, R. S. (1987) Consciousness and the computational mind, Cambridge: MIT Press.

Kriegel, U. (2006) “Theories of consciousness”, Philosophy Compass, 1, pp. 58-64.

Lloyd, D. (2004) Radiant cool: A novel theory of consciousness, Cambridge: MIT Press.

Lormand, E. (1996) “Consciousness”, Routledge Encyclopedia of Philosophy [accessed on 26 November 2013].

Lycan, W. G. (1987) Consciousness, Cambridge: MIT Press.

Lycan, W. G. (1996) Consciousness and experience, Cambridge: MIT Press.

McGinn, C. (2004) Consciousness and its objects, Oxford: Oxford University Press.

Metzinger, T. (1995) “Introduction: The problem of consciousness”, in Metzinger, T. (ed.) Conscious experience, Exeter: Imprint Academic, pp. 3-37.

Minsky, M. (2006) The emotion machine: Commonsense thinking, artificial intelligence, and the future of the human mind, New York: Simon & Schuster.

Nadel, L. (ed.) (2003) Encyclopedia of cognitive science, London: Nature Publishing Group.

Nelkin, N. (1996) Consciousness and the origins of thought, Oxford: Oxford University Press.

O’Shaughnessy, B. (2000) Consciousness and the world, Oxford: Oxford University Press.

Notes

1 Nagel, T. (1974) “What is it like to be a bat?”, Philosophical Review, 83, pp. 435-450.

2 It seems perfectly possible that a structure different from the neural wiring of sentient animals would be able to perform analogous functions. Therefore, it is in principle possible that there could be minds that are not organic, although in our world, currently at least, only animals with centralized nervous systems are conscious.

3 Chalmers, D. J. (1996) The conscious mind: In search of a fundamental theory, Oxford: Oxford University Press.

4 Rees, G.; Kreiman, G. & Koch, C. (2002) “Neural correlates of consciousness in humans”, Nature Reviews Neuroscience, 3, pp. 261-270. Block, N. (2005) “Two neural correlates of consciousness”, Trends in Cognitive Sciences, 9, pp. 46-52.

5 Griffin, D. R. (1981) The question of animal awareness: Evolutionary continuity of mental experience, New York: Rockefeller University Press. Cabanac, M.; Cabanac, A. J. & Parent, A. (2009) “The emergence of consciousness in phylogeny”, Behavioural Brain Research, 198, pp. 267-272. Grinde, B. (2013) “The evolutionary rationale for consciousness”, Biological Theory, 7, pp. 227-236. Ng, Y.-K. (1995) “Towards welfare biology: Evolutionary economics of animal consciousness and suffering”, Biology and Philosophy, 10, pp. 255-285.


Criteria for recognizing sentience

There are three general criteria for deciding whether a being is sentient. These involve considerations that are (1) behavioral, (2) evolutionary, and (3) physiological.

Behavior

When we experience suffering or enjoyment, we tend to behave in certain ways. We grimace, we cry, we groan… And the same is true of other sentient beings. This applies to both human beings and a large number of nonhuman animals. Behavior of this sort indicates that those who behave in these ways are having positive or negative experiences.1

There are, furthermore, certain types of behavior that may lead us to suppose that a creature might be having such experiences, namely those that demonstrate an understanding of beneficial or harmful aspects of the environment. For instance, we may see that an animal, after being burned for the first time, will stay away from fire in the future. The same applies to positive experiences, as when an animal finds food at a certain location and later returns to that spot. However, this behavior alone doesn’t give us a reason to believe that these creatures can experience suffering and enjoyment specifically; it is, more generally, a reason to believe that they can have experiences at all and are therefore conscious. We should also note that it is perfectly possible that there are beings who are conscious but lack any capacity for learning.

These are examples of specific behaviors exhibited by many nonhuman animals. But these creatures behave in complex ways not only in situations where we may think that they are experiencing suffering or enjoyment. What is most relevant to ascertaining whether a being is sentient is not how it reacts in these specific cases, but how the being behaves in general. The behavior of an animal can lead us to understand that he is sentient even if he doesn’t exhibit signs of suffering or enjoyment. Here’s the reason.

The way animals manage to keep themselves alive (and, from an evolutionary perspective, to pass on their genetic material) is by behaving in certain ways. Thus, those beings that avoid what threatens their survival and seek what promotes it do actually survive. The key to this is behavior. Consciousness provides a wide range of possibilities for survival and for passing on genetic material to those organisms lucky enough to have awareness, because it determines whether they act in one way rather than another. The way this happens is through motivation. Positive and negative experiences motivate subjects to react favorably and unfavorably to that which elicits them. This type of reaction to positive and negative experiences could not have been programmed in creatures lacking the kind of motivation made possible by the capacity for conscious awareness.2

Thus, we find that the possession of consciousness is the most plausible explanation we can give when trying to determine why an animal acts in complex ways. There is a huge number of animals whose behavior is by no means simple. These animals encounter very diverse situations, where in order to survive they must respond appropriately. The plasticity that this requires is difficult to explain without appealing to consciousness.

Evolutionary considerations

In discussing behavior we consider evolution, which explains why there are conscious beings in the first place. If such beings exist, it’s probably because consciousness increased their chances of survival, and thus of passing on their genes to the next generation of sentient beings.

There are two ways in which evolutionary considerations can lead to the conclusion that a being possesses or lacks the ability to have positive and negative experiences. The first refers to the kind of circumstances that may surround the life of an animal and to the animal’s capacity to act in certain ways. As indicated above, the capacity to feel arises in evolutionary history in connection with the capacity to act in one way or another.3

Now, we have seen that this motivation makes sense when the behavior of the creature can be very plastic, i.e., complex and adaptable to circumstances. When that which helps an animal pass on its genes is a very simple type of behavior, having the capacity for conscious experience is not really necessary. In these cases, consciousness would involve a wasteful use of energy, since it carries a considerable metabolic cost. In the case of humans, up to 20% of the energy consumed is spent on maintaining an active brain. A portion of this energy is used to perform functions not accompanied by subjective experience, but a very important portion is involved in the production and maintenance of consciousness. In animals with a lower brain-to-body mass ratio than humans, this portion is not as high, but is still quite high on the whole. If consciousness weren’t necessary to carry out behavior required for survival, it would be a hindrance, as it would needlessly consume energy that could be used for other useful functions.4 This would be the case for creatures that are unable to move, such as plants or fungi.

There is another way in which evolutionary considerations can help us determine whether a being is or isn’t sentient: kinship. Consider the case of species that are very closely related, as in the case of species that have diverged recently in the evolutionary tree. We have some reason to believe that if the members of one of two such species are conscious, then so are the members of the other. (Some examples of this can be seen in the section on what beings are sentient.5)


Physiology

The presence of a centralized nervous system

The criterion that should be the determining factor as to whether a being is sentient relies on evidence from physiology. It is the physical structure and its associated functioning that makes it possible for a creature to have conscious experiences. However, as of today we do not know the mechanisms by which this occurs. To be sentient, a being must possess a certain physical structure, but we only have a rough idea of the nature of this structure. This is explained in the section on the problem of consciousness.

The mere possession of a nervous system is not a sufficient condition for sentience if the nervous system is not centralized. Today we only know that a centralized nervous system is necessary for sentience. However, the complexity of a centralized nervous system can vary quite considerably. The simplest centralized nervous systems consist solely of nerve ganglia, which are formed by clusters of interconnected nerve cells. They can vary in complexity, ranging from very simple structures to fully-formed brains. And fully-formed brains, too, can vary significantly in their degree of internal organization. A very simple brain may be only slightly more developed than a complex nerve ganglion.

Moreover, there can also be considerable variation in the degree of centralization. Octopodes, for example, are mollusks that have a centralized nervous system much more complex than that of many vertebrates. The organization of the nervous systems of octopodes and vertebrates is very different, due to differences in their respective evolutionary histories. Still, the complexity of the behavior exhibited by octopodes leads to the conclusion that they are conscious beings. For this reason, we know that sentience doesn’t require a brain configuration like ours, like that of mammals, or even like that of vertebrates.6 In fact, this suggests that the mode of organization of a nervous system necessary for positive and negative experience may be quite simple. Such a mode of organization would be realized in an ancient structure that evolved prior to the emergence of the structural complexity observed in the nervous system of an octopus or a mammal. This leads to the conclusion that the animals capable of having conscious experiences are very numerous indeed.

Physiological criteria other than nerve structure

The nerve structure is an essential criterion for deciding whether a being is conscious, but there are other, additional criteria. On the basis of these alone we wouldn’t be in a position to conclude that a being without a centralized nervous system is conscious; but they provide additional evidence for consciousness in the case of beings who do possess a centralized nervous system.

One of these criteria refers to a number of chemicals that, at least in many cases, act as analgesics. A number of animals that we can assume are conscious (among them ourselves) produce several substances that serve to alleviate suffering in situations where suffering is not useful (for instance, when we must flee from something that threatens us). However, a large number of invertebrates with very simple centralized nervous systems also secrete these substances. Admittedly, the function of these substances could be different in these organisms, but in principle it is natural to think that they could play the same role, on the basis of evolutionary considerations.7

Another criterion is the possession of receptors called nociceptors. The function of these receptors is to transmit information about tissue damage to the brain.8 Nociception is the detection of noxious or potentially noxious sensory stimuli. It occurs when the tissues of an organism are affected in ways that cause or may cause damage. This damage is detected in the tissues and the information is transmitted along the nervous system. This is the mechanism that allows us to experience pain and other physical sensations (such as heat or cold).

Thus, one might think that the study of sentience could be reduced to the study of nociception. This would be wrong, however. The reason is that the information that is received and transmitted through the mechanism of nociception is not as such a sensation of pain. In order for pain to actually be experienced, that information has to be received by a brain that is organized in such a fashion as to make it not only capable of processing the information, but of processing it in a way that results in the experience it encodes. And what is unknown as of today is how brains need to be organized in order to give rise to this experience.

However, although the transmission of information through nociception is not equivalent to the experiencing of suffering, in animals like ourselves it is a precondition for it. Moreover, nociception has no additional function. In light of this, when considering a creature that has a centralized nervous system with a structure that makes nociception possible, we can safely assume that the creature has the capacity for suffering and enjoyment, and is therefore conscious.

However, although we can say this, the issue of which beings are sentient is still unsolved, because there may be creatures that are capable of having experiences yet lack nociceptors. This would be possible in the case of animals with very simple pain-transmitting mechanisms.

Further readings

Allen, C. (1992) “Mental content and evolutionary explanation”, Biology and Philosophy, 7, pp. 1-12.

Allen, C. & Bekoff, M. (1997) Species of mind, Cambridge: MIT Press.

Baars, B. J. (2001) “There are no known differences in brain mechanisms of consciousness between humans and other mammals”, Animal Welfare, 10, suppl. 1, pp. 31-40.

Beshkar, M. (2008) “The presence of consciousness in the absence of the cerebral cortex”, Synapse, 62, pp. 553-556.

Chandroo, K. P.; Yue, S. & Moccia, R. D. (2004) “An evaluation of current perspectives on consciousness and pain in fishes”, Fish and Fisheries, 5, pp. 281-295.

Darwin, C. (1896 [1871]) The descent of man and selection in relation to sex, New York: D. Appleton and Co. [accessed on 12 January 2014].

Dawkins, M. S. (1993) Through our eyes only? The search for animal consciousness, New York: W. H. Freeman.

Dawkins, M. S. (2001) “Who needs consciousness?”, Animal Welfare, 10, suppl. 1, pp. 19-29.

DeGrazia, D. (1996) Taking animals seriously: Mental life & moral status, Cambridge: Cambridge University Press.

Dretske, F. I. (1999) “Machines, plants and animals: The origins of agency”, Erkenntnis, 51, pp. 19-31.

Edelman, D. B. & Seth, A. K. (2009) “Animal consciousness: A synthetic approach”, Trends in Neurosciences, 32, pp. 476-484.

Farah, M. J. (2008) “Neuroethics and the problem of other minds: Implications of neuroscience for the moral status of brain-damaged patients and nonhuman animals”, Neuroethics, 1, pp. 9-18.

Griffin, D. R. & Speck, G. B. (2004) “New evidence of animal consciousness”, Animal Cognition, 7, pp. 5-18.

Jamieson, D. (1998) “Science, knowledge, and animal minds”, Proceedings of the Aristotelian Society, 98, pp. 79-102.

Panksepp, J. (2004) Affective neuroscience: The foundations of human and animal emotions, New York: Oxford University Press.

Radner, D. & Radner, M. (1989) Animal consciousness, Buffalo: Prometheus.

Robinson, W. S. (1997) “Some nonhuman animals can have pains in a morally relevant sense”, Biology and Philosophy, 12, pp. 51-71.

Sneddon, L. U. (2009) “Pain perception in fish: Indicators and endpoints”, ILAR Journal, 50, pp. 338-342.

Notes

1 Rollin, B. E. (1989) The unheeded cry: Animal consciousness, animal pain and science, Oxford: Oxford University Press.

2 Gherardi, F. (2009) “Behavioural indicators of pain in crustacean decapods”, Annali dell’Istituto Superiore di Sanità, 45, pp. 432-438.

3 Damasio, A. R. (1999) The feeling of what happens: Body and emotion in the making of consciousness, San Diego: Harcourt.

4 Ng, Y.-K. (1995) “Towards welfare biology: Evolutionary economics of animal consciousness and suffering”, Biology and Philosophy, 10, pp. 255-285.

5 Griffin, D. R. (1981) The question of animal awareness: Evolutionary continuity of mental experience, New York: Rockefeller University Press. Cabanac, M.; Cabanac, A. J. & Parent, A. (2009) “The emergence of consciousness in phylogeny”, Behavioural Brain Research, 198, pp. 267-272. Grinde, B. (2013) “The evolutionary rationale for consciousness”, Biological Theory, 7, pp. 227-236.

6 Smith, J. A. (1991) “A question of pain in invertebrates”, ILAR Journal, 33, pp. 25-31 [accessed on 24 December 2013]. Mather, J. A. (2001) “Animal suffering: An invertebrate perspective”, Journal of Applied Animal Welfare Science, 4, pp. 151-156. Mather, J. A. & Anderson, R. C. (2007) “Ethics and invertebrates: A cephalopod perspective”, Diseases of Aquatic Organisms, 75, pp. 119-129 [accessed on 9 April 2017].

7 Kavaliers, M.; Hirst, M. & Teskey, G. C. (1983) “A functional role for an opiate system in snail thermal behaviour”, Science, 220, pp. 99-101.

8 Sneddon, L. U. (2004) “Evolution of nociception in vertebrates: Comparative analysis of lower vertebrates”, Brain Research Reviews, 46, pp. 123-130.


What beings are not conscious

Beings that have no centralized nervous systems are not sentient. This includes bacteria, archaea, protists, fungi, plants and certain animals. There is the possibility that a number of animals with very simple centralized nervous systems are not sentient either, but this is an open question and cannot be settled yet.

The reasons that lead to this conclusion are as follows:

Only among animals can we find the physical structures that enable sentience

Possession of a centralized nervous system is what enables animals to have experiences, and only animals possess such systems. No other living entity has a nervous system. Looking at the anatomy of a fungus, bacterium or plant, for example, we will not find any nerves.

It could be that beings other than animals possess different physical structures that fulfill the same function as a centralized nervous system. Thus, a system organized in an equally complex fashion could result in a sentient organism. This is, in principle, entirely possible. However, among all organisms in our biosphere, none of the non-animals such as plants, fungi, protists, bacteria and archaea has such a structure. None of them has a mechanism for transmission of information similar to that present in animals with centralized nervous systems.

Evolutionary logic and living beings that aren’t animals

Structures that allow for the development of consciousness appear very early in the development of animals, yet never appear in living things that are not animals. Living entities that are not animals have a very simple structure. They do not have a nerve structure or any physical structure complex enough to allow for the possession of consciousness. Moreover, the possession of such a structure would make no evolutionary sense.

As shown in What beings are conscious, the capacity to feel arises in evolutionary history due to its usefulness in motivating animals, through positive and negative stimuli, to engage in or abstain from fitness-increasing behavior. Therefore, it would make no sense for beings who lack the capacity to engage in such behavior to have the capacity to feel. For example, plants can’t run away from a threat or forage for a type of food they enjoy. These stimuli would serve no purpose, and would involve an unnecessary expenditure of energy.

Plants do not have experiences: the response to external stimuli is not sentience

One idea that has no scientific backing but which has received some support is the view that plants have experiences because they respond to certain stimuli. However, exhibiting a physical response of this type does not require the capacity for subjective experience.

It is also sometimes claimed that certain plants grow better if there is music in the environment, or if people talk to them. It may be that certain sound waves somehow benefit plant growth, and that these waves overlap with those that humans find pleasant. But this by no means implies that plants are organisms with physical structures that cause mental experiences, a center of consciousness that allows them to experience and appreciate music and improve their growth on that basis (we may note that taste in music is very culturally specific, which further shows the absurdity of the pseudo-scientific assertion that “plants like music”). In any case, any other alleged evidence of this sort cannot be considered a sign of possession of consciousness by the plant as long as it is based only on behavioral observations. Arguments for possession of consciousness must be backed by physiological evidence, with a specific physical structure identified and reasons given for why such a structure might give rise to conscious experience.

The ways an organism lacking a centralized nervous system may respond to stimuli can vary greatly. Still, however complex such responses are, in the absence of a centralized nervous system or of a physical structure that can fulfill a similar function, they cannot be explained by consciousness. We should explain them by assuming some alternative physical mechanism. Although non-conscious physical responses fail to attain a level of complexity comparable to that of creatures whose consciousness allows them a wide range of behaviors, non-conscious responses can have a relatively high level of complexity.

This can also be seen in a number of machines that humans have manufactured. For example, a bulb connected to a photoelectric cell can be switched on and off depending on the amount of light in the environment, without this being accompanied by any type of experience.

Non-sentient animals

The fact that only animals are sentient does not mean that all animals are sentient. As explained in the page on criteria for sentience, in order to have experiences it is necessary to have a centralized nervous system. And some animals lack such a system. This implies that there are animals who cannot be sentient. We would include here, first, those beings that do not have a nervous system at all, such as Porifera (the phylum that includes sponges), and, second, those who have a nervous system which is not centralized, such as echinoderms and cnidarians. Non-sentient animals would then include sponges, corals, anemones, and hydras.

Again, as in the case of plants, these animals may react to external stimuli, and even engage in locomotion. For example, sponges, though not having a nervous system, have a physical mechanism that allows them to perform certain movements (by circulating water through the cells of which they are composed). Echinoderms (such as starfish, sea urchins and sea cucumbers) can have relatively complex behavior (as can, for example, a carnivorous plant). But, as in the case of plants, there is nothing in their physiology to allow possession of sentience.

Depending on what kind of organization a centralized nervous system needs in order to allow experience, it is possible that some animals with centralized but very simple nervous systems are not sentient. This could happen if consciousness requires a certain degree of nervous complexity, which may well be the case. However, since at present we lack the relevant knowledge, the question must remain open. What we do know based on our present knowledge is that all sentient beings are animals, but not all animals are sentient.

It is important to note, though, that there are many other animals who do have simple yet centralized nervous systems. These include many invertebrates, such as mollusks like cephalopods and arthropods like crustaceans or insects. Our degree of certainty about whether they are sentient can vary (we can be quite confident that cephalopods are, but uncertain in the case of bivalves). But the case of these animals is wholly different from that of animals without any nervous system structured to allow information processing.

Further readings

Broom, D. M. (2007) “Cognitive ability and sentience: Which aquatic animals should be protected?”, Diseases of Aquatic Organisms, 75, pp. 99-108.

Dawkins, M. S. (2001) “Who needs consciousness?”, Animal Welfare, 10, pp. 19-29.

Edelman, D. B. & Seth, A. K. (2009) “Animal consciousness: A synthetic approach”, Trends in Neurosciences, 32, pp. 476-484.

Griffin, D. R. (1981) The question of animal awareness: Evolutionary continuity of mental experience, New York: Rockefeller University Press.

Grinde, B. (2013) “The evolutionary rationale for consciousness”, Biological Theory, 7, pp. 227-236.

Lurz, R. W. (ed.) (2009) The philosophy of animal minds, Cambridge: Cambridge University Press.


Mather, J. A. (2001) “Animal suffering: An invertebrate perspective”, Journal of Applied Animal Welfare Science, 4, pp. 151-156.

McGinn, C. (2004) Consciousness and its objects, Oxford: Oxford University Press.

Nelkin, N. (1996) Consciousness and the origins of thought, Cambridge: Cambridge University Press.

O’Shaughnessy, B. (2000) Consciousness and the world, Oxford: Oxford University Press.

Rosenthal, D. M. (2008) “Consciousness and its function”, Neuropsychologia, 46, pp. 829-840.

Smith, J. A. (1991) “A question of pain in invertebrates”, ILAR Journal, 33, pp. 25-31 [accessed on 27 September 2013].


What beings are conscious?

Given the criteria we have for considering whether a being is conscious, it is reasonable to conclude that vertebrates and a large number of invertebrates are conscious. The clearer cases are those of animals who have a centralized nervous system whose central organ (basically, a brain) has some development. However, there are a number of animals who possess centralized nervous systems whose central organ is only minimally developed. In these cases doubts may arise about whether they are conscious or not. The reason for this is that if, in order to be conscious, it is necessary that a nervous system be organized in a certain way, then the evolutionary path leading there will necessarily pass first, in its previous stages, through the existence of a nervous system without any centralization, and afterwards through a nervous system that starts to be centralized, but not enough to host consciousness. First, the nervous system becomes minimally centralized, with some very simple nervous ganglia; then, with more complex ganglia. Nervous systems become more complex until at some point the phenomenon of consciousness appears. Along the evolutionary path there may be stages where there are some minimally centralized nervous systems that don’t give rise to consciousness.

We don’t know with full certainty if there are currently animals with minimally centralized nervous systems that don’t give rise to consciousness. It may be that all the centralized nervous systems that exist currently are centralized enough to host consciousness. This would be the case if all those that were in the intermediate stage, that is, having minimally centralized nervous systems that don’t give rise to consciousness, were already extinct. We have no answer to this question at this point.

Vertebrates and many invertebrates are conscious

Among those animals that are conscious we can count, with a high degree of certainty, vertebrates (including human beings) and invertebrates such as cephalopods (such as octopuses and squids), since they satisfy the criteria for sentience. In addition, we also have strong reasons to think that other animals such as arthropods (insects, arachnids and crustaceans) are conscious too. The physiology of these animals is organized in ways that seem to be sufficient for giving rise to consciousness, and their behavior also seems to support this idea.1


As for other animals, such as bivalve mollusks, we don’t have reasons as strong as those we have in the previous cases.2 However, given the problems involved in determining the basis of consciousness, we cannot rule out completely the possibility that they are sentient, unlike in the case of those with a nervous system that is not centralized.

The following are some examples of animals who would fall, respectively, in these two groups.

Insects and other arthropods

It is often a controversial issue whether animals such as insects, arachnids and other arthropods are sentient.3

In the case of insects we can consider the following line of reasoning, which is actually an argument by homology. Insects possess a centralized nervous system which is centralized not merely due to the presence of ganglia, but actually includes a brain. It must be noted, though, that it is a very simple and small brain. Therefore, considering insects’ physiology alone is not enough to conclude whether they are conscious or not. Apart from this, the behavior of some insects is very simple. Others, however, have very complex behavior. A clear example of this is bees. Their behavior, including their famous waggle dance, leads us to think that they really are beings with experiences, that is, they are conscious.4 There are other insects that have a very similar physiological structure to that of bees but that exhibit only much simpler behaviors, such as mosquitoes. Because of the similarity of their nervous systems, we might believe that if bees are conscious, then these simpler insects are conscious, too. We must bear in mind, though, that this does not follow automatically. We must not lose sight of the fact that insects are the most numerous class of animals currently existing. Due to this, there are certain differences among them that are much more significant than those that can occur among mammals, for instance.

Because of this greater variation among insects, a different response may be to claim that bees (or, in general, hymenopterans, the insect order to which bees belong and which includes wasps and ants) are conscious, while other insects are not. Or, maybe, that even if all insects are conscious, bees are able to have more vivid experiences. This seems more likely to be the case than that only some insects are sentient. Although the differences in the behaviors of insects are very significant, the differences between their physiologies are not so important as to lead us to conclude that only some of them are sentient.

Of course, a different line of reasoning is also possible. We might think that beings exhibiting only simple behaviors could not be sentient. From here we could posit that the structure of the nervous systems of these animals would not be complex enough for consciousness to appear (despite its centralization). Therefore, we would conclude that, since their nervous systems are similar to those of animals exhibiting only simple behaviors, animals such as bees would not really be conscious, since they would lack the necessary nervous structure. We would then claim that even behaviors as complex as those of bees could occur through mechanisms that would not imply the presence of consciousness. This explanation, however, seems less plausible than the previous one: that complex behaviors imply consciousness and that, since some insects show complex behaviors, the nervous systems of all insects are similar enough that all insects must be conscious, though possibly to varying degrees. A being may be conscious and display a relatively simple behavior. It seems more unlikely, though, that a non-conscious being would display a complex behavior.5

In the same vein, we could consider other criteria as well, such as the presence of what are called natural opiates among insects. This would reinforce the claim that these animals are sentient.

In the case of other arthropods, such as arachnids for instance, we cannot appeal to evolutionary logic to apply the conclusions we reach in the case of insects, given that they are not closely related. Despite this, we may follow an argument from analogy. Insects’ nervous structures are not significantly more complex than arachnids’. Plus, the behavior of arachnids is not very different from that of a number of insects. Therefore, it may make sense to infer that if insects are sentient, then arachnids are sentient too.

We can see that we are facing a question to which we cannot arrive at an immediate and clear answer. However, we can consider together all the different criteria we have for examining the question, and weigh all the evidence in order to make progress towards finding the most plausible answer. In fact, the reasoning process is similar to the one that is followed in the case of other animals (such as, for instance, vertebrates). It is only that here we may need to pay attention to more factors.

Bivalves and other beings that have centralized nervous systems with ganglia

The problem becomes more complex if we consider other beings with a simpler structure, which do not actually have a brain, as insects do, but only some central nervous ganglia. This happens in the case of many invertebrates, such as, for instance, bivalve mollusks (including mussels and oysters, among others) and gastropods (including snails).6 The appeal to evolutionary logic in these cases is not useful, since the behavior these animals display is very simple. It could be performed without requiring that the animals that display it be conscious. This happens in particular in the case of animals that stay attached to rocks or other surfaces without moving, as with bivalves or certain crustaceans such as barnacles.

Bivalves can perform some movements, such as opening and closing their shells. But these movements may be triggered, in a more energy-efficient way, by some stimulus-response mechanism (in fact, their behavior is not more complex than that of other beings without a centralized nervous system, such as carnivorous plants or certain echinoderms). At any rate, their physiology leaves the question open.7 It might be that they have experiences. It is not possible to rule out that possibility given our lack of knowledge regarding what the basis of consciousness is.

There are other indicators that are not conclusive, although they may help us to appraise the question. Bivalves possess mechanisms that are analogous to the opiate receptors possessed by other animals.8 In other animals, the function of these receptors is to make it possible for suffering to be relieved when they are in significant pain. Due to this, a very plausible explanation of why bivalves have them, maybe the most plausible one, is that they can suffer too. But this is not totally conclusive. It is also possible that the organisms of these animals use these substances for a purpose that is different from the one they serve in other animals.


Apart from these, there are other reasons that support the idea that bivalves and other animals with very simple centralized nervous systems can suffer. One of them is that some bivalves have simple eyes, and the most plausible explanation is that a being with eyes also has the experience of vision (as in the case of snails, who have eyes too).9 In addition, it has been discovered that the heart rate of bivalves speeds up in situations in which they are threatened by predators.10 These indicators, again, are not totally conclusive, but they show that it is not clear that these animals are not conscious. In the case of other animals that have nervous systems with some centralization we can say something similar.

Further readings

Allen, C. & Trestman, M. (2004) “Animal consciousness”, in Zalta, E. N. (ed.) The Stanford encyclopedia of philosophy [accessed on 18 February 2015].

Barr, S.; Laming, P. R.; Dick, J. T. A. & Elwood, R. W. (2008) “Nociception or pain in a decapod crustacean?”, Animal Behaviour, 75, pp. 745-751.

Broom, D. M. (2007) “Cognitive ability and sentience: Which aquatic animals should be protected?”, Diseases of Aquatic Organisms, 75, pp. 99-108.

Crook, R. J. (2013) “The welfare of invertebrate animals in research: Can science’s next generation improve their lot?”, Journal of Postdoctoral Research, 1 (2), pp. 9-18 [accessed on 22 February 2014].

Crook, R. J.; Hanlon, R. T. & Walters, E. T. (2013) “Squid have nociceptors that display widespread long-term sensitization and spontaneous activity after bodily injury”, The Journal of Neuroscience, 33, pp. 10021-10026.

Dawkins, M. S. (2001) “Who needs consciousness?”, Animal Welfare, 10, pp. 19-29.

Eisemann, C. H.; Jorgensen, W. K.; Merritt, D. J.; Rice, M. J.; Cribb, B. W.; Webb, P. D. & Zalucki, M. P. (1984) “Do insects feel pain? A biological view”, Experientia, 40, pp. 164-167.

Elwood, R. W. (2011) “Pain and suffering in invertebrates?”, ILAR Journal, 52, pp. 175-184.

Elwood, R. W. & Adams, L. (2015) “Electric shock causes physiological stress responses in shore crabs, consistent with prediction of pain”, Biology Letters, 11 (1) [accessed on 13 November 2015].

Elwood, R. W. & Appel, M. (2009) “Pain experience in hermit crabs?”, Animal Behaviour, 77, pp. 1243-1246.

Fiorito, G. (1986) “Is there ‘pain’ in invertebrates?”, Behavioural Processes, 12, pp. 383-388.


Gherardi, F. (2009) “Behavioural indicators of pain in crustacean decapods”, Annali dell’Istituto Superiore di Sanità, 45, pp. 432-438.

Gentle, M. J. (1992) “Pain in birds”, Animal Welfare, 1, pp. 235-247.

Griffin, D. R. (1984) Animal thinking, Cambridge: Harvard University Press.

Griffin, D. R. (2001) Animal minds: Beyond cognition to consciousness, Chicago: Chicago University Press.

Harvey-Clark, C. (2011) “IACUC challenges in invertebrate research”, ILAR Journal, 52, pp. 213-220 [accessed on 14 February 2013].

Horvath, K.; Angeletti, D.; Nascetti, G. & Carere, C. (2013) “Invertebrate welfare: An overlooked issue”, Annali dell’Istituto Superiore di Sanità, 49, pp. 9-17 [accessed on 3 October 2013].

Huffard, C. L. (2013) “Cephalopod neurobiology: An introduction for biologists working in other model systems”, Invertebrate Neuroscience, 13, pp. 11-18.

Kamenos, N. A.; Calosi, P. & Moore, P. G. (2006) “Substratum-mediated heart rate responses of an invertebrate to predation threat”, Animal Behaviour, 71, pp. 809-813.

Knutsson, S. (2015a) The moral importance of small animals, Master’s thesis in practical philosophy, Gothenburg: University of Gothenburg [accessed on 4 January 2016].

Knutsson, S. (2015b) “How good or bad is the life of an insect”, simonknutsson.com [accessed on 4 January 2016].

Leonard, G. H.; Bertness, M. D. & Yund, P. O. (1999) “Crab predation, waterborne cues, and inducible defenses in the blue mussel, Mytilus edulis”, Ecology, 80, pp. 1-14.

Magee, B. & Elwood, R. W. (2013) “Shock avoidance by discrimination learning in the shore crab (Carcinus maenas) is consistent with a key criterion for pain”, Journal of Experimental Biology, 216, pp. 353-358 [accessed on 25 December 2015].

Mather, J. A. (2001) “Animal suffering: An invertebrate perspective”, Journal of Applied Animal Welfare Science, 4, pp. 151-156.

Mather, J. A. (2008) “Cephalopod consciousness: Behavioral evidence”, Consciousness and Cognition, 17, pp. 37-48.

Mather, J. A. & Anderson, R. C. (2007) “Ethics and invertebrates: A cephalopod perspective”, Diseases of Aquatic Organisms, 75, pp. 119-129.


Sherwin, C. M. (2001) “Can invertebrates suffer? Or, how robust is argument-by-analogy?”, Animal Welfare, 10, pp. 103-118.

Tomasik, B. (2013) “Speculations on population dynamics of bug suffering”, Essays on Reducing Suffering [accessed on 18 March 2017].

Tomasik, B. (2015) “The importance of insect suffering”, Essays on Reducing Suffering [accessed on 23 March 2017].

Tomasik, B. (2016) “Brain sizes and cognitive abilities of micrometazoans”, Essays on Reducing Suffering [accessed on 18 December 2016].

Tye, M. (2017) Tense bees and shell-shocked crabs: Are animals conscious?, New York: Oxford University Press.

Volpato, G. L. (2009) “Challenges in assessing fish welfare”, ILAR Journal, 50, pp. 329-337 [accessed on 30 May 2013].

Walters, E. T. & Moroz, L. L. (2009) “Molluscan memory of injury: Evolutionary insights into chronic pain and neurological disorders”, Brain, Behavior and Evolution, 74, pp. 206-218 [accessed on 22 September 2013].

Wilson, C. D.; Arnott, G. & Elwood, R. W. (2012) “Freshwater pearl mussels show plasticity of responses to different predation risks but also show consistent individual differences in responsiveness”, Behavioural Processes, 89, pp. 299-303.

Zullo, L. & Hochner, B. (2011) “A new perspective on the organization of an invertebrate brain”, Communicative & Integrative Biology, 4, pp. 26-29.

Notes

1 Braithwaite, V. A. (2010) Do fish feel pain?, Oxford: Oxford University Press. Sherwin, C. M. (2001) “Can invertebrates suffer? Or how robust is argument-by-analogy?”, Animal Welfare, 10, pp. 103-118. Sneddon, L. U.; Braithwaite, V. A. & Gentle, M. J. (2003) “Do fishes have nociceptors? Evidence for the evolution of a vertebrate sensory system”, Proceedings of the Royal Society of London, Series B, 270, pp. 1115-1121. Elwood, R. W.; Barr, S. & Patterson, L. (2009) “Pain and stress in crustaceans?”, Applied Animal Behaviour Science, 118, pp. 128-136.

2 Crook, R. J. & Walters, E. T. (2011) “Nociceptive behavior and physiology of molluscs: Animal welfare implications”, ILAR Journal, 52, pp. 185-195 [accessed on 15 October 2013].


3 Wigglesworth, V. B. (1980) “Do insects feel pain?”, Antenna, 4, pp. 8-9. Allen-Hermanson, S. (2008) “Insects and the problem of simple minds: Are bees natural zombies?”, Journal of Philosophy, 105, pp. 389-415.

4 Balderrama, N.; Díaz, H.; Sequeda, A.; Núñez, A. & Maldonado, H. (1987) “Behavioral and pharmacological analysis of the stinging response in Africanized and Italian bees”, in Menzel, R. & Mercer, A. R. (eds.) Neurobiology and behavior of honeybees, Berlin: Springer-Verlag, p. 127. Núñez, J.; Almeida, L.; Balderrama, N. & Giurfa, M. (1997) “Alarm pheromone induces stress analgesia via an opioid system in the honeybee”, Physiology & Behavior, 63, p. 78.

5 This is a central question regarding how positive and negative experiences are distributed in nature. It is asked in a groundbreaking work in the examination of the suffering of animals in nature: Ng, Y.-K. (1995) “Towards welfare biology: Evolutionary economics of animal consciousness and suffering”, Biology and Philosophy, 10, pp. 255-285.

6 Bear in mind that other mollusks, such as cephalopods, have totally different nervous systems which are much more complex.

7 Crook, R. J. & Walters, E. T. (2011) “Nociceptive behavior and physiology of molluscs: Animal welfare implications”, op. cit.

8 Smith, J. A. (1991) “A question of pain in invertebrates”, ILAR Journal, 33, pp. 25-31 [accessed on 20 October 2013]. Sonetti, D.; Mola, L.; Casares, F.; Bianchi, E.; Guarna, M. & Stefano, G. B. (1999) “Endogenous morphine levels increase in molluscan neural and immune tissues after physical trauma”, Brain Research, 835, pp. 137-147. Cadet, P.; Zhu, W.; Mantione, K. J.; Baggerman, G. & Stefano, G. B. (2002) “Cold stress alters Mytilus edulis pedal ganglia expression of μ opiate receptor transcripts determined by real-time RT-PCR and morphine levels”, Molecular Brain Research, 99, pp. 26-33.

9 Morton, B. (2001) “The evolution of eyes in the Bivalvia”, in Gibson, R. N.; Barnes, M. & Atkinson, R. J. A. (eds.) Oceanography and marine biology: An annual review, vol. 39, London: Taylor & Francis, pp. 165-205. Morton, B. (2008) “The evolution of eyes in the Bivalvia: New insights”, American Malacological Bulletin, 26, pp. 35-45. Aberhan, M.; Nürnberg, S. & Kiessling, W. (2012) “Vision and the diversification of Phanerozoic marine invertebrates”, Paleobiology, 38, pp. 187-204. Malkowsky, Y. & Götze, M.-C. (2014) “Impact of habitat and life trait on character evolution of pallial eyes in Pectinidae (Mollusca: Bivalvia)”, Organisms Diversity & Evolution, 14, pp. 173-185. Morton, B. & Puljas, S. (2015) “The ectopic compound ommatidium-like pallial eyes of three species of Mediterranean (Adriatic Sea) Glycymeris (Bivalvia: Arcoida). Decreasing visual acuity with increasing depth?”, Acta Zoologica, 97, pp. 464-474.

10 Kamenos, N. A.; Calosi, P. & Moore, P. G. (2006) “Substratum-mediated heart rate responses of an invertebrate to predation threat”, Animal Behaviour, 71, pp. 809-813.


Consciousness

Explaining the nature of consciousness is one of the most important and perplexing areas of philosophy. Perhaps the most commonly used contemporary notion of a conscious mental state is captured by Thomas Nagel’s famous “what it is like” sense (Nagel 1974). When I am in a conscious mental state, there is something it is like for me to be in that state from the subjective or first-person point of view. But how are we to understand this? For instance, how is the conscious mental state related to the body? Can consciousness be explained in terms of brain activity? What makes a mental state a conscious mental state? The problem of consciousness is arguably the most central issue in current philosophy of mind and is also importantly related to major traditional topics in metaphysics, such as the possibility of immortality and the belief in free will. This article focuses on Western theories and conceptions of consciousness, especially as found in contemporary analytic philosophy of mind.

The two broad, traditional and competing theories of mind are dualism and materialism (or physicalism). While there are many versions of each, the former generally holds that the conscious mind or a conscious mental state is non-physical in some sense, whereas the latter holds that, to put it crudely, the mind is the brain, or is caused by neural activity. It is against this general backdrop that many answers to the above questions are formulated and developed. There are also many familiar objections to both materialism and dualism. For example, it is often said that materialism cannot truly explain just how or why some brain states are conscious, and that there is an important “explanatory gap” between mind and matter.
On the other hand, dualism faces the problem of explaining how a non-physical substance or mental state can causally interact with the physical body. Some philosophers attempt to explain consciousness directly in neurophysiological or physical terms, while others offer cognitive theories of consciousness whereby conscious mental states are reduced to some kind of representational relation between mental states and the world. There are a number of such representational theories of consciousness currently on the market, including higher-order theories which hold that what makes a mental state conscious is that the subject is aware of it in some sense. The relationship between consciousness and science is also central in much current theorizing on this topic: How does the brain “bind together” various sensory inputs to produce a unified subjective experience? What are the neural correlates of consciousness? What can be learned from abnormal psychology which might help us to understand normal consciousness? To what extent are animal minds different from human minds? Could an appropriately programmed machine be conscious?

1. Terminological Matters: Various Concepts of Consciousness

The concept of consciousness is notoriously ambiguous. It is important first to make several distinctions and to define related terms. The abstract noun “consciousness” is not often used in the contemporary literature, though it should be noted that it is originally derived from the Latin con- (with) and scire (to know). Thus, “consciousness” has etymological ties to one’s ability to know and perceive, and should not be confused with conscience, which has the much more specific moral connotation of knowing when one has done or is doing something wrong. Through consciousness, one can have knowledge of the external world or one’s own mental states. The primary contemporary interest lies more in the use of the expressions “x is conscious” or “x is conscious of y.” Under the former category, perhaps most important is the distinction between state and creature consciousness (Rosenthal 1993a). We sometimes speak of an individual mental state, such as a pain or perception, as conscious. On the other hand, we also often speak of organisms or creatures as conscious, such as when we say “human beings are conscious” or “dogs are conscious.” Creature consciousness is also simply meant to refer to the fact that an organism is awake, as opposed to sleeping or in a coma. However, some kind of state consciousness is often implied by creature consciousness, that is, the organism is having conscious mental states. Due to the lack of a direct object in the expression “x is conscious,” this is usually referred to as intransitive consciousness, in contrast to transitive consciousness where the locution “x is conscious of y” is used (Rosenthal 1993a, 1997). Most contemporary theories of consciousness are aimed at explaining state consciousness; that is, explaining what makes a mental state a conscious mental state. It might seem that “conscious” is synonymous with, say, “awareness” or “experience” or “attention.” However, it is crucial to recognize that this is not generally accepted today. For example, though perhaps somewhat atypical, one might hold that there are even unconscious experiences, depending of course on how the term “experience” is defined (Carruthers 2000). More common is the belief that we can be aware of external objects in some unconscious sense, for example, during cases of subliminal perception. The expression “conscious awareness” does not therefore seem to be redundant. Finally, it is not clear that consciousness ought to be restricted to attention. It seems plausible to suppose that one is conscious (in some sense) of objects in one’s peripheral visual field even though one is only attending to some narrow (focal) set of objects within that visual field.

Perhaps the most fundamental and commonly used notion of “conscious” is captured by Thomas Nagel’s famous “what it is like” sense (Nagel 1974). When I am in a conscious mental state, there is “something it is like” for me to be in that state from the subjective or first-person point of view. When I am, for example, smelling a rose or having a conscious visual experience, there is something it “seems” or “feels” like from my perspective. An organism, such as a bat, is conscious if it is able to experience the outer world through its (echo-locatory) senses. There is also something it is like to be a conscious creature whereas there is nothing it is like to be, for example, a table or tree. This is primarily the sense of “conscious state” that will be used throughout this entry. There is still, though, a cluster of expressions and terms related to Nagel’s sense, and some authors simply stipulate the way that they use such terms. For example, philosophers sometimes refer to conscious states as phenomenal or qualitative states. More technically, philosophers often view such states as having qualitative properties called “qualia” (pronounced like “kwal’ ee uh”; the singular is quale). There is significant disagreement over the nature, and even the existence, of qualia, but they are perhaps most frequently understood as the felt properties or qualities of conscious states. Ned Block (1995) makes an often cited distinction between phenomenal consciousness (or “phenomenality”) and access consciousness. The former is very much in line with the Nagelian notion described above. However, Block also defines the quite different notion of access consciousness in terms of a mental state’s relationship with other mental states; for example, a mental state’s “availability for use in reasoning and rationally guiding speech and action” (Block 1995: 227). 
This would, for example, count a visual perception as (access) conscious not because it has the “what it’s likeness” of phenomenal states, but rather because it carries visual information which is generally available for use by the organism, regardless of whether or not it has any qualitative properties. Access consciousness is therefore more of a functional notion; that is, concerned with what such states do. Although this concept of consciousness is certainly very important in cognitive science and philosophy of mind generally, not everyone agrees that access consciousness deserves to be called “consciousness” in any important sense. Block himself argues that neither sense of consciousness implies the other, while others urge that there is a more intimate connection between the two.

Finally, it is helpful to distinguish between consciousness and self-consciousness, which plausibly involves some kind of awareness or consciousness of one’s own mental states (instead of something out in the world). Self-consciousness arguably comes in degrees of sophistication ranging from minimal bodily self-awareness to the ability to reason and reflect on one’s own mental states, such as one’s beliefs and desires. Some important historical figures have even held that consciousness entails some form of self-consciousness (Kant 1781/1965, Sartre 1956), a view shared by some contemporary philosophers (Gennaro 1996a, Kriegel 2004).

2. Some History on the Topic

Interest in the nature of conscious experience has no doubt been around for as long as there have been reflective humans. It would be impossible here to survey the entire history, but a few highlights are in order. In the history of Western philosophy, which is the focus of this entry, important writings on human nature and the soul and mind go back to ancient philosophers, such as Plato. More sophisticated work on the nature of consciousness and perception can be found in the work of Plato’s most famous student Aristotle (see Caston 2002), and then throughout the later Medieval period. It is, however, with the work of René Descartes (1596-1650) and his successors in the early modern period of philosophy that consciousness and the relationship between the mind and body took center stage. As we shall see, Descartes argued that the mind is a non-physical substance distinct from the body. He also did not believe in the existence of unconscious mental states, a view certainly not widely held today. Descartes defined “thinking” very broadly to include virtually every kind of mental state and urged that consciousness is essential to thought. Our mental states are, according to Descartes, infallibly transparent to introspection. John Locke (1689/1975) held a similar position regarding the connection between mentality and consciousness, but was far less committed on the exact metaphysical nature of the mind. Perhaps the most important philosopher of the period explicitly to endorse the existence of unconscious mental states was G.W. Leibniz (1686/1991, 1720/1925). Although Leibniz also believed in the immaterial nature of mental substances (which he called “monads”), he recognized the existence of what he called “petites perceptions,” which are basically unconscious perceptions. He also importantly distinguished between perception and apperception, roughly the difference between outer-directed consciousness and self-consciousness (see Gennaro 1999 for some discussion). 
The most important detailed theory of mind in the early modern period was developed by Immanuel Kant. His main work Critique of Pure Reason (1781/1965) is as dense as it is important, and cannot easily be summarized in this context. Although he owes a great debt to his immediate predecessors, Kant is arguably the most important philosopher since Plato and Aristotle and is highly relevant today. Kant basically thought that an adequate account of phenomenal consciousness involved far more than any of his predecessors had considered. There are important mental structures which are “presupposed” in conscious experience, and Kant presented an elaborate theory as to what those structures are, which, in turn, had other important implications. He, like Leibniz, also saw the need to postulate the existence of unconscious mental states and mechanisms in order to provide an adequate theory of mind (Kitcher 1990 and Brook 1994 are two excellent books on Kant’s theory of mind). Over the past one hundred years or so, however, research on consciousness has taken off in many important directions. In psychology, with the notable exception of the virtual banishment of consciousness by behaviorist psychologists (e.g., Skinner 1953), there were also those deeply interested in consciousness and various introspective (or “first-person”) methods of investigating the mind. The writings of such figures as Wilhelm Wundt (1897), William James (1890) and Edward Titchener (1901) are good examples of this approach. Franz Brentano (1874/1973) also had a profound effect on some contemporary theories of consciousness. 
Similar introspectionist approaches were used by those in the so-called “phenomenological” tradition in philosophy, such as in the writings of Edmund Husserl (1913/1931, 1929/1960) and Martin Heidegger (1927/1962). The work of Sigmund Freud was very important, at minimum, in bringing about the near universal acceptance of the existence of unconscious mental states and processes. It must, however, be kept in mind that none of the above had very much scientific knowledge about the detailed workings of the brain. The relatively recent development of neurophysiology is, in part, also responsible for the unprecedented interdisciplinary research interest in consciousness, particularly since the 1980s. There are now several important journals devoted entirely to the study of consciousness: Consciousness and Cognition, Journal of Consciousness Studies, and Psyche. There are also major annual conferences sponsored by worldwide professional organizations, such as the Association for the Scientific Study of Consciousness, and an entire book series called “Advances in Consciousness Research” published by John Benjamins. (For a small sample of introductory texts and important anthologies, see Kim 1996, Gennaro 1996b, Block et al. 1997, Seager 1999, Chalmers 2002, Baars et al. 2003, Blackmore 2004, Campbell 2005, Velmans and Schneider 2007, Zelazo et al. 2007, Revonsuo 2010.)

3. The Metaphysics of Consciousness: Materialism vs. Dualism

Metaphysics is the branch of philosophy concerned with the ultimate nature of reality. There are two broad traditional and competing metaphysical views concerning the nature of the mind and conscious mental states: dualism and materialism. While there are many versions of each, the former generally holds that the conscious mind or a conscious mental state is non-physical in some sense. On the other hand, materialists hold that the mind is the brain, or, more accurately, that conscious mental activity is identical with neural activity. It is important to recognize that by non-physical, dualists do not merely mean “not visible to the naked eye.” Many physical things fit this description, such as the atoms which make up the air in a typical room. For something to be non-physical, it must literally be outside the realm of physics; that is, not in space at all and undetectable in principle by the instruments of physics. It is equally important to recognize that the category “physical” is broader than the category “material.” Materialists are called such because there is the tendency to view the brain, a material thing, as the most likely physical candidate to identify with the mind. However, something might be physical but not material in this sense, such as an electromagnetic or energy field. One might therefore instead be a “physicalist” in some broader sense and still not a dualist. Thus, to say that the mind is non-physical is to say something much stronger than that it is non-material. Dualists, then, tend to believe that conscious mental states or minds are radically different from anything in the physical world at all.

a. Dualism: General Support and Related Issues

There are a number of reasons why some version of dualism has been held throughout the centuries. For one thing, especially from the introspective or first-person perspective, our conscious mental states just do not seem like physical things or processes. That is, when we reflect on our conscious perceptions, pains, and desires, they do not seem to be physical in any sense. Consciousness seems to be a unique aspect of the world not to be understood in any physical way. Although materialists will urge that this completely ignores the more scientific third-person perspective on the nature of consciousness and mind, this idea continues to have force for many today. Indeed, it is arguably the crucial underlying intuition behind historically significant “conceivability arguments” against materialism and for dualism. Such arguments typically reason from the premise that one can conceive of one’s conscious states existing without one’s body or, conversely, that one can imagine one’s own physical duplicate without consciousness at all (see section 3b.iv). The metaphysical conclusion ultimately drawn is that consciousness cannot be identical with anything physical, partly because there is no essential conceptual connection between the mental and the physical. Arguments such as these go back to Descartes and continue to be used today in various ways (Kripke 1972, Chalmers 1996), but it is highly controversial as to whether they succeed in showing that materialism is false. Materialists have replied in various ways to such arguments and the relevant literature has grown dramatically in recent years.

Historically, there is also the clear link between dualism and a belief in immortality, and hence a more theistic perspective than one tends to find among materialists. Indeed, belief in dualism is often explicitly theologically motivated. If the conscious mind is not physical, it seems more plausible to believe in the possibility of life after bodily death. On the other hand, if conscious mental activity is identical with brain activity, then it would seem that when all brain activity ceases, so do all conscious experiences, and thus there is no immortality. After all, what do many people believe continues after bodily death? Presumably, one’s own conscious thoughts, memories, experiences, beliefs, and so on. There is perhaps a similar historical connection to a belief in free will, which is of course a major topic in its own right. For our purposes, it suffices to say that, on some definitions of what it is to act freely, such ability seems almost “supernatural” in the sense that one’s conscious decisions can alter the otherwise deterministic sequence of events in nature. To put it another way: If we are entirely physical beings as the materialist holds, then mustn’t all of the brain activity and behavior in question be determined by the laws of nature? Although materialism may not logically rule out immortality or free will, materialists will likely often reply that such traditional, perhaps even outdated or pre-scientific beliefs simply ought to be rejected to the extent that they conflict with materialism. After all, if the weight of the evidence points toward materialism and away from dualism, then so much the worse for those related views.

One might wonder “even if the mind is physical, what about the soul?” Maybe it’s the soul, not the mind, which is non-physical, as one might be told in many religious traditions. While it is true that the term “soul” (or “spirit”) is often used instead of “mind” in such religious contexts, the problem is that it is unclear just how the soul is supposed to differ from the mind. The terms are often even used interchangeably in many historical texts and by many philosophers because it is unclear what else the soul could be other than “the mental substance.” It is difficult to describe the soul in any way that doesn’t make it sound like what we mean by the mind. After all, that’s what many believe goes on after bodily death; namely, conscious mental activity. Granted, the term “soul” carries a more theological connotation, but it doesn’t follow that the words “soul” and “mind” refer to entirely different things. Somewhat related to the issue of immortality, the existence of near-death experiences is also used as some evidence for dualism and immortality. Such patients report peacefully moving toward a light through a tunnel-like structure, or seeing doctors working on their bodies while hovering over them in an emergency room (sometimes akin to what is called an “out of body experience”). In response, materialists will point out that such experiences can be artificially induced in various experimental situations, and that starving the brain of oxygen is known to cause hallucinations.

Various paranormal and psychic phenomena, such as clairvoyance, faith healing, and mind-reading, are sometimes also cited as evidence for dualism. However, materialists (and even many dualists) will first likely wish to be skeptical of the alleged phenomena themselves for numerous reasons. There are many modern day charlatans who should make us seriously question whether there really are such phenomena or mental abilities in the first place. Second, it is not quite clear just how dualism follows from such phenomena even if they are genuine. A materialist, or physicalist at least, might insist that though such phenomena are puzzling and perhaps currently difficult to explain in physical terms, they are nonetheless ultimately physical in nature; for example, having to do with very unusual transfers of energy in the physical world. The dualist advantage is perhaps not as obvious as one might think, and we need not jump to supernatural conclusions so quickly.

i. Substance Dualism and Objections

Interactionist Dualism or simply “interactionism” is the most common form of “substance dualism” and its name derives from the widely accepted fact that mental states and bodily states causally interact with each other. For example, my desire to drink something cold causes my body to move to the refrigerator and get something to drink and, conversely, kicking me in the shin will cause me to feel a pain and get angry. Due to Descartes’ influence, it is also sometimes referred to as “Cartesian dualism.” Knowing nothing about just where such causal interaction could take place, Descartes speculated that it was through the pineal gland, a now almost humorous conjecture. But a modern day interactionist would certainly wish to treat various areas of the brain as the location of such interactions.

Three serious objections are briefly worth noting here. The first is simply the question of just how such radically different substances could causally interact: how could something non-physical causally interact with something physical, such as the brain? No such explanation is forthcoming or is perhaps even possible, according to materialists. Moreover, if causation involves a transfer of energy from cause to effect, then how is that possible if the mind is really non-physical? Gilbert Ryle (1949) mockingly calls the Cartesian view about the nature of mind a belief in the “ghost in the machine.” Secondly, assuming that some such energy transfer makes any sense at all, it is also then often alleged that interactionism is inconsistent with the scientifically well-established Conservation of Energy principle, which says that the total amount of energy in the universe, or any controlled part of it, remains constant. So any loss of energy in the cause must be passed along as a corresponding gain of energy in the effect, as in standard billiard ball examples. But if interactionism is true, then when mental events cause physical events, energy would literally come into the physical world. On the other hand, when bodily events cause mental events, energy would literally go out of the physical world. At the least, there is a very peculiar and unique notion of energy involved, unless one wished, even more radically, to deny the conservation principle itself. Third, some materialists might also use the well-known fact that brain damage (even to very specific areas of the brain) causes mental defects as a serious objection to interactionism (and thus as support for materialism). This has of course been known for many centuries, but the level of detailed knowledge has increased dramatically in recent years. Now a dualist might reply that such phenomena do not absolutely refute her metaphysical position, since it could be that damage to the brain simply causes corresponding damage to the mind. However, this raises a host of other questions: Why not opt for the simpler explanation, i.e., that brain damage causes mental damage because mental processes simply are brain processes? If the non-physical mind is damaged when brain damage occurs, what becomes of one’s mind on the dualist’s conception of an afterlife? Will the severe amnesic at the end of life on Earth retain such a deficit in the afterlife? If proper mental functioning still depends on proper brain functioning, then is dualism really in any better position to offer hope for immortality?

It should be noted that there is also another less popular form of substance dualism called parallelism, which denies the causal interaction between the non-physical mental and physical bodily realms. It seems fair to say that it encounters even more serious objections than interactionism.

ii. Other Forms of Dualism

While a detailed survey of all varieties of dualism is beyond the scope of this entry, it is at least important to note here that the main and most popular form of dualism today is called property dualism. Substance dualism has largely fallen out of favor at least in most philosophical circles, though there are important exceptions (e.g., Swinburne 1986, Foster 1996) and it often continues to be tied to various theological positions. Property dualism, on the other hand, is a more modest version of dualism and it holds that there are mental properties (that is, characteristics or aspects of things) that are neither identical with nor reducible to physical properties. There are actually several different kinds of property dualism, but what they have in common is the idea that conscious properties, such as the color qualia involved in a conscious experience of a visual perception, cannot be explained in purely physical terms and, thus, are not themselves to be identified with any brain state or process. Two other views worth mentioning are epiphenomenalism and panpsychism. The latter is the somewhat eccentric view that all things in physical reality, even down to micro-particles, have some mental properties. All substances have a mental aspect, though it is not always clear exactly how to characterize or test such a claim. Epiphenomenalism holds that mental events are caused by brain events but those mental events are mere “epiphenomena” which do not, in turn, cause anything physical at all, despite appearances to the contrary (for a recent defense, see Robinson 2004).

Finally, although not a form of dualism, idealism holds that there are only immaterial mental substances, a view more common in the Eastern tradition. The most prominent Western proponent of idealism was the 18th-century empiricist George Berkeley. The idealist agrees with the substance dualist, however, that minds are non-physical, but then denies the existence of mind-independent physical substances altogether. Such a view faces a number of serious objections, and it also requires a belief in the existence of God.

b. Materialism: General Support

Some form of materialism is probably much more widely held today than in centuries past. No doubt part of the reason for this has to do with the explosion in scientific knowledge about the workings of the brain and its intimate connection with consciousness, including the close connection between brain damage and various states of consciousness. Brain death is now the main criterion for when someone dies. Stimulation to specific areas of the brain results in modality-specific conscious experiences. Indeed, materialism often seems to be a working assumption in neurophysiology. Imagine saying to a neuroscientist “you are not really studying the conscious mind itself” when she is examining the workings of the brain during an fMRI. The idea is that science is showing us that conscious mental states, such as visual perceptions, are simply identical with certain neuro-chemical brain processes; much like the science of chemistry taught us that water just is H2O.

There are also theoretical factors on the side of materialism, such as adherence to the so-called “principle of simplicity” which says that if two theories can equally explain a given phenomenon, then we should accept the one which posits fewer objects or forces. In this case, even if dualism could equally explain consciousness (which would of course be disputed by materialists), materialism is clearly the simpler theory in so far as it does not posit any objects or processes over and above physical ones. Materialists will wonder why there is a need to believe in the existence of such mysterious non-physical entities. Moreover, in the aftermath of the Darwinian revolution, it would seem that materialism is on even stronger ground provided that one accepts basic evolutionary theory and the notion that most animals are conscious. Given the similarities between the more primitive parts of the human brain and the brains of other animals, it seems most natural to conclude that, through evolution, increasing layers of brain areas correspond to increased mental abilities. For example, having a well-developed prefrontal cortex allows humans to reason and plan in ways not available to dogs and cats. It also seems fairly uncontroversial to hold that we should be materialists about the minds of animals. If so, then it would be odd indeed to hold that non-physical conscious states suddenly appear on the scene with humans.

There are still, however, a number of much discussed and important objections to materialism, most of which question the notion that materialism can adequately explain conscious experience.

i. Objection 1: The Explanatory Gap and The Hard Problem

Joseph Levine (1983) coined the expression “the explanatory gap” to express a difficulty for any materialistic attempt to explain consciousness. Although not concerned to reject the metaphysics of materialism, Levine gives eloquent expression to the idea that there is a key gap in our ability to explain the connection between phenomenal properties and brain properties (see also Levine 1993, 2001). The basic problem is that it is, at least at present, very difficult for us to understand the relationship between brain properties and phenomenal properties in any explanatorily satisfying way, especially given the fact that it seems possible for one to be present without the other. There is an odd kind of arbitrariness involved: Why or how does some particular brain process produce that particular taste or visual sensation? It is difficult to see any real explanatory connection between specific conscious states and brain states in a way that explains just how or why the former are identical with the latter. There is therefore an explanatory gap between the physical and mental. Levine argues that this difficulty in explaining consciousness is unique; that is, we do not have similar worries about other scientific identities, such as that “water is H2O” or that “heat is mean molecular kinetic energy.” There is “an important sense in which we can’t really understand how [materialism] could be true.” (2001: 68)

David Chalmers (1995) has articulated a similar worry by using the catchy phrase “the hard problem of consciousness,” which basically refers to the difficulty of explaining just how physical processes in the brain give rise to subjective conscious experiences. The “really hard problem is the problem of experience…How can we explain why there is something it is like to entertain a mental image, or to experience an emotion?” (1995: 201) Others have made similar points, as Chalmers acknowledges, but reference to the phrase “the hard problem” has now become commonplace in the literature. Unlike Levine, however, Chalmers is much more inclined to draw anti-materialist metaphysical conclusions from these and other considerations. Chalmers usefully distinguishes the hard problem of consciousness from what he calls the (relatively) “easy problems” of consciousness, such as the ability to discriminate and categorize stimuli, the ability of a cognitive system to access its own internal states, and the difference between wakefulness and sleep. The easy problems generally have more to do with the functions of
consciousness, but Chalmers urges that solving them does not touch the hard problem of phenomenal consciousness. Most philosophers, according to Chalmers, are really only addressing the easy problems, perhaps merely with something like Block’s “access consciousness” in mind. Their theories ignore phenomenal consciousness. There are many responses by materialists to the above charges, but it is worth emphasizing that Levine, at least, does not reject the metaphysics of materialism. Instead, he sees the “explanatory gap [as] primarily an epistemological problem” (2001: 10). That is, it is primarily a problem having to do with knowledge or understanding. This concession is still important at least to the extent that one is concerned with the larger related metaphysical issues discussed in section 3a, such as the possibility of immortality.

Perhaps most important for the materialist, however, is recognition of the fact that different concepts can pick out the same property or object in the world (Loar 1990, 1997). Out in the world there is only the one “stuff,” which we can conceptualize either as “water” or as “H2O.” The traditional distinction, made most notably by Gottlob Frege in the late 19th century, between “meaning” (or “sense”) and “reference” is also relevant here. Two or more concepts, which can have different meanings, can refer to the same property or object, much like “Venus” and “The Morning Star.” Materialists, then, explain that it is essential to distinguish between mental properties and our concepts of those properties. By analogy, there are so-called “phenomenal concepts” which use a phenomenal or “first-person” property to refer to some conscious mental state, such as a sensation of red (Alter and Walter 2007). In contrast, we can also use various concepts couched in physical or neurophysiological terms to refer to that same mental state from the third-person point of view. There is thus but one conscious mental state which can be conceptualized in two different ways: either by employing first-person experiential phenomenal concepts or by employing third-person neurophysiological concepts. It may then just be a “brute fact” about the world that there are such identities, and the appearance of arbitrariness between brain properties and mental properties is just that – an appearance, though one which leads many to wonder about the alleged explanatory gap. Qualia would then still be identical to physical properties. Moreover, this response provides a diagnosis for why there even seems to be such a gap; namely, that we use very different concepts to pick out the same property.
Science will be able, in principle, to close the gap and solve the hard problem of consciousness in a way analogous to how we now have a very good understanding of why “water is H2O” or “heat is mean molecular kinetic energy,” an understanding that was lacking centuries ago. Maybe the hard problem isn’t so hard after all – it will just take some more time. After all, the science of chemistry didn’t develop overnight, and we are relatively early in the history of neurophysiology and our understanding of phenomenal consciousness.

ii. Objection 2: The Knowledge Argument
There is a pair of very widely discussed, and arguably related, objections to materialism which come from the seminal writings of Thomas Nagel (1974) and Frank Jackson (1982, 1986). These arguments, especially Jackson’s, have come to be known as examples of the “knowledge argument” against materialism, due to their clear emphasis on the epistemological (that is, knowledge-related) limitations of materialism. Like Levine, Nagel does not reject the metaphysics of materialism. Jackson had originally intended for his argument to yield a dualistic conclusion, but he no longer holds that view. The general pattern of each argument is to assume that all the physical facts are known about some conscious mind or conscious experience. Yet, the argument goes, not all is known about the mind or experience. It is then inferred that the missing knowledge is non-physical in some sense, which is surely an anti-materialist conclusion. Nagel imagines a future where we know everything physical there is to know about some other conscious creature’s mind, such as a bat. However, it seems clear that we would still not know something crucial; namely, “what it is like to be a bat.” It will not do to imagine what it is like for us to be a bat. We would still not know what it is like to be a bat from the bat’s subjective or first-person point of view.
The idea, then, is that if we accept the hypothesis that we know all of the physical facts about bat minds, and yet some knowledge about bat minds is left out, then materialism is inherently flawed when it comes to explaining consciousness. Even in an ideal future in which everything physical is known by us, something would still be left out. Jackson’s somewhat similar, but no less influential, argument begins by asking us to imagine a
future where a person, Mary, is kept in a black and white room from birth, during which time she becomes a brilliant neuroscientist and an expert on color perception. Mary never sees red, for example, but she learns all of the physical facts and everything neurophysiological about human color vision. Eventually she is released from the room and sees red for the first time. Jackson argues that it is clear that Mary comes to learn something new; namely, to use Nagel’s famous phrase, what it is like to experience red. This is a new piece of knowledge and hence she must have come to know some non-physical fact (since, by hypothesis, she already knew all of the physical facts). Thus, not all knowledge about the conscious mind is physical knowledge.

The influence and the quantity of work that these ideas have generated cannot be exaggerated. Numerous materialist responses to Nagel’s argument have been presented (such as Van Gulick 1985), and there is now a very useful anthology devoted entirely to Jackson’s knowledge argument (Ludlow et al. 2004). Some materialists have wondered if we should concede up front that Mary would be able to imagine the color red even before leaving the room, so that maybe she wouldn’t even be surprised upon seeing red for the first time. Various suspicions about the nature and effectiveness of such thought experiments also usually accompany this response. More commonly, however, materialists reply by arguing that Mary does not learn a new fact when seeing red for the first time, but rather learns the same fact in a different way. Recalling the distinction made in section 3b.i between concepts and objects or properties, the materialist will urge that there is only the one physical fact about color vision, but there are two ways to come to know it: either by employing neurophysiological concepts or by actually undergoing the relevant experience and so by employing phenomenal concepts. We might say that Mary, upon leaving the black and white room, becomes acquainted with the same neural property as before, but only now from the first-person point of view. The property itself isn’t new; only the perspective, or what philosophers sometimes call the “mode of presentation,” is different. In short, coming to learn or know something new does not entail learning some new fact about the world. Analogies are again given in other less controversial areas; for example, one can come to know about some historical fact or event by reading a (reliable) third-person historical account or by having observed that event oneself. But there is still only the one objective fact under two different descriptions.
Finally, it is crucial to remember that, according to most, the metaphysics of materialism remains unaffected. Drawing a metaphysical conclusion from such purely epistemological premises is always a questionable practice. Nagel’s argument doesn’t show that bat mental states are not identical with bat brain states. Indeed, a materialist might even expect the conclusion that Nagel draws; after all, given that our brains are so different from bat brains, it almost seems natural for there to be certain aspects of bat experience that we could never fully comprehend. Only the bat actually undergoes the relevant brain processes. Similarly, Jackson’s argument doesn’t show that Mary’s color experience is distinct from her brain processes.

Despite the plethora of materialist responses, vigorous debate continues as there are those who still think that something profound must always be missing from any materialist attempt to explain consciousness; namely, that understanding subjective phenomenal consciousness is an inherently first-person activity which cannot be captured by any objective third-person scientific means, no matter how much scientific knowledge is accumulated. Some knowledge about consciousness is essentially limited to first-person knowledge. Such a sense, no doubt, continues to fuel the related anti-materialist intuitions raised in the previous section. Perhaps consciousness is simply a fundamental or irreducible part of nature in some sense (Chalmers 1996). (For more see Van Gulick 1993.)

iii. Objection 3: Mysterianism
Finally, some go so far as to argue that we are simply not capable of solving the problem of consciousness (McGinn 1989, 1991, 1995). In short, “mysterians” believe that the hard problem can never be solved because of human cognitive limitations; the explanatory gap can never be filled. Once again, however, McGinn does not reject the metaphysics of materialism, but rather argues that we are “cognitively closed” with respect to this problem much like a rat or dog is cognitively incapable of solving, or even understanding, calculus problems. More specifically, McGinn claims that we are cognitively closed as to how the brain produces conscious awareness. McGinn concedes that some brain property produces
conscious experience, but we cannot understand how this is so or even know what that brain property is. Our concept-forming mechanisms simply will not allow us to grasp the physical and causal basis of consciousness. We are not conceptually suited to be able to do so.

McGinn does not entirely rest his argument on past failed attempts at explaining consciousness in materialist terms; instead, he presents another argument for his admittedly pessimistic conclusion. McGinn observes that we do not have a mental faculty that can access both consciousness and the brain. We access consciousness through introspection or the first-person perspective, but our access to the brain is through the use of outer spatial senses (e.g., vision) or a more third-person perspective. Thus we have no way to access both the brain and consciousness together, and therefore any explanatory link between them is forever beyond our reach.

Materialist responses are numerous. First, one might wonder why we can’t combine the two perspectives within certain experimental contexts. Both first-person and third-person scientific data about the brain and consciousness can be acquired and used to solve the hard problem. Even if a single person cannot grasp consciousness from both perspectives at the same time, why can’t a plausible physicalist theory emerge from such a combined approach? Presumably, McGinn would say that we are not capable of putting such a theory together in any appropriate way. Second, despite McGinn’s protests to the contrary, many will view the problem of explaining consciousness as a merely temporary limit of our theorizing, and not something which is unsolvable in principle (Dennett 1991). Third, it may be that McGinn expects too much; namely, grasping some causal link between the brain and consciousness. After all, if conscious mental states are simply identical to brain states, then there may simply be a “brute fact” that really does not need any further explaining. Indeed, this is sometimes also said in response to the explanatory gap and the hard problem, as we saw earlier. It may even be that some form of dualism is presupposed in McGinn’s argument, to the extent that brain states are said to “cause” or “give rise to” consciousness, instead of using the language of identity. Fourth, McGinn’s analogy to lower animals and mathematics is not quite accurate. Rats, for example, have no concept whatsoever of calculus. It is not as if they can grasp it to some extent but just haven’t figured out the answer to some particular problem within mathematics. Rats are just completely oblivious to calculus problems. On the other hand, we humans obviously do have some grasp on consciousness and on the workings of the brain -- just see the references at the end of this entry! 
It is not clear, then, why we should accept the extremely pessimistic and universally negative conclusion that we can never discover the answer to the problem of consciousness, or, more specifically, why we could never understand the link between consciousness and the brain.

iv. Objection 4: Zombies
Unlike many of the above objections to materialism, the appeal to the possibility of zombies is often taken both as a problem for materialism and as a more positive argument for some form of dualism, such as property dualism. The philosophical notion of a “zombie” basically refers to conceivable creatures which are physically indistinguishable from us but lack consciousness entirely (Chalmers 1996). It certainly seems logically possible for there to be such creatures: “the conceivability of zombies seems…obvious to me…While this possibility is probably empirically impossible, it certainly seems that a coherent situation is described; I can discern no contradiction in the description” (Chalmers 1996: 96). Philosophers often contrast what is logically possible (in the sense of “that which is not self-contradictory”) with what is empirically possible given the actual laws of nature. Thus, it is logically possible for me to jump fifty feet in the air, but not empirically possible. Philosophers often use the notion of “possible worlds,” i.e., different ways that the world might have been, in describing such non-actual situations or possibilities. The objection, then, typically proceeds from such a possibility to the conclusion that materialism is false because materialism would seem to rule out that possibility. It has been fairly widely accepted (since Kripke 1972) that all identity statements are necessarily true (that is, true in all possible worlds), and the same should therefore go for mind-brain identity claims. Since the possibility of zombies shows that mind-brain identity claims are not necessarily true, we should conclude that materialism is false. (See Identity Theory.)

It is impossible to do justice to all of the subtleties here. The literature in response to zombie, and related “conceivability,” arguments is enormous (see, for example, Hill 1997, Hill and McLaughlin 1999, Papineau 1998, 2002, Balog 1999, Block and Stalnaker 1999, Loar 1999, Yablo 1999, Perry 2001, Botterell 2001, Kirk 2005). A few lines of reply are as follows: First, it is sometimes objected that the conceivability of something does not really entail its possibility. Perhaps we can also conceive of water not being H2O, since there seems to be no logical contradiction in doing so, but, according to received wisdom from Kripke, that is really impossible. Perhaps, then, some things just seem possible but really aren’t. Much of the debate centers on various alleged similarities or dissimilarities between the mind-brain and water-H2O cases (or other such scientific identities). Indeed, the entire issue of the exact relationship between “conceivability” and “possibility” is the subject of an important recently published anthology (Gendler and Hawthorne 2002). Second, even if zombies are conceivable in the sense of logically possible, how can we draw a substantial metaphysical conclusion about the actual world? There is often suspicion on the part of materialists about what, if anything, such philosophers’ “thought experiments” can teach us about the nature of our minds. It seems that one could take virtually any philosophical or scientific theory about almost anything, conceive that it is possibly false, and then conclude that it is actually false. Something, perhaps, is generally wrong with this way of reasoning. Third, as we saw earlier (3b.i), there may be a very good reason why such zombie scenarios seem possible; namely, that we do not (at least, not yet) see what the necessary connection is between neural events and conscious mental events. 
On the one side, we are dealing with scientific third-person concepts and, on the other, we are employing phenomenal concepts. We are, perhaps, simply currently not in a position to understand completely such a necessary connection.

Debate and discussion on all four objections remains very active.

v. Varieties of Materialism
Despite the apparent simplicity of materialism, say, in terms of the identity between mental states and neural states, the fact is that there are many different forms of materialism. While a detailed survey of all varieties is beyond the scope of this entry, it is at least important to acknowledge the commonly drawn distinction between two kinds of “identity theory”: token-token and type-type materialism. Type-type identity theory is the stronger thesis and says that mental properties, such as “having a desire to drink some water” or “being in pain,” are literally identical with a brain property of some kind. Such identities were originally meant to be understood as on a par with, for example, the scientific identity between “being water” and “being composed of H2O” (Place 1956, Smart 1959). However, this view historically came under serious assault due to the fact that it seems to rule out the so-called “multiple realizability” of conscious mental states. The idea is simply that it seems perfectly possible for there to be other conscious beings (e.g., aliens, radically different animals) who can have those same mental states but who are also radically different from us physiologically (Fodor 1974). It seems that commitment to type-type identity theory led to the undesirable result that only organisms with brains like ours can have conscious states. Somewhat more technically, most materialists wish to leave room for the possibility that mental properties can be “instantiated” in different kinds of organisms. (But for more recent defenses of type-type identity theory see Hill and McLaughlin 1999, Papineau 1994, 1995, 1998, Polger 2004.) As a consequence, a more modest “token-token” identity theory has come to be preferred by many materialists. This view simply holds that each particular conscious mental event in some organism is identical with some particular brain process or event in that organism.
This seems to preserve much of what the materialist wants but yet allows for the multiple realizability of conscious states, because both the human and the alien can still have a conscious desire for something to drink while each mental event is identical with a (different) physical state in each organism.

Taking the notion of multiple realizability very seriously has also led many to embrace functionalism, which is the view that conscious mental states should really only be identified with the functional role they play within an organism. For example, conscious pains are defined more in terms of input and output, such as causing bodily damage and avoidance behavior, as well as in terms of their relationship to other mental states. It is normally viewed as a form of materialism since virtually all functionalists also believe, like the
token-token theorist, that something physical ultimately realizes that functional state in the organism, but functionalism does not, by itself, entail that materialism is true. Critics of functionalism, however, have long argued that such purely functional accounts cannot adequately explain the essential “feel” of conscious states, or that it seems possible to have two functionally equivalent creatures, one of whom lacks qualia entirely (Block 1980a, 1980b, Chalmers 1996; see also Shoemaker 1975, 1981).

Some materialists even deny the very existence of mind and mental states altogether, at least in the sense that the very concept of consciousness is muddled (Wilkes 1984, 1988) or that the mentalistic notions found in folk psychology, such as desires and beliefs, will eventually be eliminated and replaced by physicalistic terms as neurophysiology matures into the future (Churchland 1983). This is meant to be analogous to past eliminations based on deeper scientific understanding: for example, we no longer need to speak of “ether” or “phlogiston.” Other eliminativists, more modestly, argue that there is no such thing as qualia when they are defined in certain problematic ways (Dennett 1988).

Finally, it should also be noted that not all materialists believe that conscious mentality can be explained in terms of the physical, at least in the sense that the former cannot be “reduced” to the latter. Materialism is true as an ontological or metaphysical doctrine, but facts about the mind cannot be deduced from facts about the physical world (Boyd 1980, Van Gulick 1992). In some ways, this might be viewed as a relatively harmless variation on materialist themes, but others object to the very coherence of this form of materialism (Kim 1987, 1998). Indeed, the line between such “non-reductive materialism” and property dualism is not always so easy to draw; partly because the entire notion of “reduction” is ambiguous and a very complex topic in its own right. On a related front, some materialists are happy enough to talk about a somewhat weaker “supervenience” relation between mind and matter. Although “supervenience” is a highly technical notion with many variations, the idea is basically one of dependence (instead of identity); for example, that the mental depends on the physical in the sense that any mental change must be accompanied by some physical change (see Kim 1993).

4. Specific Theories of Consciousness

Most specific theories of consciousness tend to be reductionist in some sense. The classic notion at work is that consciousness or individual conscious mental states can be explained in terms of something else or in some other terms. This section will focus on several prominent contemporary reductionist theories. We should, however, distinguish between those who attempt such a reduction directly in physicalistic, such as neurophysiological, terms and those who do so in mentalistic terms, such as by using unconscious mental states or other cognitive notions.

a. Neural Theories

The more direct reductionist approach can be seen in various, more specific, neural theories of consciousness. Perhaps best known is the theory offered by Francis Crick and Christof Koch (1990; see also Crick 1994, Koch 2004). The basic idea is that mental states become conscious when large numbers of neurons fire in synchrony and all have oscillations within the 35-75 hertz range (that is, 35-75 cycles per second). However, many philosophers and scientists have put forth other candidates for what, specifically, to identify in the brain with consciousness. This vast enterprise has come to be known as the search for the “neural correlates of consciousness” or NCCs (see section 5b below for more). The overall idea is to show how one or more specific kinds of neuro-chemical activity can underlie and explain conscious mental activity (Metzinger 2000). Of course, mere “correlation” is not enough for a fully adequate neural theory, and explaining just what counts as an NCC turns out to be more difficult than one might think (Chalmers 2000). Even Crick and Koch have acknowledged that they, at best, provide a necessary condition for consciousness, and that such firing patterns are not automatically sufficient for having conscious experience.

b. Representational Theories of Consciousness

Many current theories attempt to reduce consciousness in mentalistic terms. One broadly popular approach along these lines is to reduce consciousness to “mental representations” of some kind. The notion of a “representation” is of course very general and can be applied to photographs, signs, and various natural objects, such as the rings inside a tree. Much of what goes on in the brain, however, might also be understood in a representational way; for example, as mental events representing outer objects partly because they are caused by such objects in, say, cases of veridical visual perception. More specifically, philosophers will often call such representational mental states “intentional states” which have representational content; that is, mental states which are “about something” or “directed at something” as when one has a thought about the house or a perception of the tree. Although intentional states are sometimes contrasted with phenomenal states, such as pains and color experiences, it is clear that many conscious states have both phenomenal and intentional properties, such as visual perceptions. It should be noted that the relation between intentionality and consciousness is itself a major ongoing area of dispute, with some arguing that genuine intentionality actually presupposes consciousness in some way (Searle 1992, Siewert 1998, Horgan and Tienson 2002) while most representationalists insist that intentionality is prior to consciousness (Gennaro 2012, chapter two).

The general view that we can explain conscious mental states in terms of representational or intentional states is called “representationalism.” Although not automatically reductionist in spirit, most versions of representationalism do indeed attempt such a reduction. Most representationalists, then, believe that there is room for a kind of “second-step” reduction to be filled in later by neuroscience. The other related motivation for representational theories of consciousness is that many believe that an account of representation or intentionality can more easily be given in naturalistic terms, such as causal theories whereby mental states are understood as representing outer objects in virtue of some reliable causal connection. The idea, then, is that if consciousness can be explained in representational terms and representation can be understood in purely physical terms, then there is the promise of a reductionist and naturalistic theory of consciousness. Most generally, however, we can say that a representationalist will typically hold that the phenomenal properties of experience (that is, the “qualia” or “what it is like of experience” or “phenomenal character”) can be explained in terms of the experiences’ representational properties. Alternatively, conscious mental states have no mental properties other than their representational properties. Two conscious states with all the same representational properties will not differ phenomenally. For example, when I look at the blue sky, what it is like for me to have a conscious experience of the sky is simply identical with my experience’s representation of the blue sky.

i. First-Order Representationalism
A first-order representational (FOR) theory of consciousness is a theory that attempts to explain conscious experience primarily in terms of world-directed (or first-order) intentional states. Probably the two most cited FOR theories of consciousness are those of Fred Dretske (1995) and Michael Tye (1995, 2000), though there are many others as well (e.g., Harman 1990, Kirk 1994, Byrne 2001, Thau 2002, Droege 2003). Tye’s theory is more fully worked out and so will be the focus of this section. Like other FOR theorists, Tye holds that the representational content of my conscious experience (that is, what my experience is about or directed at) is identical with the phenomenal properties of experience. Aside from reductionistic motivations, Tye and other FOR representationalists often use the somewhat technical notion of the “transparency of experience” as support for their view (Harman 1990). This is an argument based on the phenomenological first-person observation, which goes back to Moore (1903), that when one turns one’s attention away from, say, the blue sky and onto one’s experience itself, one is still only aware of the blueness of the sky. The experience itself is not blue; rather, one “sees right through” one’s experience to its representational properties, and there is nothing else to one’s experience over and above such properties.

Whatever the merits and exact nature of the argument from transparency (see Kind 2003), it is clear, of course, that not all mental representations are conscious, so the key question eventually becomes: What exactly distinguishes conscious from unconscious mental states (or representations)? What makes a mental state a conscious mental state? Here Tye defends what he calls “PANIC theory.” The acronym “PANIC” stands for poised, abstract, non-conceptual, intentional content. Without probing into every aspect of PANIC theory, Tye holds that at least some of the representational content in question is non-conceptual (N), which is to say that the subject can lack the concept for the properties represented by the experience in question, such as an experience of a certain shade of red that one has never seen before. Actually, the exact nature or even existence of non-conceptual content of experience is itself a highly debated and difficult issue in philosophy of mind (Gunther 2003). Gennaro (2012), for example, defends conceptualism and connects it in various ways to the higher-order thought theory of consciousness (see section 4b.ii). Conscious states clearly must also have “intentional content” (IC) for any representationalist. Tye also asserts that such content is “abstract” (A) and not necessarily about particular concrete objects. This condition is needed to handle cases of hallucinations, where there are no concrete objects at all or cases where different objects look phenomenally alike. Perhaps most important for mental states to be conscious, however, is that such content must be “poised” (P), which is an importantly functional notion. The “key idea is that experiences and feelings...stand ready and available to make a direct impact on beliefs and/or desires. 
For example…feeling hungry… has an immediate cognitive effect, namely, the desire to eat….States with nonconceptual content that are not so poised lack phenomenal character [because]…they arise too early, as it were, in the information processing” (Tye 2000: 62).

One objection to Tye’s theory is that it does not really address the hard problem of phenomenal consciousness (see section 3b.i). This is partly because what really seems to be doing most of the work on Tye’s PANIC account is the very functional sounding “poised” notion, which is perhaps closer to Block’s access consciousness (see section 1) and is therefore not necessarily able to explain phenomenal consciousness (see Kriegel 2002). In short, it is difficult to see just how Tye’s PANIC account might not equally apply to unconscious representations and thus how it really explains phenomenal consciousness.

Other standard objections to Tye’s theory as well as to other FOR accounts include the concern that it does not cover all kinds of conscious states. Some conscious states seem not to be “about” anything, such as pains, anxiety, or after-images, and so would be non-representational conscious states. If so, then conscious experience cannot generally be explained in terms of representational properties (Block 1996). Tye responds that pains, itches, and the like do represent, in the sense that they represent parts of the body. And after-images, hallucinations, and the like either misrepresent (which is still a kind of representation) or the conscious subject still takes them to have representational properties from the first-person point of view. Indeed, Tye (2000) admirably goes to great lengths and argues convincingly in response to a whole host of alleged counter-examples to representationalism. Historically among them are various hypothetical cases of inverted qualia (see Shoemaker 1982), the mere possibility of which is sometimes taken as devastating to representationalism. These are cases where behaviorally indistinguishable individuals have inverted color perceptions of objects, such that person A visually experiences a lemon the way that person B experiences a ripe tomato with respect to their color, and so on for all yellow and red objects. Isn’t it possible that there are two individuals whose color experiences are inverted with respect to the objects of perception? (For more on the importance of color in philosophy, see Hardin 1986.)

A somewhat different twist on the inverted spectrum is famously put forth in Block’s (1990) Inverted Earth case. On Inverted Earth every object has the complementary color to the one it has here, but we are asked to imagine that a person is equipped with color-inverting lenses and then sent to Inverted Earth completely ignorant of those facts. Since the color inversions cancel out, the phenomenal experiences remain the same, yet there certainly seem to be different representational properties of objects involved. The strategy on the part of critics, in short, is to think of counter-examples (either actual or hypothetical) whereby there is a difference between the phenomenal properties in experience and the relevant representational properties in the world. Such objections can, perhaps, be answered by Tye and others in various ways, but significant debate continues (Macpherson 2005). Intuitions also dramatically differ as to
the very plausibility and value of such thought experiments. (For more, see Seager 1999, chapters 6 and 7. See also Chalmers 2004 for an excellent discussion of the dizzying array of possible representationalist positions.)

ii. Higher-Order Representationalism As we have seen, one question that should be answered by any theory of consciousness is: What makes a mental state a conscious mental state? There is a long tradition that has attempted to understand consciousness in terms of some kind of higher-order awareness. For example, John Locke (1689/1975) once said that “consciousness is the perception of what passes in a man’s own mind.” This intuition has been revived by a number of philosophers (Rosenthal, 1986, 1993b, 1997, 2000, 2004, 2005; Gennaro 1996a, 2012; Armstrong, 1968, 1981; Lycan, 1996, 2001). In general, the idea is that what makes a mental state conscious is that it is the object of some kind of higher-order representation (HOR). A mental state M becomes conscious when there is a HOR of M. A HOR is a “meta-psychological” state, i.e., a mental state directed at another mental state. So, for example, my desire to write a good encyclopedia entry becomes conscious when I am (non-inferentially) “aware” of the desire. Intuitively, it seems that conscious states, as opposed to unconscious ones, are mental states that I am “aware of” in some sense. This is sometimes referred to as the Transitivity Principle. Any theory which attempts to explain consciousness in terms of higher-order states is known as a higher-order (HO) theory of consciousness. It is best initially to use the more neutral term “representation” because there are a number of different kinds of higher-order theory, depending upon how one characterizes the HOR in question. HO theories, thus, attempt to explain consciousness in mentalistic terms, that is, by reference to such notions as “thoughts” and “awareness.” Conscious mental states arise when two unconscious mental states are related in a certain specific way; namely, that one of them (the HOR) is directed at the other (M). 
HO theorists are united in the belief that their approach can better explain consciousness than any purely FOR theory, which has significant difficulty in explaining the difference between unconscious and conscious mental states. There are various kinds of HO theory with the most common division between higher-order thought (HOT) theories and higher-order perception (HOP) theories. HOT theorists, such as David M. Rosenthal, think it is better to understand the HOR as a thought of some kind. HOTs are treated as cognitive states involving some kind of conceptual component. HOP theorists urge that the HOR is a perceptual or experiential state of some kind (Lycan 1996) which does not require the kind of conceptual content invoked by HOT theorists. Partly due to Kant (1781/1965), HOP theory is sometimes referred to as “inner sense theory” as a way of emphasizing its sensory or perceptual aspect. Although HOT and HOP theorists agree on the need for a HOR theory of consciousness, they do sometimes argue for the superiority of their respective positions (such as in Rosenthal 2004, Lycan 2004, and Gennaro 2012). Some philosophers, however, have argued that the difference between these theories is perhaps not as important or as clear as some think it is (Güzeldere 1995, Gennaro 1996a, Van Gulick 2000).

A common initial objection to HOR theories is that they are circular and lead to an infinite regress. It might seem that the HOT theory results in circularity by defining consciousness in terms of HOTs. It also might seem that an infinite regress results because a conscious mental state must be accompanied by a HOT, which, in turn, must be accompanied by another HOT ad infinitum. However, the standard reply is that when a conscious mental state is a first-order world-directed state the higher-order thought (HOT) is not itself conscious; otherwise, circularity and an infinite regress would follow. When the HOT is itself conscious, there is a yet higher-order (or third-order) thought directed at the second-order state. In this case, we have introspection which involves a conscious HOT directed at an inner mental state. When one introspects, one's attention is directed back into one's mind. For example, what makes my desire to write a good entry a conscious first-order desire is that there is a (non-conscious) HOT directed at the desire. In this case, my conscious focus is directed at the entry and my computer screen, so I am not consciously aware of having the HOT from the first-person point of view. When I introspect that desire, however, I then have a conscious HOT (accompanied by a yet higher, third-order, HOT) directed at the desire itself (see Rosenthal 1986).
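The regress reply has a simple structural shape, which a toy model can make explicit. The sketch below is purely illustrative (the class and function names are invented, not drawn from any HOT theorist's formalism): a state counts as conscious just in case some state in the system is directed at it, and the chain of higher-order states terminates in an unconscious HOT, so no infinite regress arises.

```python
# Toy sketch of the HOT structure, under the simplifying assumption that
# "directed at" can be modeled as a pointer between states.
from dataclasses import dataclass
from typing import Optional

@dataclass
class MentalState:
    name: str
    target: Optional["MentalState"] = None  # the state this one is directed at (None for world-directed states)

def is_conscious(state, all_states):
    """A state is conscious iff some state in the system is directed at it."""
    return any(s.target is state for s in all_states)

# A first-order desire plus an (unconscious) HOT directed at it.
desire = MentalState("desire to write a good entry")
hot = MentalState("HOT about the desire", target=desire)
states = [desire, hot]

is_conscious(desire, states)  # the desire is conscious
is_conscious(hot, states)     # the HOT itself is not, so the chain stops here

# Introspection: a third-order thought makes the HOT itself conscious.
states.append(MentalState("third-order thought", target=hot))
```

Running the model shows the point of the reply: consciousness of the first-order desire requires only one unconscious HOT, and introspection adds exactly one more level rather than infinitely many.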

Peter Carruthers (2000) has proposed another possibility within HO theory; namely, that it is better for various reasons to think of the HOTs as dispositional states instead of the standard view that the HOTs are actual, though he also understands his “dispositional HOT theory” to be a form of HOP theory (Carruthers 2004). The basic idea is that the conscious status of an experience is due to its availability to higher-order thought. So “conscious experience occurs when perceptual contents are fed into a special short-term buffer memory store, whose function is to make those contents available to cause HOTs about themselves.” (Carruthers 2000: 228). Some first-order perceptual contents are available to a higher-order “theory of mind mechanism,” which transforms those representational contents into conscious contents. Thus, no actual HOT occurs. Instead, according to Carruthers, some perceptual states acquire a dual intentional content; for example, a conscious experience of red not only has a first-order content of “red,” but also has the higher-order content “seems red” or “experience of red.” Carruthers also makes interesting use of so-called “consumer semantics” in order to fill out his theory of phenomenal consciousness. The content of a mental state depends, in part, on the powers of the organisms which “consume” that state, e.g., the kinds of inferences which the organism can make when it is in that state. Daniel Dennett (1991) is sometimes credited with an earlier version of a dispositional account (see Carruthers 2000, chapter ten). Carruthers’ dispositional theory is often criticized by those who, among other things, do not see how the mere disposition toward a mental state can render it conscious (Rosenthal 2004; see also Gennaro 2004, 2012; for more, see Consciousness, Higher Order Theories of.) 
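Carruthers' availability idea can likewise be pictured in a toy model. The sketch below is an informal illustration under simplifying assumptions (the buffer class and its capacity are invented): conscious status consists in a content's membership in a short-term buffer whose contents are available to a higher-order "theory of mind" consumer, with no actual HOT ever tokened.

```python
# Minimal sketch of a dispositional (availability-based) account:
# being conscious = being held in a buffer available to higher-order thought.
class Buffer:
    def __init__(self, capacity=4):
        self.capacity = capacity
        self.contents = []

    def feed(self, percept):
        self.contents.append(percept)
        if len(self.contents) > self.capacity:
            self.contents.pop(0)  # oldest contents drop out of availability

def is_conscious(percept, buffer):
    # No actual HOT is tokened; mere availability to higher-order
    # thought is what confers conscious status on the content.
    return percept in buffer.contents

buf = Buffer(capacity=2)
buf.feed("red")
buf.feed("loud noise")
buf.feed("itch")  # with capacity 2, "red" is displaced from the buffer
```

On this rendering, `is_conscious("itch", buf)` holds while `is_conscious("red", buf)` no longer does, even though nothing like an occurrent higher-order thought was ever produced.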
It is worth briefly noting a few typical objections to HO theories (many of which can be found in Byrne 1997): First, and perhaps most common, is that various animals (and even infants) are not likely to have the conceptual sophistication required for HOTs, and so that would render animal (and infant) consciousness very unlikely (Dretske 1995, Seager 2004). Are cats and dogs capable of having complex higher-order thoughts such as “I am in mental state M”? Although most who bring forth this objection are not HO theorists, Peter Carruthers (1989) is one HO theorist who actually embraces the conclusion that (most) animals do not have phenomenal consciousness. Gennaro (1993, 1996) has replied to Carruthers on this point; for example, it is argued that the HOTs need not be as sophisticated as it might initially appear and there is ample comparative neurophysiological evidence supporting the conclusion that animals have conscious mental states. Most HO theorists do not wish to accept the absence of animal or infant consciousness as a consequence of holding the theory. The debate continues, however, in Carruthers (2000, 2005, 2008) and Gennaro (2004, 2009, 2012, chapters seven and eight).

A second objection has been referred to as the “problem of the rock” (Stubenberg 1998) and the “generality problem” (Van Gulick 2000, 2004), but it is originally due to Alvin Goldman (Goldman 1993). When I have a thought about a rock, it is certainly not true that the rock becomes conscious. So why should I suppose that a mental state becomes conscious when I think about it? This is puzzling to many and the objection forces HO theorists to explain just how adding the HO state changes an unconscious state into a conscious one. There have been, however, a number of responses to this kind of objection (Rosenthal 1997, Lycan 1996, Van Gulick 2000, 2004, Gennaro 2005, 2012, chapter four). A common theme is that there is a principled difference in the objects of the HO states in question. Rocks and the like are not mental states in the first place, and so HO theorists are first and foremost trying to explain how a mental state becomes conscious. The objects of the HO states must be “in the head.” Third, the above leads somewhat naturally to an objection related to Chalmers’ hard problem (section 3b.i). It might be asked just how exactly any HO theory really explains the subjective or phenomenal aspect of conscious experience. How or why does a mental state come to have a first-person qualitative “what it is like” aspect by virtue of the presence of a HOR directed at it? It is probably fair to say that HO theorists have been slow to address this problem, though a number of overlapping responses have emerged (see also Gennaro 2005, 2012, chapter four, for more extensive treatment). Some argue that this objection misconstrues the main and more modest purpose of (at least, their) HO theories. The claim is that HO theories are theories of consciousness only in the sense that they are attempting to explain what differentiates conscious from unconscious states, i.e., in terms of a higher-order awareness of some kind. 
A full account of “qualitative properties” or “sensory qualities” (which can themselves be non-conscious) can be found elsewhere in their work, but is independent of their theory of consciousness (Rosenthal 1991, Lycan 1996, 2001). Thus, a full explanation of phenomenal consciousness does require more than a HO theory, but that is no objection to HO theories as such. Another response is that proponents of the
hard problem unjustly raise the bar as to what would count as a viable explanation of consciousness so that any such reductivist attempt would inevitably fall short (Carruthers 2000, Gennaro 2012). Part of the problem, then, is a lack of clarity about what would even count as an explanation of consciousness (Van Gulick 1995; see also section 3b). Once this is clarified, however, the hard problem can indeed be solved. Moreover, anyone familiar with the literature knows that there are significant terminological difficulties in the use of various crucial terms which sometimes inhibits genuine progress (but see Byrne 2004 for some helpful clarification).

A fourth important objection to HO approaches is the question of how such theories can explain cases where the HO state might misrepresent the lower-order (LO) mental state (Byrne 1997, Neander 1998, Levine 2001, Block 2011). After all, if we have a representational relation between two states, it seems possible for misrepresentation or malfunction to occur. If it does, then what explanation can be offered by the HO theorist? If my LO state registers a red percept and my HO state registers a thought about something green due, say, to some neural misfiring, then what happens? It seems that problems loom for any answer given by a HO theorist and the cause of the problem has to do with the very nature of the HO theorist’s belief that there is a representational relation between the LO and HO states. For example, if the HO theorist takes the option that the resulting conscious experience is reddish, then it seems that the HO state plays no role in determining the qualitative character of the experience. On the other hand, if the resulting experience is greenish, then the LO state seems irrelevant. Rosenthal and Weisberg hold that the HO state determines the qualitative properties even in cases when there is no LO state at all (Rosenthal 2005, 2011, Weisberg 2008, 2011a, 2011b). Gennaro (2012) argues that no conscious experience results in such cases and wonders, for example, how a sole (unconscious) HOT can result in a conscious state at all. He argues that there must be a match, complete or partial, between the LO and HO state in order for a conscious state to exist in the first place. This important objection forces HO theorists to be clearer about just how to view the relationship between the LO and HO states. Debate is ongoing and significant both on varieties of HO theory and in terms of the above objections (see Gennaro 2004a). There is also interdisciplinary interest in how various HO theories might be realized in the brain (Gennaro 2012, chapter nine).

iii. Hybrid Representational Accounts A related and increasingly popular version of representational theory holds that the meta-psychological state in question should be understood as intrinsic to (or part of) an overall complex conscious state. This stands in contrast to the standard view that the HO state is extrinsic to (that is, entirely distinct from) its target mental state. The assumption, made by Rosenthal for example, about the extrinsic nature of the meta-thought has increasingly come under attack, and thus various hybrid representational theories can be found in the literature. One motivation for this movement is growing dissatisfaction with standard HO theory’s ability to handle some of the objections addressed in the previous section. Another reason is renewed interest in a view somewhat closer to the one held by Franz Brentano (1874/1973) and various other followers, normally associated with the phenomenological tradition (Husserl 1913/1931, 1929/1960; Sartre 1956; see also Smith 1986, 2004). To varying degrees, these views have in common the idea that conscious mental states, in some sense, represent themselves, which then still involves having a thought about a mental state, just not a distinct or separate state. Thus, when one has a conscious desire for a cold glass of water, one is also aware that one is in that very state. The conscious desire both represents the glass of water and itself. It is this “self-representing” which makes the state conscious. These theories can go by various names, which sometimes seem in conflict, and have added significantly in recent years to the acronyms which abound in the literature. For example, Gennaro (1996a, 2002, 2004, 2006, 2012) has argued that, when one has a first-order conscious state, the HOT is better viewed as intrinsic to the target state, so that we have a complex conscious state with parts. 
Gennaro calls this the “wide intrinsicality view” (WIV) and he also argues that Jean-Paul Sartre’s theory of consciousness can be understood in this way (Gennaro 2002). Gennaro holds that conscious mental states should be understood (as Kant might have today) as global brain states which are combinations of passively received perceptual input and presupposed higher-order conceptual activity directed at that input. Higher-
order concepts in the meta-psychological thoughts are presupposed in having first-order conscious states. Robert Van Gulick (2000, 2004, 2006) has also explored the alternative that the HO state is part of an overall global conscious state. He calls such states “HOGS” (Higher-Order Global States) whereby a lower-order unconscious state is “recruited” into a larger state, which becomes conscious partly due to the implicit self-awareness that one is in the lower-order state. Both Gennaro and Van Gulick have suggested that conscious states can be understood materialistically as global states of the brain, and it would be better to treat the first-order state as part of the larger complex brain state. This general approach is also forcefully advocated by Uriah Kriegel (Kriegel 2003a, 2003b, 2005, 2006, 2009) and is even the subject of an entire anthology debating its merits (Kriegel and Williford 2006). Kriegel has used several different names for his “neo-Brentanian theory,” such as the SOMT (Same-Order Monitoring Theory) and, more recently, the “self-representational theory of consciousness.” To be sure, the notion of a mental state representing itself or a mental state with one part representing another part is in need of further development and is perhaps somewhat mysterious. Nonetheless, there is agreement among these authors that conscious mental states are, in some important sense, reflexive or self-directed. And, once again, there is keen interest in developing this model in a way that coheres with the latest neurophysiological research on consciousness. A point of emphasis is on the concept of global meta-representation within a complex brain state, and attempts are underway to identify just how such an account can be realized in the brain. It is worth mentioning that this idea was also briefly explored by Thomas Metzinger who focused on the fact that consciousness “is something that unifies or synthesizes experience” (Metzinger 1995: 454). 
Metzinger calls this the process of “higher-order binding” and thus uses the acronym HOB. Others who hold some form of the self-representational view include Kobes (1995), Caston (2002), Williford (2006), Brook and Raymont (2006), and even Carruthers’ (2000) theory can be viewed in this light since he contends that conscious states have two representational contents. Thomas Natsoulas also has a series of papers defending a similar view, beginning with Natsoulas 1996. Some authors (such as Gennaro 2012) view this hybrid position to be a modified version of HOT theory; indeed, Rosenthal (2004) has called it “intrinsic higher-order theory.” Van Gulick also clearly wishes to preserve the HO in his HOGS. Others, such as Kriegel, are not inclined to call their views “higher-order” at all and call it, for example, the “same-order monitoring” or “self-representational” theory of consciousness. To some extent, this is a terminological dispute, but, despite important similarities, there are also key subtle differences between these hybrid alternatives. Like HO theorists, however, those who advocate this general approach all take very seriously the notion that a conscious mental state M is a state that subject S is (non-inferentially) aware that S is in. By contrast, one is obviously not aware of one’s unconscious mental states. Thus, there are various attempts to make sense of and elaborate upon this key intuition in a way that is, as it were, “in-between” standard FO and HO theory. (See also Lurz 2003 and 2004 for yet another interesting hybrid account.)

c. Other Cognitive Theories

Aside from the explicitly representational approaches discussed above, there are also related attempts to explain consciousness in other cognitive terms. The two most prominent such theories are worth describing here:

Daniel Dennett (1991, 2005) has put forth what he calls the Multiple Drafts Model (MDM) of consciousness. Although similar in some ways to representationalism, Dennett is most concerned that materialists avoid falling prey to what he calls the “myth of the Cartesian theater,” the notion that there is some privileged place in the brain where everything comes together to produce conscious experience. Instead, the MDM holds that all kinds of mental activity occur in the brain by parallel processes of interpretation, all of which are under frequent revision. The MDM rejects the idea of some “self” as an inner observer; rather, the self is the product or construction of a narrative which emerges over time. Dennett is also well known for rejecting the very assumption that there is a clear line to be drawn between conscious and unconscious mental states in terms of the problematic notion of “qualia.” He influentially
rejects strong emphasis on any phenomenological or first-person approach to investigating consciousness, advocating instead what he calls “heterophenomenology” according to which we should follow a more neutral path “leading from objective physical science and its insistence on the third person point of view, to a method of phenomenological description that can (in principle) do justice to the most private and ineffable subjective experiences.” (1991: 72)

Bernard Baars’ Global Workspace Theory (GWT) model of consciousness is probably the most influential theory proposed among psychologists (Baars 1988, 1997). The basic idea and metaphor is that we should think of the entire cognitive system as built on a “blackboard architecture” which is a kind of global workspace. According to GWT, unconscious processes and mental states compete for the spotlight of attention, from which information is “broadcast globally” throughout the system. Consciousness consists in such global broadcasting and is therefore also, according to Baars, an important functional and biological adaptation. We might say that consciousness is thus created by a kind of global access to select bits of information in the brain and nervous system. Despite Baars’ frequent use of “theater” and “spotlight” metaphors, he argues that his view does not entail the presence of the material Cartesian theater that Dennett is so concerned to avoid. It is, in any case, an empirical matter just how the brain performs the functions he describes, such as detecting mechanisms of attention.
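The workspace metaphor has a natural computational rendering. The following sketch is only a caricature of Baars' proposal (the module names and salience scores are invented for illustration): unconscious contents compete for the spotlight, and the winning content is broadcast to every module in the system.

```python
# Toy global-workspace cycle: competition for attention, then global broadcast.
class Module:
    def __init__(self, name):
        self.name = name
        self.received = []

    def receive(self, content):
        self.received.append(content)

def global_workspace_cycle(candidates, modules):
    """candidates: list of (content, salience) pairs from unconscious processes.
    The most salient content wins the spotlight and is broadcast globally;
    the losers remain unconscious (they reach no other module)."""
    winner, _ = max(candidates, key=lambda c: c[1])
    for m in modules:
        m.receive(winner)  # global broadcast = conscious access, on this model
    return winner

modules = [Module("memory"), Module("planning"), Module("speech")]
conscious_content = global_workspace_cycle(
    [("faint hum", 0.2), ("sudden pain", 0.9), ("peripheral motion", 0.5)],
    modules,
)
```

After one cycle, every module has access to "sudden pain" and to nothing else, which is the sense in which consciousness, on GWT, consists in global availability rather than in any property of the content itself.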

Objections to these cognitive theories include the charge that they do not really address the hard problem of consciousness (as described in section 3b.i), but only the “easy” problems. Dennett is also often accused of explaining away consciousness rather than really explaining it. It is also interesting to think about Baars’ GWT in light of Block’s distinction between access and phenomenal consciousness (see section 1). Does Baars’ theory only address access consciousness instead of the more difficult to explain phenomenal consciousness? (Two other psychological cognitive theories worth noting are the ones proposed by George Mandler 1975 and Tim Shallice 1988.)

d. Quantum Approaches

Finally, there are those who look deep beneath the neural level to the field of quantum mechanics, basically the study of sub-atomic particles, to find the key to unlocking the mysteries of consciousness. The bizarre world of quantum physics is quite different from the deterministic world of classical physics, and a major area of research in its own right. Such authors place the locus of consciousness at a very fundamental physical level. This somewhat radical, though exciting, option is explored most notably by physicist Roger Penrose (1989, 1994) and anesthesiologist Stuart Hameroff (1998). The basic idea is that consciousness arises through quantum effects which occur in subcellular neural structures known as microtubules, which are structural proteins in cell walls. There are also other quantum approaches which aim to explain the coherence of consciousness (Marshall and Zohar 1990) or use the “holistic” nature of quantum mechanics to explain consciousness (Silberstein 1998, 2001). It is difficult to assess these somewhat exotic approaches at present. Given the puzzling and often very counterintuitive nature of quantum physics, it is unclear whether such approaches will prove genuinely scientifically valuable methods in explaining consciousness. One concern is simply that these authors are trying to explain one puzzling phenomenon (consciousness) in terms of another mysterious natural phenomenon (quantum effects). Thus, the thinking seems to go, perhaps the two are essentially related somehow and other physicalistic accounts are looking in the wrong place, such as at the neuro-chemical level. Although many attempts to explain consciousness often rely on conjecture or speculation, quantum approaches may indeed lead the field along these lines. Of course, this doesn’t mean that some such theory isn’t correct. 
One exciting aspect of this approach is the resulting interdisciplinary interest it has generated among physicists and other scientists in the problem of consciousness.

5. Consciousness and Science: Key Issues

Over the past two decades there has been an explosion of interdisciplinary work in the science of consciousness. Some of the credit must go to the groundbreaking 1986 book by Patricia Churchland entitled Neurophilosophy. In this section, three of the most important such areas are addressed.

a. The Unity of Consciousness/The Binding Problem

Conscious experience seems to be “unified” in an important sense; this crucial feature of consciousness played an important role in the philosophy of Kant who argued that unified conscious experience must be the product of the (presupposed) synthesizing work of the mind. Getting clear about exactly what is meant by the “unity of consciousness” and explaining how the brain achieves such unity has become a central topic in the study of consciousness. There are many different senses of “unity” (see Tye 2003; Bayne and Chalmers 2003, Dainton 2000, 2008, Bayne 2010), but perhaps most common is the notion that, from the first-person point of view, we experience the world in an integrated way and as a single phenomenal field of experience. (For an important anthology on the subject, see Cleeremans 2003.) However, when one looks at how the brain processes information, one only sees discrete regions of the cortex processing separate aspects of perceptual objects. Even different aspects of the same object, such as its color and shape, are processed in different parts of the brain. Given that there is no “Cartesian theater” in the brain where all this information comes together, the problem arises as to just how the resulting conscious experience is unified. What mechanisms allow us to experience the world in such a unified way? What happens when this unity breaks down, as in various pathological cases? The “problem of integrating the information processed by different regions of the brain is known as the binding problem” (Cleeremans 2003: 1). Thus, the so-called “binding problem” is inextricably linked to explaining the unity of consciousness. As was seen earlier with neural theories (section 4a) and as will be seen below on the neural correlates of consciousness (5b), some attempts to solve the binding problem have to do with trying to isolate the precise brain mechanisms responsible for consciousness. 
For example, Crick and Koch’s (1990) idea that synchronous neural firings are (at least) necessary for consciousness can also be viewed as an attempt to explain how disparate neural networks bind together separate pieces of information to produce unified subjective conscious experience. Perhaps the binding problem and the hard problem of consciousness (section 3b.i) are very closely connected. If the binding problem can be solved, then we arguably have identified the elusive neural correlate of consciousness and have, therefore, perhaps even solved the hard problem. In addition, perhaps the explanatory gap between third-person scientific knowledge and first-person unified conscious experience can also be bridged. Thus, this exciting area of inquiry is central to some of the deepest questions in the philosophical and scientific exploration of consciousness.
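The synchrony proposal can be illustrated with a toy computation (the spike trains and the agreement threshold below are invented, not empirical data): features whose firing patterns coincide across time bins are grouped together as aspects of a single bound object, while out-of-phase features are not.

```python
# Toy binding-by-synchrony, in the spirit of Crick and Koch (1990):
# features represented by synchronous spike trains are "bound" together.
def synchrony(train_a, train_b):
    """Fraction of time bins in which the two trains agree (both fire or both silent)."""
    return sum(a == b for a, b in zip(train_a, train_b)) / len(train_a)

# Hypothetical spike trains (1 = fires in that time bin, 0 = silent).
spikes = {
    "red":    [1, 0, 1, 0, 1, 0, 1, 0],  # fires on even bins
    "round":  [1, 0, 1, 0, 1, 0, 1, 0],  # perfectly synchronous with "red"
    "moving": [0, 1, 0, 0, 1, 1, 0, 1],  # out of phase with both
}

def bound_with(feature, spikes, threshold=0.9):
    """Features bound with the given one: those whose trains exceed the synchrony threshold."""
    return [f for f in spikes
            if f != feature and synchrony(spikes[f], spikes[feature]) >= threshold]
```

Here `bound_with("red", spikes)` groups "red" with "round" but not with "moving": the redness and roundness are experienced as features of one object, which is the kind of integration the binding problem asks the brain to explain.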

b. The Neural Correlates of Consciousness (NCCs)

As was seen earlier in discussing neural theories of consciousness (section 4a), the search for the so-called “neural correlates of consciousness” (NCCs) is a major preoccupation of philosophers and scientists alike (Metzinger 2000). Narrowing down the precise brain property responsible for consciousness is a different and far more difficult enterprise than merely holding a generic belief in some form of materialism. One leading candidate is offered by Francis Crick and Christof Koch 1990 (see also Crick 1994, Koch 2004). The basic idea is that mental states become conscious when large numbers of neurons all fire in synchrony with one another (oscillations within the 35-75 hertz range or 35-75 cycles per second). Currently, one method used is simply to study some aspect of neural functioning with sophisticated detecting equipment (such as MRI and PET scans) and then correlate it with first-person reports of conscious experience. Another method is to study the difference in brain activity between those under anesthesia and those not under any such influence. A detailed survey would be impossible to give here, but a number of other candidates for the NCC have emerged over the past two decades, including reentrant cortical feedback loops in the neural circuitry throughout the brain (Edelman 1989, Edelman and


Tononi 2000), NMDA-mediated transient neural assemblies (Flohr 1995), and emotive somatosensory homeostatic processes in the frontal lobe (Damasio 1999). To elaborate briefly on Flohr’s theory, the idea is that anesthetics destroy conscious mental activity because they interfere with the functioning of NMDA synapses between neurons, that is, those dependent on N-methyl-D-aspartate receptors. These and other NCCs are explored at length in Metzinger (2000). The search for NCCs remains a central and active area of current scientific research in the field.
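Crick and Koch’s synchrony proposal can be made concrete with a toy simulation (a sketch only; all signal parameters are invented for illustration): two noisy signals that share a 40 Hz gamma-band rhythm show near-perfect phase-locking at that frequency, while the unshared noise does not.

```python
import numpy as np

# Hypothetical illustration of gamma-band (35-75 Hz) synchrony: two simulated
# "neural population" signals share a 40 Hz rhythm plus independent noise.
rng = np.random.default_rng(0)
fs = 1000                      # sampling rate, Hz (invented)
t = np.arange(0, 2, 1 / fs)    # 2 seconds of simulated "recording"

gamma = np.sin(2 * np.pi * 40 * t)               # shared 40 Hz rhythm
a = gamma + 0.5 * rng.standard_normal(t.size)    # population A
b = gamma + 0.5 * rng.standard_normal(t.size)    # population B

def band_phase_locking(x, y, f, fs):
    """Phase agreement of x and y at frequency f: 1 means perfect synchrony."""
    n = len(x)
    k = int(round(f * n / fs))          # FFT bin corresponding to frequency f
    px = np.angle(np.fft.rfft(x)[k])    # phase of x at f
    py = np.angle(np.fft.rfft(y)[k])    # phase of y at f
    return np.cos(px - py)              # 1 when the phases coincide

print(band_phase_locking(a, b, 40, fs))   # close to 1: synchronous at 40 Hz
```

This only illustrates what “firing in synchrony” means as a measurable quantity; it obviously takes no stand on whether such synchrony suffices for consciousness.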

One problem with some of the above candidates is determining exactly how they are related to consciousness. For example, although a case can be made that some of them are necessary for conscious mentality, it is unclear that they are sufficient. That is, some of the above seem to occur unconsciously as well. And pinning down a narrow enough necessary condition is not as easy as it might seem. Another general worry is with the very use of the term “correlate.” As any philosopher, scientist, and even undergraduate student should know, saying that “A is correlated with B” is rather weak (though it is an important first step), especially if one wishes to establish the stronger identity claim between consciousness and neural activity. Even if such a correlation can be established, we cannot automatically conclude that there is an identity relation. Perhaps A causes B or B causes A, and that’s why we find the correlation. Even most dualists can accept such interpretations. Maybe there is some other neural process C which causes both A and B. “Correlation” is not even the same as “cause,” let alone enough to establish “identity.” Finally, some NCCs are not even necessarily put forth as candidates for all conscious states, but rather for certain specific kinds of consciousness (e.g., visual).
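The worry about a hidden common cause can be illustrated numerically (a toy sketch; the variables stand in for nothing real): when a third process C drives both A and B, A and B correlate strongly even though neither causes the other and they are plainly not identical.

```python
import numpy as np

# Invented illustration: C is a hidden common cause driving both A and B.
rng = np.random.default_rng(1)
c = rng.standard_normal(10_000)              # hidden process C
a = c + 0.3 * rng.standard_normal(10_000)    # "A", driven by C plus noise
b = c + 0.3 * rng.standard_normal(10_000)    # "B", driven by C plus noise

# Pearson correlation of A and B is high despite no direct A->B or B->A link.
r = np.corrcoef(a, b)[0, 1]
print(r)   # a strong correlation, produced entirely by the common cause
```

The point is purely logical: observing a robust A-B correlation leaves open causation in either direction, a common cause, or identity.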

c. Philosophical Psychopathology

Philosophers have long been intrigued by disorders of the mind and consciousness. Part of the interest is presumably that if we can understand how consciousness goes wrong, then that can help us to theorize about the normally functioning mind. Going back at least as far as John Locke (1689/1975), there has been some discussion about the philosophical implications of multiple personality disorder (MPD), which is now called “dissociative identity disorder” (DID). Questions abound: Could there be two centers of consciousness in one body? What makes a person the same person over time? What makes a person a person at any given time? These questions are closely linked to the traditional philosophical problem of personal identity, which is also importantly related to some aspects of consciousness research. Much the same can be said for memory disorders, such as various forms of amnesia (see Gennaro 1996a, chapter 9). Does consciousness require some kind of autobiographical memory or psychological continuity? On a related front, there is significant interest in experimental results from patients who have undergone a commissurotomy, which is usually performed to relieve symptoms of severe epilepsy when all else fails. In this procedure, the nerve fibers connecting the two brain hemispheres are cut, resulting in so-called “split-brain” patients (Bayne 2010).

Philosophical interest is so high that there is now a book series called Philosophical Psychopathology published by MIT Press. Another rich source of information comes from the provocative and accessible writings of neurologists on a whole host of psychopathologies, most notably Oliver Sacks (starting with his 1987 book) and, more recently, V. S. Ramachandran (2004; see also Ramachandran and Blakeslee 1998). Another launching point came from the discovery of the phenomenon known as “blindsight” (Weiskrantz 1986), which is very frequently discussed in the philosophical literature regarding its implications for consciousness. Blindsight patients are blind in a well-defined part of the visual field (due to cortical damage), and yet, when forced to guess, can identify the location or orientation of an object in the blind field with a higher than expected degree of accuracy. There is also philosophical interest in many other disorders, such as phantom limb pain (where one feels pain in a missing or amputated limb), various agnosias (such as visual agnosia, where one is not capable of visually recognizing everyday objects), and anosognosia (the denial of illness, such as when one claims that a paralyzed limb is still functioning, or when one denies that one is blind). These phenomena raise a number of important philosophical questions and have forced philosophers to rethink some very


basic assumptions about the nature of mind and consciousness. Much has also recently been learned about autism and various forms of schizophrenia. A common view is that these disorders involve some kind of deficit in self-consciousness or in one’s ability to use certain self-concepts. (For a nice review article, see Graham 2002.) Synesthesia is also a fascinating abnormal phenomenon, although not really a “pathological” condition as such (Cytowic 2003). Those with synesthesia literally have taste sensations when seeing certain shapes or have color sensations when hearing certain sounds. It is thus an often bizarre mixing of incoming sensory input via different modalities.

One of the exciting results of this relatively new sub-field is the important interdisciplinary interest that it has generated among philosophers, psychologists, and scientists (such as in Graham 2010, Hirstein 2005, and Radden 2004).

6. Animal and Machine Consciousness

Two final areas of interest involve animal and machine consciousness. In the former case it is clear that we have come a long way from the Cartesian view that animals are mere “automata” and that they do not even have conscious experience (perhaps partly because they do not have immortal souls). In addition to the obviously significant behavioral similarities between humans and many animals, much more is known today about other physiological similarities, such as brain and DNA structures. To be sure, there are important differences as well, and there are, no doubt, some genuinely difficult “grey areas” where one might have legitimate doubts about the consciousness of some animals or organisms, such as small rodents, some birds and fish, and especially various insects. Nonetheless, it seems fair to say that most philosophers today readily accept that a significant portion of the animal kingdom is capable of having conscious mental states, though there are still notable exceptions to that rule (Carruthers 2000, 2005). Of course, this is not to say that various animals can have all of the same kinds of sophisticated conscious states enjoyed by human beings, such as reflecting on philosophical and mathematical problems, enjoying artworks, thinking about the vast universe or the distant past, and so on. However, it still seems reasonable to believe that animals can have at least some conscious states, from rudimentary pains to various perceptual states and perhaps even to some level of self-consciousness. A number of key areas are under continuing investigation. For example, to what extent can animals recognize themselves, such as in a mirror, in order to demonstrate some level of self-awareness? To what extent can animals deceive or empathize with other animals, either of which would indicate awareness of the minds of others? These and other important questions are at the center of much current theorizing about animal cognition. (See Keenan et al. 2003 and Beckoff et al. 2002.) In some ways, the problem of knowing about animal minds is an interesting sub-area of the traditional epistemological “problem of other minds”: How do we even know that other humans have conscious minds? What justifies such a belief?

The possibility of machine (or robot) consciousness has intrigued philosophers and non-philosophers alike for decades. Could a machine really think or be conscious? Could a robot really subjectively experience the smelling of a rose or the feeling of pain? One important early launching point was a well-known paper by the mathematician Alan Turing (1950), which proposed what has come to be known as the “Turing test” for machine intelligence and thought (and perhaps consciousness as well). The basic idea is that if a machine could fool an interrogator (who could not see the machine) into thinking that it was human, then we should say it thinks or, at least, has intelligence. However, Turing was probably overly optimistic: it is doubtful that anything even today can pass the Turing test, since most programs are specialized and have very narrow uses. One cannot ask the machine about virtually anything, as Turing had envisioned. Moreover, even if a machine or robot could pass the Turing test, many remain skeptical as to whether this demonstrates genuine machine thinking, let alone consciousness. For one thing, many philosophers would not take such purely behavioral (e.g., linguistic) evidence to support the conclusion that machines are capable of having phenomenal first-person experiences. Merely using words like “red” does not ensure that there is the corresponding sensation of red or any real grasp of the meaning of “red.” Turing himself considered numerous objections and offered his own replies, many of which are still debated today.


Another much discussed argument is John Searle’s (1980) famous Chinese Room Argument, which has spawned an enormous amount of literature since its original publication (see also Searle 1984; Preston and Bishop 2002). Searle is concerned to reject what he calls “strong AI” which is the view that suitably programmed computers literally have a mind, that is, they really understand language and actually have other mental capacities similar to humans. This is contrasted with “weak AI” which is the view that computers are merely useful tools for studying the mind. The gist of Searle’s argument is that he imagines himself running a program for using Chinese and then shows that he does not understand Chinese; therefore, strong AI is false; that is, running the program does not result in any real understanding (or thought or consciousness, by implication). Searle supports his argument against strong AI by utilizing a thought experiment whereby he is in a room and follows English instructions for manipulating Chinese symbols in order to produce appropriate answers to questions in Chinese. Searle argues that, despite the appearance of understanding Chinese (say, from outside the room), he does not understand Chinese at all. He does not thereby know Chinese, but is merely manipulating symbols on the basis of syntax alone. Since this is what computers do, no computer, merely by following a program, genuinely understands anything. Searle replies to numerous possible criticisms in his original paper (which also comes with extensive peer commentary), but suffice it to say that not everyone is satisfied with his responses. For example, it might be argued that the entire room or “system” understands Chinese if we are forced to use Searle’s analogy and thought experiment. Each part of the room doesn’t understand Chinese (including Searle himself) but the entire system does, which includes the instructions and so on. 
Searle’s larger argument, however, is that one cannot get semantics (meaning) from syntax (formal symbol manipulation). Despite heavy criticism of the argument, two central issues are raised by Searle which continue to be of deep interest. First, how and when does one distinguish mere “simulation” of some mental activity from genuine “duplication”? Searle’s view is that computers are, at best, merely simulating understanding and thought, not really duplicating it. Much like we might say that a computerized hurricane simulation does not duplicate a real hurricane, Searle insists the same goes for any alleged computer “mental” activity. We do after all distinguish between real diamonds or leather and mere simulations which are just not the real thing. Second, and perhaps even more important, when considering just why computers really can’t think or be conscious, Searle interestingly reverts back to a biologically based argument. In essence, he says that computers or robots are just not made of the right stuff with the right kind of “causal powers” to produce genuine thought or consciousness. After all, even a materialist does not have to allow that any kind of physical stuff can produce consciousness any more than any type of physical substance can, say, conduct electricity. Of course, this raises a whole host of other questions which go to the heart of the metaphysics of consciousness. To what extent must an organism or system be physiologically like us in order to be conscious? Why is having a certain biological or chemical make up necessary for consciousness? Why exactly couldn’t an appropriately built robot be capable of having conscious mental states? How could we even know either way? However one answers these questions, it seems that building a truly conscious Commander Data is, at best, still just science fiction.
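The syntax-versus-semantics point can be caricatured in a few lines of code (a deliberately crude sketch; the “rulebook” entries are invented): a lookup table emits fluent-looking Chinese replies while nothing in the system attaches any meaning to the symbols it manipulates.

```python
# A minimal caricature of the Chinese Room: the "program" maps Chinese input
# strings to Chinese output strings by pure pattern matching. The rulebook
# entries below are invented for illustration.
RULEBOOK = {
    "你好吗？": "我很好，谢谢。",          # "How are you?" -> "I'm fine, thanks."
    "天空是什么颜色？": "天空是蓝色的。",   # "What color is the sky?" -> "The sky is blue."
}

def room(symbols: str) -> str:
    # Syntax only: match the shape of the input, emit the paired output shape.
    # Nothing here represents what any symbol means.
    return RULEBOOK.get(symbols, "对不起，我不明白。")  # default: "Sorry, I don't understand."

print(room("你好吗？"))   # a fluent-looking reply, with no understanding anywhere
```

Whether scaling such symbol manipulation up (as in real programs) could ever yield genuine understanding is, of course, exactly what the Chinese Room debate is about.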

In any case, the growing areas of cognitive science and artificial intelligence are major fields within philosophy of mind and can importantly bear on philosophical questions of consciousness. Much of current research focuses on how to program a computer to model the workings of the human brain, such as with so-called “neural (or connectionist) networks.”

7. References and Further Reading

Alter, T. and S. Walter, eds. Phenomenal Concepts and Phenomenal Knowledge: New Essays on Consciousness and Physicalism. New York: Oxford University Press, 2007.
Armstrong, D. A Materialist Theory of Mind. London: Routledge and Kegan Paul, 1968.
Armstrong, D. "What is Consciousness?" In The Nature of Mind. Ithaca, NY: Cornell University Press, 1981.
Baars, B. A Cognitive Theory of Consciousness. Cambridge: Cambridge University Press, 1988.
Baars, B. In the Theater of Consciousness. New York: Oxford University Press, 1997.


Baars, B., Banks, W., and Newman, J. eds. Essential Sources in the Scientific Study of Consciousness. Cambridge, MA: MIT Press, 2003.
Balog, K. "Conceivability, Possibility, and the Mind-Body Problem." Philosophical Review 108: 497-528, 1999.
Bayne, T. & Chalmers, D. "What is the Unity of Consciousness?" In Cleeremans 2003.
Bayne, T. The Unity of Consciousness. New York: Oxford University Press, 2010.
Beckoff, M., Allen, C., and Burghardt, G. The Cognitive Animal: Empirical and Theoretical Perspectives on Animal Cognition. Cambridge, MA: MIT Press, 2002.
Blackmore, S. Consciousness: An Introduction. Oxford: Oxford University Press, 2004.
Block, N. "Troubles with Functionalism." In N. Block, ed. Readings in the Philosophy of Psychology, Volume 1. Cambridge, MA: Harvard University Press, 1980a.
Block, N. "Are Absent Qualia Impossible?" Philosophical Review 89: 257-74, 1980b.
Block, N. "Inverted Earth." In J. Tomberlin, ed. Philosophical Perspectives, 4. Atascadero, CA: Ridgeview Publishing Company, 1990.
Block, N. "On a Confusion about the Function of Consciousness." Behavioral and Brain Sciences 18: 227-47, 1995.
Block, N. "Mental Paint and Mental Latex." In E. Villanueva, ed. Perception. Atascadero, CA: Ridgeview, 1996.
Block, N. "The higher order approach to consciousness is defunct." Analysis 71: 419-431, 2011.
Block, N., Flanagan, O. & Guzeldere, G. eds. The Nature of Consciousness. Cambridge, MA: MIT Press, 1997.
Block, N. & Stalnaker, R. "Conceptual Analysis, Dualism, and the Explanatory Gap." Philosophical Review 108: 1-46, 1999.
Botterell, A. "Conceiving what is not there." Journal of Consciousness Studies 8 (8): 21-42, 2001.
Boyd, R. "Materialism without Reductionism: What Physicalism does not entail." In N. Block, ed. Readings in the Philosophy of Psychology, Vol. 1. Cambridge, MA: Harvard University Press, 1980.
Brentano, F. Psychology from an Empirical Standpoint. New York: Humanities, 1874/1973.
Brook, A. Kant and the Mind. New York: Cambridge University Press, 1994.
Brook, A. & Raymont, P. A Unified Theory of Consciousness. Forthcoming.
Byrne, A. "Some like it HOT: Consciousness and Higher-Order Thoughts." Philosophical Studies 86: 103-29, 1997.
Byrne, A. "Intentionalism Defended." Philosophical Review 110: 199-240, 2001.
Byrne, A. "What Phenomenal Consciousness is like." In Gennaro 2004a.
Campbell, N. A Brief Introduction to the Philosophy of Mind. Ontario: Broadview, 2004.
Carruthers, P. "Brute Experience." Journal of Philosophy 86: 258-269, 1989.
Carruthers, P. Phenomenal Consciousness. Cambridge: Cambridge University Press, 2000.
Carruthers, P. "HOP over FOR, HOT Theory." In Gennaro 2004a.
Carruthers, P. Consciousness: Essays from a Higher-Order Perspective. New York: Oxford University Press, 2005.
Carruthers, P. "Meta-cognition in animals: A skeptical look." Mind and Language 23: 58-89, 2008.
Caston, V. "Aristotle on Consciousness." Mind 111: 751-815, 2002.
Chalmers, D.J. "Facing up to the Problem of Consciousness." Journal of Consciousness Studies 2: 200-19, 1995.
Chalmers, D.J. The Conscious Mind. Oxford: Oxford University Press, 1996.
Chalmers, D.J. "What is a Neural Correlate of Consciousness?" In Metzinger 2000.
Chalmers, D.J. Philosophy of Mind: Classical and Contemporary Readings. New York: Oxford University Press, 2002.
Chalmers, D.J. "The Representational Character of Experience." In B. Leiter, ed. The Future for Philosophy. Oxford: Oxford University Press, 2004.
Churchland, P. S. "Consciousness: the Transmutation of a Concept." Pacific Philosophical Quarterly 64: 80-95, 1983.
Churchland, P. S. Neurophilosophy. Cambridge, MA: MIT Press, 1986.
Cleeremans, A. ed. The Unity of Consciousness: Binding, Integration and Dissociation. Oxford: Oxford University Press, 2003.


Crick, F. and Koch, C. "Toward a Neurobiological Theory of Consciousness." Seminars in Neuroscience 2: 263-75, 1990.
Crick, F. H. The Astonishing Hypothesis: The Scientific Search for the Soul. New York: Scribners, 1994.
Cytowic, R. The Man Who Tasted Shapes. Cambridge, MA: MIT Press, 2003.
Dainton, B. Stream of Consciousness. New York: Routledge, 2000.
Dainton, B. The Phenomenal Self. Oxford: Oxford University Press, 2008.
Damasio, A. The Feeling of What Happens: Body and Emotion in the Making of Consciousness. New York: Harcourt, 1999.
Dennett, D. C. "Quining Qualia." In A. Marcel & E. Bisiach, eds. Consciousness in Contemporary Science. New York: Oxford University Press, 1988.
Dennett, D. C. Consciousness Explained. Boston: Little, Brown, and Co., 1991.
Dennett, D. C. Sweet Dreams. Cambridge, MA: MIT Press, 2005.
Dretske, F. Naturalizing the Mind. Cambridge, MA: MIT Press, 1995.
Droege, P. Caging the Beast. Philadelphia & Amsterdam: John Benjamins, 2003.
Edelman, G. The Remembered Present: A Biological Theory of Consciousness. New York: Basic Books, 1989.
Edelman, G. & Tononi, G. "Reentry and the Dynamic Core: Neural Correlates of Conscious Experience." In Metzinger 2000.
Flohr, H. "An Information Processing Theory of Anesthesia." Neuropsychologia 33 (9): 1169-80, 1995.
Fodor, J. "Special Sciences." Synthese 28: 77-115, 1974.
Foster, J. The Immaterial Self: A Defence of the Cartesian Dualist Conception of Mind. London: Routledge, 1996.
Gendler, T. & Hawthorne, J. eds. Conceivability and Possibility. Oxford: Oxford University Press, 2002.
Gennaro, R.J. "Brute Experience and the Higher-Order Thought Theory of Consciousness." Philosophical Papers 22: 51-69, 1993.
Gennaro, R.J. Consciousness and Self-consciousness: A Defense of the Higher-Order Thought Theory of Consciousness. Amsterdam & Philadelphia: John Benjamins, 1996a.
Gennaro, R.J. Mind and Brain: A Dialogue on the Mind-Body Problem. Indianapolis: Hackett Publishing Company, 1996b.
Gennaro, R.J. "Leibniz on Consciousness and Self-Consciousness." In R. Gennaro & C. Huenemann, eds. New Essays on the Rationalists. New York: Oxford University Press, 1999.
Gennaro, R.J. "Jean-Paul Sartre and the HOT Theory of Consciousness." Canadian Journal of Philosophy 32: 293-330, 2002.
Gennaro, R.J. "Higher-Order Thoughts, Animal Consciousness, and Misrepresentation: A Reply to Carruthers and Levine." In Gennaro 2004a.
Gennaro, R.J., ed. Higher-Order Theories of Consciousness: An Anthology. Amsterdam & Philadelphia: John Benjamins, 2004a.
Gennaro, R.J. "The HOT Theory of Consciousness: Between a Rock and a Hard Place?" Journal of Consciousness Studies 12 (2): 3-21, 2005.
Gennaro, R.J. "Between Pure Self-referentialism and the (extrinsic) HOT Theory of Consciousness." In Kriegel and Williford 2006.
Gennaro, R.J. "Animals, consciousness, and I-thoughts." In R. Lurz, ed. Philosophy of Animal Minds. New York: Cambridge University Press, 2009.
Gennaro, R.J. The Consciousness Paradox: Consciousness, Concepts, and Higher-Order Thoughts. Cambridge, MA: MIT Press, 2012.
Goldman, A. "Consciousness, Folk Psychology and Cognitive Science." Consciousness and Cognition 2: 264-82, 1993.
Graham, G. "Recent Work in Philosophical Psychopathology." American Philosophical Quarterly 39: 109-134, 2002.
Graham, G. The Disordered Mind. New York: Routledge, 2010.
Gunther, Y. ed. Essays on Nonconceptual Content. Cambridge, MA: MIT Press, 2003.
Guzeldere, G. "Is Consciousness the Perception of what passes in one's own Mind?" In Metzinger 1995.
Hameroff, S. "Quantum Computation in Brain Microtubules? The Penrose-Hameroff 'Orch OR' Model of Consciousness." Philosophical Transactions of the Royal Society London A 356: 1869-96, 1998.
Hardin, C. Color for Philosophers. Indianapolis: Hackett, 1986.


Harman, G. "The Intrinsic Quality of Experience." In J. Tomberlin, ed. Philosophical Perspectives, 4. Atascadero, CA: Ridgeview Publishing, 1990.
Heidegger, M. Being and Time (Sein und Zeit). Translated by J. Macquarrie and E. Robinson. New York: Harper and Row, 1927/1962.
Hill, C. S. "Imaginability, Conceivability, Possibility, and the Mind-Body Problem." Philosophical Studies 87: 61-85, 1997.
Hill, C. and McLaughlin, B. "There are fewer things in Reality than are dreamt of in Chalmers' Philosophy." Philosophy and Phenomenological Research 59: 445-54, 1998.
Hirstein, W. Brain Fiction. Cambridge, MA: MIT Press, 2005.
Horgan, T. and Tienson, J. "The Intentionality of Phenomenology and the Phenomenology of Intentionality." In Chalmers 2002.
Husserl, E. Ideas: General Introduction to Pure Phenomenology (Ideen zu einer reinen Phänomenologie und phänomenologischen Philosophie). Translated by W. Boyce Gibson. New York: MacMillan, 1913/1931.
Husserl, E. Cartesian Meditations: An Introduction to Phenomenology. Translated by Dorian Cairns. The Hague: M. Nijhoff, 1929/1960.
Jackson, F. "Epiphenomenal Qualia." Philosophical Quarterly 32: 127-136, 1982.
Jackson, F. "What Mary didn't Know." Journal of Philosophy 83: 291-5, 1986.
James, W. The Principles of Psychology. New York: Henry Holt & Company, 1890.
Kant, I. Critique of Pure Reason. Translated by N. Kemp Smith. New York: MacMillan, 1965.
Keenan, J., Gallup, G., and Falk, D. The Face in the Mirror. New York: HarperCollins, 2003.
Kim, J. "The Myth of Non-Reductive Physicalism." Proceedings and Addresses of the American Philosophical Association, 1987.
Kim, J. Supervenience and Mind. Cambridge: Cambridge University Press, 1993.
Kim, J. Mind in a Physical World. Cambridge, MA: MIT Press, 1998.
Kind, A. "What's so Transparent about Transparency?" Philosophical Studies 115: 225-244, 2003.
Kirk, R. Raw Feeling. New York: Oxford University Press, 1994.
Kirk, R. Zombies and Consciousness. New York: Oxford University Press, 2005.
Kitcher, P. Kant's Transcendental Psychology. New York: Oxford University Press, 1990.
Kobes, B. "Telic Higher-Order Thoughts and Moore's Paradox." Philosophical Perspectives 9: 291-312, 1995.
Koch, C. The Quest for Consciousness: A Neurobiological Approach. Englewood, CO: Roberts and Company, 2004.
Kriegel, U. "PANIC Theory and the Prospects for a Representational Theory of Phenomenal Consciousness." Philosophical Psychology 15: 55-64, 2002.
Kriegel, U. "Consciousness, Higher-Order Content, and the Individuation of Vehicles." Synthese 134: 477-504, 2003a.
Kriegel, U. "Consciousness as Intransitive Self-Consciousness: Two Views and an Argument." Canadian Journal of Philosophy 33: 103-132, 2003b.
Kriegel, U. "Consciousness and Self-Consciousness." The Monist 87: 182-205, 2004.
Kriegel, U. "Naturalizing Subjective Character." Philosophy and Phenomenological Research, forthcoming.
Kriegel, U. "The Same Order Monitoring Theory of Consciousness." In Kriegel and Williford 2006.
Kriegel, U. Subjective Consciousness. New York: Oxford University Press, 2009.
Kriegel, U. & Williford, K. eds. Self-Representational Approaches to Consciousness. Cambridge, MA: MIT Press, 2006.
Kripke, S. Naming and Necessity. Cambridge, MA: Harvard University Press, 1972.
Leibniz, G. W. Discourse on Metaphysics. Translated by D. Garber and R. Ariew. Indianapolis: Hackett, 1686/1991.
Leibniz, G. W. The Monadology. Translated by R. Latta. London: Oxford University Press, 1720/1925.
Levine, J. "Materialism and Qualia: The Explanatory Gap." Pacific Philosophical Quarterly 64: 354-361, 1983.
Levine, J. "On Leaving out what it's like." In M. Davies and G. Humphreys, eds. Consciousness: Psychological and Philosophical Essays. Oxford: Blackwell, 1993.
Levine, J. Purple Haze: The Puzzle of Conscious Experience. Cambridge, MA: MIT Press, 2003.


Loar, B. "Phenomenal States." Philosophical Perspectives 4: 81-108, 1990.
Loar, B. "Phenomenal States." In N. Block, O. Flanagan, and G. Guzeldere, eds. The Nature of Consciousness. Cambridge, MA: MIT Press, 1997.
Loar, B. "David Chalmers's The Conscious Mind." Philosophy and Phenomenological Research 59: 465-72, 1999.
Locke, J. An Essay Concerning Human Understanding. Ed. P. Nidditch. Oxford: Clarendon, 1689/1975.
Ludlow, P., Nagasawa, Y., & Stoljar, D. eds. There's Something about Mary. Cambridge, MA: MIT Press, 2004.
Lurz, R. "Neither HOT nor COLD: An Alternative Account of Consciousness." Psyche 9, 2003.
Lurz, R. "Either FOR or HOR: A False Dichotomy." In Gennaro 2004a.
Lycan, W.G. Consciousness and Experience. Cambridge, MA: MIT Press, 1996.
Lycan, W.G. "A Simple Argument for a Higher-Order Representation Theory of Consciousness." Analysis 61: 3-4, 2001.
Lycan, W.G. "The Superiority of HOP to HOT." In Gennaro 2004a.
Macpherson, F. "Colour Inversion Problems for Representationalism." Philosophy and Phenomenological Research 70: 127-52, 2005.
Mandler, G. Mind and Emotion. New York: Wiley, 1975.
Marshall, J. and Zohar, D. The Quantum Self: Human Nature and Consciousness Defined by the New Physics. New York: Morrow, 1990.
McGinn, C. "Can we solve the Mind-Body Problem?" Mind 98: 349-66, 1989.
McGinn, C. The Problem of Consciousness. Oxford: Blackwell, 1991.
McGinn, C. "Consciousness and Space." In Metzinger 1995.
Metzinger, T. ed. Conscious Experience. Paderborn: Ferdinand Schöningh, 1995.
Metzinger, T. ed. Neural Correlates of Consciousness: Empirical and Conceptual Questions. Cambridge, MA: MIT Press, 2000.
Moore, G. E. "The Refutation of Idealism." In G. E. Moore, Philosophical Studies. Totowa, NJ: Littlefield, Adams, and Company, 1903.
Nagel, T. "What is it like to be a Bat?" Philosophical Review 83: 435-456, 1974.
Natsoulas, T. "The Case for Intrinsic Theory I. An Introduction." The Journal of Mind and Behavior 17: 267-286, 1996.
Neander, K. "The Division of Phenomenal Labor: A Problem for Representational Theories of Consciousness." Philosophical Perspectives 12: 411-434, 1998.
Papineau, D. Philosophical Naturalism. Oxford: Blackwell, 1994.
Papineau, D. "The Antipathetic Fallacy and the Boundaries of Consciousness." In Metzinger 1995.
Papineau, D. "Mind the Gap." In J. Tomberlin, ed. Philosophical Perspectives 12. Atascadero, CA: Ridgeview Publishing Company, 1998.
Papineau, D. Thinking about Consciousness. Oxford: Oxford University Press, 2002.
Penrose, R. The Emperor's New Mind: Computers, Minds and the Laws of Physics. Oxford: Oxford University Press, 1989.
Penrose, R. Shadows of the Mind. Oxford: Oxford University Press, 1994.
Perry, J. Knowledge, Possibility, and Consciousness. Cambridge, MA: MIT Press, 2001.
Place, U. T. "Is Consciousness a Brain Process?" British Journal of Psychology 47: 44-50, 1956.
Polger, T. Natural Minds. Cambridge, MA: MIT Press, 2004.
Preston, J. and Bishop, M. eds. Views into the Chinese Room: New Essays on Searle and Artificial Intelligence. New York: Oxford University Press, 2002.
Radden, J. ed. The Philosophy of Psychiatry. New York: Oxford University Press, 2004.
Ramachandran, V.S. A Brief Tour of Human Consciousness. New York: PI Press, 2004.
Ramachandran, V.S. and Blakeslee, S. Phantoms in the Brain. New York: Harper Collins, 1998.
Revonsuo, A. Consciousness: The Science of Subjectivity. New York: Psychology Press, 2010.
Robinson, W.S. Understanding Phenomenal Consciousness. New York: Cambridge University Press, 2004.
Rosenthal, D. M. "Two Concepts of Consciousness." Philosophical Studies 49: 329-59, 1986.
Rosenthal, D. M. "The Independence of Consciousness and Sensory Quality." In E. Villanueva, ed. Consciousness. Atascadero, CA: Ridgeview Publishing, 1991.


Rosenthal, D.M. “State Consciousness and Transitive Consciousness.” In Consciousness and Cognition 2: 355-63, 1993a.

Rosenthal, D. M. "Thinking that one thinks." In M. Davies and G. Humphreys, eds. Consciousness: Psychological and Philosophical Essays. Oxford: Blackwell, 1993b.

Rosenthal, D. M. "A Theory of Consciousness." In N. Block, O. Flanagan, and G. Guzeldere, eds. The Nature of Consciousness. Cambridge, MA: MIT Press, 1997.

Rosenthal, D. M. “Introspection and Self-Interpretation.” In Philosophical Topics 28: 201-33, 2000.

Rosenthal, D. M. “Varieties of Higher-Order Theory.” In Gennaro 2004a.

Rosenthal, D. M. Consciousness and Mind. New York: Oxford University Press, 2005.

Rosenthal, D. M. “Exaggerated Reports: Reply to Block.” In Analysis 71: 431-437, 2011.

Ryle, G. The Concept of Mind. London: Hutchinson and Company, 1949.

Sacks, O. The Man Who Mistook His Wife for a Hat and Other Essays. New York: Harper and Row, 1987.

Sartre, J. P. Being and Nothingness. Trans. Hazel Barnes. New York: Philosophical Library, 1956.

Seager, W. Theories of Consciousness. London: Routledge, 1999.

Seager, W. “A Cold Look at HOT Theory.” In Gennaro 2004a.

Searle, J. “Minds, Brains, and Programs.” In Behavioral and Brain Sciences 3: 417-57, 1980.

Searle, J. Minds, Brains and Science. Cambridge, MA: Harvard University Press, 1984.

Searle, J. The Rediscovery of the Mind. Cambridge, MA: MIT Press, 1992.

Shallice, T. From Neuropsychology to Mental Structure. Cambridge: Cambridge University Press, 1988.

Shear, J. Explaining Consciousness: The Hard Problem. Cambridge, MA: MIT Press, 1997.

Shoemaker, S. “Functionalism and Qualia.” In Philosophical Studies 27: 291-315, 1975.

Shoemaker, S. “Absent Qualia Are Impossible.” In Philosophical Review 90: 581-99, 1981.

Shoemaker, S. “The Inverted Spectrum.” In Journal of Philosophy 79: 357-381, 1982.

Siewert, C. The Significance of Consciousness. Princeton, NJ: Princeton University Press, 1998.

Silberstein, M. “Emergence and the Mind-Body Problem.” In Journal of Consciousness Studies 5: 464-82, 1998.

Silberstein, M. “Converging on Emergence: Consciousness, Causation and Explanation.” In Journal of Consciousness Studies 8: 61-98, 2001.

Skinner, B. F. Science and Human Behavior. New York: MacMillan, 1953.

Smart, J. J. C. “Sensations and Brain Processes.” In Philosophical Review 68: 141-56, 1959.

Smith, D. W. “The Structure of (Self-)Consciousness.” In Topoi 5: 149-56, 1986.

Smith, D. W. Mind World: Essays in Phenomenology and Ontology. Cambridge: Cambridge University Press, 2004.

Stubenberg, L. Consciousness and Qualia. Philadelphia & Amsterdam: John Benjamins Publishers, 1998.

Swinburne, R. The Evolution of the Soul. Oxford: Oxford University Press, 1986.

Thau, M. Consciousness and Cognition. Oxford: Oxford University Press, 2002.

Titchener, E. An Outline of Psychology. New York: Macmillan, 1901.

Turing, A. “Computing Machinery and Intelligence.” In Mind 59: 433-60, 1950.

Tye, M. Ten Problems of Consciousness. Cambridge, MA: MIT Press, 1995.

Tye, M. Consciousness, Color, and Content. Cambridge, MA: MIT Press, 2000.

Tye, M. Consciousness and Persons. Cambridge, MA: MIT Press, 2003.

Van Gulick, R. “Physicalism and the Subjectivity of the Mental.” In Philosophical Topics 13: 51-70, 1985.

Van Gulick, R. “Nonreductive Materialism and Intertheoretical Constraint.” In A. Beckermann, H. Flohr, and J. Kim, eds. Emergence and Reduction. Berlin and New York: De Gruyter, 1992.

Van Gulick, R. “Understanding the Phenomenal Mind: Are We All Just Armadillos?” In M. Davies and G. Humphreys, eds. Consciousness: Psychological and Philosophical Essays. Oxford: Blackwell, 1993.

Van Gulick, R. “What Would Count as Explaining Consciousness?” In Metzinger 1995.

Van Gulick, R. “Inward and Upward: Reflection, Introspection and Self-Awareness.” In Philosophical Topics 28: 275-305, 2000.

Van Gulick, R. “Higher-Order Global States (HOGS): An Alternative Higher-Order Model of Consciousness.” In Gennaro 2004a.

Van Gulick, R. “Mirror Mirror – Is That All?” In Kriegel and Williford 2006.

Velmans, M. and S. Schneider, eds. The Blackwell Companion to Consciousness. Malden, MA: Blackwell, 2007.

Weisberg, J. “Same Old, Same Old: The Same-Order Representation Theory of Consciousness and the Division of Phenomenal Labor.” In Synthese 160: 161-181, 2008.

Weisberg, J. “Misrepresenting Consciousness.” In Philosophical Studies 154: 409-433, 2011a.

Weisberg, J. “Abusing the Notion of What-it’s-like-ness: A Response to Block.” In Analysis 71: 438-443, 2011b.

Weiskrantz, L. Blindsight. Oxford: Clarendon, 1986.

Wilkes, K. V. “Is Consciousness Important?” In British Journal for the Philosophy of Science 35: 223-43, 1984.

Wilkes, K. V. “Yishi, Duo, Us and Consciousness.” In A. Marcel and E. Bisiach, eds. Consciousness in Contemporary Science. Oxford: Oxford University Press, 1988.

Williford, K. “The Self-Representational Structure of Consciousness.” In Kriegel and Williford 2006.

Wundt, W. Outlines of Psychology. Leipzig: W. Engelmann, 1897.

Yablo, S. “Concepts and Consciousness.” In Philosophy and Phenomenological Research 59: 455-63, 1999.

Zelazo, P., M. Moscovitch, and E. Thompson, eds. The Cambridge Handbook of Consciousness. Cambridge: Cambridge University Press, 2007.