
Functionalism

In 1967 Hilary Putnam published a paper with the title “Psychological Predicates.”

1

Consequences:

First, it quickly brought about the demise of type physicalism, in particular the mind-brain identity theory.

Second, it gave birth to functionalism.

Third, it helped to install antireductionism as the received view on the nature of mental properties and other "higher-level" properties of the special sciences.

All this arose out of a single idea: the multiple realizability of mental properties.

2

If the idea of an angel with beliefs, desires, and emotions is a consistent idea, that would show that there is nothing in the idea of mentality itself that precludes purely nonphysical, wholly immaterial realizations of psychological states.

It seems then that we cannot set aside the possibility of immaterial realizations of mentality as a matter of a priori conceptual fact. Ruling out such a possibility requires commitment to a substantive metaphysical thesis, perhaps something like this:

Realization Physicalism: If something x has some mental property M (or is in mental state M) at time t, then x is a material thing and x has M at t in virtue of the fact that x has at t some physical property P that realizes M in x at t.

Realization Physicalism

3

Realization Physicalism: Minds, if they exist, must be embodied.

4

This principle provides for the possibility of multiple realization of mental properties. Mental property M—say, being in pain—may be such that in humans C-fiber activation realizes it, but in other species (say, octopuses and reptiles) the physiological mechanisms that realize pain may be vastly different. Perhaps there might be non-carbon-based or non-protein-based biological organisms with mentality, and we cannot a priori preclude the possibility that nonbiological electromechanical systems, like the "intelligent" robots and androids in science fiction, might be capable of having beliefs, desires, and even sensations. All this suggests an interesting feature of mental concepts: they seem to carry no constraint on the actual physical-biological mechanisms that, in a given system, realize or implement them.

Realization Physicalism

5

Mental Concepts are like Concepts of Artifacts

We can think of pain as specified by the job description "tissue-damage detector."

The concept of an engine is specified by a job description, not a description of mechanisms that can execute the job.

6

A computational view of mentality shows that we must expect mental states to be variably realized. We know that any computational process can be implemented in a huge variety of physically diverse computing machines.

Mental Concepts are like Concepts of Artifacts

7

What these considerations point to, according to some philosophers, is the abstractness or formality of psychological properties in relation to physical or biological properties: Psychological kinds abstract from physical and biological details of organisms so that states that are vastly different from a physicochemical point of view can fall under the same psychological kind, and organisms and systems that are widely diverse biologically and physically can instantiate the same psychological regularities.

Functionalism

8

Conversely, the same physical structure, depending on the way it is causally embedded in a larger system, can subserve different psychological capacities and functions (just as the same computer chip can be used for different computational functions in various subsystems of a computer). After all, most neurons, it has been said, are pretty much alike and largely interchangeable.

Functionalism

9

What is it, then, that binds together all the physically diverse instances of a given mental kind? What do all pains—pains in humans, pains in canines, pains in octopuses, and pains in Martians—have in common in virtue of which they all fall under a single psychological kind, pain?

A Crucial Question

10

The brain-state type physicalist will say this: What all pains have in common that makes them cases of pain is a certain neurobiological property, namely, being an instance of C-fiber excitation (or some such state).

A behaviorist will answer the question: What all pains have in common is a certain behavioral property—or, to put it another way, two organisms are both in pain at a time just in case at that time they exhibit, or are disposed to exhibit, the behavior patterns definitive of pain (e.g., escape behavior, withdrawal behavior, etc.). For the behaviorist, then, a mental kind is a behavioral kind.

The Answers

11

According to functionalism, a mental kind is a functional kind, or a causal-functional kind, since the "function" involved is to fill a certain causal role.

The Answers

12

An organism has the capacity to be in pain just in case it is equipped with a mechanism that detects damage to its tissues, regardless of exactly how this mechanism is physically configured. To work as a tissue-damage detector, the mechanism must causally respond to tissue damage: Ideally, every instance of tissue damage, and nothing else, must activate this mechanism—turn it on—and this must further activate other mechanisms with which it is hooked up, finally resulting in, among other things, behavior that will in normal circumstances spatially separate the damaged part, or the whole organism, from the external cause of the damage (that is, it must trigger escape behavior).

Functionalism

13

Thus, the concept of pain is defined in terms of its function, and the function involved is to serve as a causal intermediary between typical pain inputs (tissue damage, trauma, etc.) and typical pain outputs (winces, groans, escape behavior, etc.). The functionalist will say that this is generally true of all mental kinds: Mental kinds are causal-functional kinds, and what all instances of a given mental kind have in common is the fact that they serve a certain causal role distinctive of that kind. As Armstrong has put it, the concept of a mental state is that of an internal state apt to be caused by certain sensory inputs and apt for causing certain behavioral outputs.
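The causal-role idea just stated can be put in the form of a toy program. This is only an illustrative sketch, and every name in it (`CausalRoleSystem`, `sense`, `behave`) is invented for the example: a system "is in pain" just in case some internal state of it is activated by typical pain inputs and, once active, produces typical pain outputs.

```python
# Toy sketch of pain as a causal-functional role (all names invented for
# illustration). What matters is only the causal profile: tissue damage
# turns the internal state on, and the active state triggers escape behavior.

class CausalRoleSystem:
    def __init__(self):
        self.pain_state = False  # the internal state that fills the pain role

    def sense(self, input_signal):
        # Typical pain input: tissue damage activates the internal state.
        if input_signal == "tissue damage":
            self.pain_state = True

    def behave(self):
        # Typical pain output: the active state triggers escape behavior.
        return "escape behavior" if self.pain_state else "rest"

organism = CausalRoleSystem()
organism.sense("tissue damage")
print(organism.behave())  # -> escape behavior
```

Nothing in the sketch fixes how `pain_state` is physically implemented, which is exactly the functionalist point: any internal state with this causal profile would count.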

Black Box Functionalism
Causal-Theoretical Functionalism

14

Functionalism and Behaviorism: Commonalities and Differences

15

Commonality: both functionalism and behaviorism speak of sensory input and behavioral output—or “stimulus” and “response”—as central to the concept of mentality. In this respect, functionalism is part of a broadly behavioral approach to mentality and can be considered a generalized and more sophisticated version of behaviorism.

16

First, the functionalist, on the one hand, takes mental states to be real internal states of an organism with causal powers; for an organism to be in pain is for it to be in an internal state (e.g., a neurobiological state for humans) that is typically caused by tissue damage and that in turn typically causes winces, groans, and escape behavior. The behaviorist, on the other hand, eschews talk of internal states in connection with mental states, identifying them with actual or possible behavior. Thus, to be in pain, for the behaviorist, is to wince and groan or be disposed to wince and groan but not, as the functionalist would have it, to be in some internal state that causes winces and groans.

Differences:

17

The functionalist takes a "realist" approach to dispositions whereas the behaviorist embraces an "instrumentalist" line.

[Instrumentalist analysis] x is soluble in water = def. if x is immersed in water, x dissolves.

[Realist analysis] x is soluble in water = def. x is in a certain internal state S (that is, has a certain microstructure S) such that when x is immersed in water, S causes x to dissolve.

Differences:

18

The second major difference between functionalism and behaviorism, one that gives the former a substantially greater power, is in the way "input" and "output" are construed for mental states. For the behaviorist, input and output consist entirely of observable physical stimulus conditions and observable behavioral responses. However, the functionalist will include reference to other mental states in the characterization of a given mental state. It is a crucial part of the functionalist conception of a mental state that its typical causes and effects may include other mental states. Thus, for a ham sandwich to cause you to want to eat it, you must believe it to be a ham sandwich; a bad headache can cause you a feeling of distress and a desire to call your doctor.

Differences:

19

In functionalism, what makes a mental event the kind of mental event it is is the way it is causally linked to other mental-event kinds and input/output conditions. Since each of these other mental-event kinds in turn has its identity determined by its causal relations to other mental events and inputs/outputs, the identity of each mental kind depends ultimately on the whole system—its internal structure and the way it is causally linked to the external world via its inputs and outputs. In this sense, functionalism gives us a holistic conception of mentality.

Functionalism

20

Computer Functionalism

On this view, the brain is a digital computer and what we call the "mind" is a digital computer program or set of programs.

Mental states are computational states of the brain.

The mind is to the brain as the program is to the hardware.

21

Strong Artificial Intelligence

Computer Functionalism

22

COMPUTATION AND MENTAL PROCESSES

Related Notions:

Algorithms, Turing machines, Church's thesis, Turing's theorem, the Turing test, levels of description, multiple realizability, and recursive decomposition.

23

Algorithms. An algorithm is a method for solving a problem by going through a precise series of steps. The steps must be finite in number, and if carried out correctly, they guarantee a solution to the problem. For this reason algorithms are also called “effective procedures.” Good examples are the methods used to solve problems in arithmetic, such as addition and subtraction. If you follow the steps exactly, you will get the correct solution.
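A classic example beyond schoolbook addition is Euclid's algorithm for the greatest common divisor: a finite, precise series of steps that, followed exactly, is guaranteed to produce the answer.

```python
def gcd(a, b):
    """Euclid's algorithm: an effective procedure.
    Each pass strictly shrinks b, so the steps are finite
    and termination with the correct answer is guaranteed."""
    while b != 0:
        a, b = b, a % b  # replace (a, b) with (b, a mod b)
    return a

print(gcd(48, 18))  # -> 6
```

The procedure is purely mechanical: no judgment is required at any step, which is what qualifies it as an algorithm in the sense defined above.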

24

Turing Machines. A Turing machine is a device that carries out calculations using only two types of symbols. These are usually thought of as zeros and ones, but any symbols will do. The idea of the Turing machine was invented by Alan Turing, the great British logician and mathematician. The striking feature of the Turing machine is its simplicity: it has an endless tape on which the symbols are written and a head that reads symbols on the tape. The head can move to the left or to the right, and it can erase or print a zero or a one. It does all of these things in accordance with a program, which consists of a set of rules. The rules always have the same form: under condition C, perform act A (C → A). For example, a rule might be of the form: if you are scanning a zero, replace it with a one and move one square to the left.
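The "under condition C, perform act A" rule format can be made concrete in a few lines of code. The simulator and the sample rule table below are invented for illustration: each rule maps a (state, scanned symbol) condition to a (symbol to write, direction to move, next state) act.

```python
# Minimal Turing machine simulator (illustrative sketch).
# A rule maps (state, scanned symbol) -> (write, move, next state),
# i.e., the "under condition C, perform act A" form.

def run_tm(rules, tape, state, head=0, halt_state="halt", max_steps=1000):
    cells = dict(enumerate(tape))        # sparse tape; unwritten cells read as 0
    for _ in range(max_steps):
        if state == halt_state:
            break
        symbol = cells.get(head, 0)
        write, move, state = rules[(state, symbol)]
        cells[head] = write
        head += 1 if move == "R" else -1
    return [cells[i] for i in sorted(cells)]

# Sample program: scan right, erasing ones; halt on the first zero.
rules = {
    ("s", 1): (0, "R", "s"),      # scanning a 1: erase it, move right
    ("s", 0): (0, "R", "halt"),   # scanning a 0: halt
}
print(run_tm(rules, [1, 1, 1, 0], "s"))  # -> [0, 0, 0, 0]
```

Note that the machine itself is general: it will execute any rule table handed to it, which foreshadows the idea of a universal machine discussed below.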

25

Church's Thesis. Due originally to Alonzo Church (and arrived at independently by Turing, so sometimes called the Church-Turing thesis), this thesis states that any problem that has an algorithmic solution can be solved on a Turing machine. Another way to say the same thing is that any algorithm at all can be carried out on a Turing machine: a machine that uses just binary symbols, zeros and ones, is sufficient to carry out any algorithm whatever. This is a very important thesis because it says in mathematical terms that any problem that is computable can be computed on a Turing machine. Any computable function is Turing computable.

26

Turing machines can come in many different kinds, states, and varieties. In my car there are specialized computers for detecting the rate of fuel consumption, for example. But in addition to the idea of these special-purpose computers, or Turing machines, there is the idea of a general-purpose computer, something that can implement any program at all. And Turing, in an important mathematical result known as Turing's Theorem, proved that there is a Universal Turing machine that can simulate the behavior of any other Turing machine. More precisely, Turing proved that there is a Universal Turing machine, UTM, such that for any Turing machine carrying out a specific program, TP, UTM can carry out TP.

The Universal Turing Machine

27

The human brain is a Universal Turing machine.

To get a really adequate scientific account of the mind we need only to discover the Turing machine programs that we are all using when we engage in cognition.

28

The Turing Test. We can side-step all the great debates about the other-minds problem (whether or not there really is any thinking going on in the machine, whether the machine is really intelligent) by simply asking ourselves: Can the machine perform in such a way that an expert cannot distinguish its performance from a human performance? If the machine responds to questions put to it in Chinese as well as a native Chinese speaker, so that other native Chinese speakers could not tell the difference between the machine and a native speaker, then we would have to say that the machine understood Chinese. The Turing test, as you will have noticed, expresses a kind of behaviorism: it says that the behavioral test is conclusive for the presence of mental states.

29

Levels of Description. Any complex system can be described in different ways. Thus, for example, a car engine can be characterized in terms of its molecular structure, in terms of its gross physical shape, in terms of its component parts, etc. It is tempting to describe this variability of descriptive possibilities in terms of the metaphor of “levels,” and this terminology has become generally accepted. We think of the microlevel of molecules as a lower level of description than the level of gross physical structure or physical components, which are higher levels of description. Most of the interest of this distinction is that it applies in a dramatic fashion to computers. At a lower level of description your computer and mine might be quite different. Yours may have a different type of processor than mine, for example. But at a higher level of description they may be implementing exactly the same algorithm. They may be carrying out the same program.

30

Multiple Realizability. The notion of different levels of description already implicitly contains another notion that is crucial to the computational theory of mind: the idea of multiple realizability. The point is that a higher-level feature, such as being the Word program or being a carburetor, may be physically realized in different systems; thus one and the same higher-level feature can be said to be multiply realizable in different lower-level hardwares. Multiple realizability seems to be a natural feature of token identity theories: the different tokens of different types at the lower level may be different forms of realization of some common higher-level mental feature. Just as the same computer program may be implemented in different sorts of hardware and thus is multiply realizable, so the same mental state, such as the belief that it is going to rain, might be implemented in different sorts of hardware and thus also be multiply realizable.
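The idea can be sketched in code as two mechanically unlike realizations of one higher-level kind. Both implementations below are invented for illustration: what makes each one an "adder" is the role it plays (its input-output profile), not the machinery underneath.

```python
# Two different lower-level realizations of one higher-level kind: addition.

def add_arithmetic(a, b):
    # Realization 1: delegate to the built-in arithmetic operation.
    return a + b

def add_counting(a, b):
    # Realization 2: a different mechanism (repeated incrementing by one),
    # yet it fills exactly the same higher-level role.
    total = a
    for _ in range(b):
        total += 1
    return total

# Both realize the same higher-level kind: they agree on every input tried.
print(add_arithmetic(2, 3), add_counting(2, 3))  # -> 5 5
```

At the "lower level" the two procedures are quite different; at the "higher level" of description they are the same function, which is the sense in which one feature is multiply realized.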

31

One and the same system, represented by the line AB, can be realized in different lower-level systems, represented by lines BC, BD, BE, BF, and BG.

32

Recursive Decomposition. Complex tasks can be broken down (decomposed) into simpler tasks by repeated (recursive) application of the same procedures, until all that is left are simple binary operations on two symbols, the zeros and ones. In the early heady days, some people even said that the fact that neurons were either firing or not firing was an indication that the brain was a binary system, just like any other digital computer. Again, the idea of recursive decomposition seemed to give us an important clue to understanding human intelligence. Complex intelligent human tasks are recursively decomposable into simple tasks, and that is how we are so intelligent.
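Even ordinary addition can be recursively decomposed into binary operations in exactly the way the text describes. The sketch below (for nonnegative integers) reduces addition to two bit-level operations, XOR and AND, applied over and over until nothing is left to do.

```python
def add(a, b):
    """Recursively decompose addition into binary operations on bits:
    XOR yields the sum bits, AND shifted left yields the carry bits.
    The recursion bottoms out when no carry remains (nonnegative ints)."""
    if b == 0:
        return a
    return add(a ^ b, (a & b) << 1)

print(add(19, 23))  # -> 42
```

The complex task (addition) has been decomposed, by repeated application of the same step, into nothing but operations on zeros and ones, which is the point of recursive decomposition.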

33

The brain is a digital computer, in all probability a Universal Turing machine. As such it carries out algorithms by implementing programs, and what we call the mind is a program or a set of such programs. To understand human cognitive capacities it is only necessary to discover the programs that human beings are actually implementing when they activate such cognitive capacities as perception, memory, etc.

Because the mental level of description is a program level, we do not need to understand the details of how the brain works in order to understand human cognition. Indeed, because the level of description is at a higher level than neuronal structures, we are not forced to any type-type identity theory of the mind. Rather, mental states are multiply realizable in different sorts of physical structures, which just happen to be implemented in brains but could equally well have been implemented in an indefinite range of computer hardwares. Any hardware implementation will do for the human mind provided only that it is stable enough and rich enough to carry the programs.

Because we are Turing machines, we will be able to understand cognition by reducing complex operations to the ultimately simple operations, the manipulation of zeros and ones. Furthermore, we have a test that will enable us to tell when we have actually duplicated human cognition, the Turing test. The Turing test gives us a conclusive proof of the presence of cognitive capacities. To find out whether or not we have actually invented an intelligent machine we need only apply the Turing test. And we now have a research project; indeed, it is the research project of cognitive science.

Computer Functionalism

34

We try to discover the programs that are implemented in the brain by designing programs for our commercial machines that will pass the Turing test, and then we ask the psychologists to perform experiments on humans to see if they are following the same program as the program on our computer. For example, in one famous experiment involving the memory of numbers, the reaction times of the subjects seemed to vary in the same way as the processing time of a computer. This seemed to a lot of cognitive scientists good evidence that the humans were using the algorithmic procedures of the computer.

Computer Functionalism

35

Objections: Searle's Chinese Room

"Premise 1: It is possible in principle that a Chinese Room does not understand (any) Chinese, but acts perfectly as if it understood (some) Chinese.

Premise 2: If understanding Chinese is a physical property, then it is not possible, not even in principle, that a Chinese Room acts (i.e., functions) perfectly as if it understood Chinese and does not understand Chinese.

Conclusion: Understanding Chinese is not a physical property.

Corollary: That Mao Tse-tung understood Chinese in 1955 is not a physical state of affairs." (ibid, p. 172)

36