
JOAN M. MORRIS

DOYLE PAUL JOHNSON

Thinking Machines and Creativity

INTRODUCTION

Two major developments are currently moving forward on the cutting edge of our knowledge and technology which, taken together, suggest that the dividing line between human beings and machines may be getting somewhat fuzzy. One of these areas is the explosive development of computer programming capable of simulating human thinking processes (some examples are SAM, PAM, PHRAN, PHRED and DENDRAL (McCorduck, 1979; Restak, 1979; Rose, 1984)). The other area is the growing knowledge provided by brain researchers into the neurological processes of the brain (Restak, 1979; Rose, 1984). At the risk of oversimplification, it might be suggested that machines capable of thinking can be built and that the human brain is essentially a thinking machine.

Even if we allow for only a minimal and superficial comparison between computers and brains, the implications are profound when considered in the historical context of the development of science. Throughout most of the history of science, human beings have been thought to share more in common with organic life than with inorganic matter. Within the category of organic life human beings have been thought to be closer to animals than to plants, but both animals and plants have traditionally been sharply distinguished from inorganic mechanical systems. In spite of the prevalence of these traditional distinctions between inorganic matter and organic life, developments in computer technology and brain research have brought about some radical rethinking of the long-established differentiation between human beings and machines.


MECHANICAL VERSUS CREATIVE PROCESSES

Human beings are organisms and exhibit all the complex life processes associated with organic life. In contrast, computers, even the most complex, are machines; they are not living, organic systems. Why, then, should we consider straining our definitions of organic versus mechanical systems to try to establish similarities between them? After all, we know the human body must obey the laws of physics just as machines and all other material objects must; yet, we do not usually belabor the obvious implications following from this. Why should it be different with machines that duplicate human thinking, as opposed to machines which duplicate other human processes (like lifting and carrying, which we regularly delegate to machines)?

Perhaps the challenge lies in the fact that to duplicate the thinking process with a machine is to question the distinctiveness of the human capacity for thought. Humans share with all material objects the necessity for conforming to the laws of gravity, for example. Humans share with almost all organisms the capacity for growing, reproducing and actively adapting to their environment to preserve the life process. But thinking, especially abstract creative thought, is the unique capability of human beings that sets them apart from the rest of nature. Or, at least, this assumption was widely accepted without question before the development of computers as thinking machines. The purpose of this paper is to compare the capabilities and limitations of computers as thinking machines with the capabilities and limitations of the built-in human biocomputer enclosed in our skulls. In undertaking this comparison, we will contrast two different styles or modes of data processing: mechanical versus creative. Mechanical modes of processing data and responding to the environment are not limited to what we usually think of as machines. Indeed, a large part of human behavior follows a mechanical mode of dealing with data about the environment.

Creativity, in contrast, represents a different way of dealing with data from the environment and responding to it. Part of this article will deal with the elusive question of how creativity can be defined. At this point we simply note that creativity is less regular and predictable, and this unpredictability is not due merely to inadequate knowledge on the part of the observer. As a tentative first step in distinguishing mechanical and creative modes of handling data and responding to the environment, we might say that a creative mode involves going beyond or transcending the system's prior programming. In other words, the system's output cannot be predicted reliably on the basis of knowledge of the input. To help see the importance of the difference between the mechanical and the creative types of responses, let us explore each of these in detail.

MACHINES AND MAN

It would be difficult to imagine how fast our lives would screech to a halt if suddenly all our machines ceased to function. Some of us would be stranded on long stretches of highway without transportation or even a means of notifying anyone that help was needed. But that would not matter anyway, because help would have no way of getting there even if notified. Others would be stranded in elevators, which would crash to the ground, or possibly on middle floors of high-rise buildings whose cosmetic windows could not replenish the diminishing air supply. Hospitals would be in disaster since even back-up systems would no longer operate. Planes, radios and water processing plants would stop. Survival would, for many, be a matter of seconds or minutes which would not be measured by clocks. The situation would be devastating.

This dependence on machines is as necessary as it is extensive. Machine-making and using are abilities central to human existence. While other animals are equipped with their own particular biological tools for survival, man is equipped with a sophisticated ability to design, produce and use tools. To suddenly remove the products of this ability would be as debilitating to human animals as would removal of fang and claw from predatory animals. The making and using of tools provide the survival advantage for humans which has developed in other animals in more task-specific forms (such as specialized body parts, instinctive behavior patterns, etc.).

While we may all agree that tool and machine making are central to human activity, there is less agreement about the machine-like character of man himself. Recent research on human brain functioning has emphasized the machine-like nature of the electrochemical processes that underlie all forms of human thought and feeling.

Relatively little is known about exactly what goes on in the brain, but we do know there are at least 100 billion 'neurons' or nerve cells which are linked by microscopic gaps known as synapses. When stimulated, neurons send chemical neurotransmitters across the synapses at tremendous speeds. "This is what thinking is; the rapid firing of electrochemical transmitters across infinitesimal gaps inside the skull" (Rose, 1984).

ABSTRACT MACHINES

According to Webster, a machine is: "a constructed thing whether material or immaterial," or "an assemblage of parts that transmit forces, motion, and energy one to another in a predetermined manner" (1984). We usually think of machines as concrete objects which can be seen and touched, as objects having gears or pulleys or microchips. But we also use other machines, abstract ones, and this abstract machinery is as much a part of our human experience as are the concrete ones. These machines are used in more subjective ways to process information according to previously defined categories and to make decisions. They include all sorts of systematic constructs. We reference machines of this sort when we speak of "political machines," or the "machinery of government." But there are many others we do not usually regard as machines, which nevertheless involve machine-like processes of handling information and responding to the environment. These abstract machines are also vital to our human existence. They help us solve problems as routine as tying our shoes and as complex as neurosurgery.

Abstract machines operate in ways that are similar to concrete ones and can be defined within the same terms. That is, they "transmit forces" (or manipulate data) in a "predetermined manner." To the extent that our environment is predictable, they allow us to function in it according to established patterns. The use of these abstract machines makes it possible for us to proceed without the constant necessity of reinventing the wheel.

The more complex a society becomes, the more of these machines are necessary. With increased social complexity, the variety individuals confront in their environment increases accordingly. To deal with this variety it is necessary to categorize experience and develop machine-like processes for manipulating the different categories. The number of categories is far greater than any one person can master. But we assume that each person nevertheless can deal with more types of social situations than the typical person in a less complex society. As concrete technological machinery becomes more prevalent and more complex, so must abstract social machinery develop to provide ways of coping with a level of technology that is beyond any one person's comprehension. Even new technological advances depend upon the work of numerous highly trained specialists following specific machine-like routines.

In his last book, "The Origins of Knowledge and Imagination" (1978), Jacob Bronowski discussed machines and machine-like producers of answers. He gave the example of Alan Turing, a British mathematician, who mechanized a mathematics problem. Turing developed a logical system for solving mathematics problems of a particular sort which came to be called a "Turing machine." It was a machine in the sense that it stamped out solutions systematically as a result of the machine-like process he developed. In that sense, E = mc² is also a "machine." Determining the value of "E" depends on a precisely defined mathematical computation that does not allow room for creative deviation.

Machines of this type are so prevalent that we use them without conscious recognition of their machine-like characteristics. They entail the use of a formal system of logic, with formal symbols and formal rules of manipulation. They operate according to algorithms (an algorithm being a procedure or list of instructions for solving a problem). Interestingly, it was with the invention of the "Turing machine" that some "problems could finally be shown to be unsolvable algorithmically" (McCorduck, 1979). These "machines" may be rather simple or very complex, but what they all have in common is their predictability and consistency. As is the case with concrete machinery, consistency and predictability are key design features. This is necessarily so. The repetition of a specific action in a predictable manner is what makes a machine effective. Machines which fail to meet these criteria are discarded for more reliable models.
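This predictability can be made concrete with a short sketch of our own devising (a toy machine in Python, not a reconstruction of Turing's formal definition): its transition table fixes in advance everything the machine will ever do, so the same tape always produces the same result.

```python
# A minimal deterministic Turing-style machine. The transition table maps
# (state, symbol) -> (next state, symbol to write, head move), fixing in
# advance every action the machine can ever take.

def run_machine(tape, rules, state="start", blank="_"):
    cells, head = list(tape), 0
    while state != "halt":
        if head == len(cells):          # extend the tape with blanks as needed
            cells.append(blank)
        state, write, move = rules[(state, cells[head])]
        cells[head] = write
        head += 1 if move == "R" else -1
    return "".join(cells).strip(blank)

# Example table: flip every bit, halt at the first blank.
flip = {
    ("start", "0"): ("start", "1", "R"),
    ("start", "1"): ("start", "0", "R"),
    ("start", "_"): ("halt", "_", "R"),
}

print(run_machine("10110", flip))   # always "01001" -- never anything else
```

Given identical input, the output never varies; by the criterion above, that consistency is exactly what makes it a machine rather than a creative system.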

To distinguish between machine-like and creative responses we can use what is known as the "brick test" for measuring creativity. The machine-like use of a brick is as a building block. Bricklayers use a standard repetitive recipe in adding one brick to another to build a wall. But there are a variety of other uses of bricks. In the brick test, subjects are asked to find as many uses for a brick as possible within a given period of time. Responses are evaluated both quantitatively and qualitatively. That is, while the number of responses is important, the innovative quality of the responses is more important. The qualitative aspect of this test is concerned with whether the subject tends to see the brick as a specific object having certain established potentialities, or whether, instead, he/she identifies characteristics of the brick which are transferred to uses not usually considered for a brick.
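The scoring logic just described can itself be rendered as a sketch (the weighting below is our own illustration, not a published scoring key for the test): fluency is simply the count of responses, while originality can be approximated by rewarding uses that few other respondents name.

```python
from collections import Counter

def score_brick_test(subject_responses, all_responses):
    # Quantitative score: sheer number of uses produced.
    fluency = len(subject_responses)
    # Qualitative score (illustrative only): a use named by few respondents
    # in the whole pool counts as more original than a common one.
    frequency = Counter(all_responses)
    originality = sum(1.0 / frequency[use] for use in subject_responses)
    return fluency, originality

pool = ["wall", "wall", "wall", "wall", "doorstop", "nutcracker", "anchor"]
print(score_brick_test(["wall", "nutcracker", "anchor"], pool))
# -> (3, 2.25): three uses, two of them rare in the pool
```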

If we go to the dictionary to find the definition of "brick," we will find several rather limited statements. The main idea is that a brick is a building block of baked clay. The dictionary gives us this definition as often as we have the inclination to consult it. The dictionary will not tell us, as the taker of the creativity test might, that a brick may also be a nutcracker, an anchor, a bookend or have any number of other uses. Dictionaries are not designed for that kind of flexibility.


In this way dictionaries are like machines. To be of any use, they must be consistent. Machines are built to produce specific "products." With the dictionary, the product is definitions. The purpose of dictionary definitions is to enforce the consistency of language, which it does by providing information on the specific denotative meanings of words. Of course the dictionary could be equipped with more definitions, with more possibilities for flexibility. But - and this is the crucial point - it could not produce a new combination; it could not innovate. One reason for this limitation is that the dictionary is not designed to be sensitive to the almost limitless variety of different contexts in which words may be used or the limitless variety of ways in which they can be combined. Even though the dictionary may offer alternative definitions, the rules it offers for selecting one definition over another are extremely limited.
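A toy version makes the point concrete (a sketch with invented entries): a lookup table can return only what was stored in it, and no operation defined on it can generate a use that was never entered.

```python
# A dictionary as a machine: stored entries in, stored entries out.
definitions = {
    "brick": "a building block of baked clay",
    "anchor": "a device for holding a vessel in place",
}

def define(word):
    # Deterministic lookup: the same query always yields the same answer.
    return definitions.get(word, "no entry")

print(define("brick"))        # "a building block of baked clay", every time
print(define("nutcracker"))   # "no entry" -- nothing here can innovate;
                              # "a brick could crack nuts" is simply not
                              # a producible output of this machine
```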

THE MACHINERY OF LANGUAGE

In seeking a better understanding of human thinking processes, one of the best places to look is at language. Human language is itself a machine-like process. It involves applying labels and categories to objects in our experience. This necessarily involves reducing these experiences so we are able to communicate. In fact, even before communication, the process of perception itself involves a filtering-out process, a selection of form from the vast array of stimuli present in an environment. A further reduction takes place when we select those aspects of our total experience about which to communicate (see Figure 1). With perception and communication, it is necessary to sort, to be selective, to develop standardized machine-like responses to predetermined categories. Culture is an important part of this selection process; it is, in turn, reinforced by sharing perceptions developed through communication.

Language helps order our perceptions by providing a set of concepts, labels and categories for our experiences. Language makes possible the conversion of stimulus to perception, of sensation to symbolization. The move from the concrete to the abstract is an obvious necessity for thinking and language. At this transition the necessary reductionism takes place. Bronowski addressed this issue, saying: "We cannot extricate ourselves from our own finiteness. And therefore, we do this decoding by a highly imaginative, creative piece of guesswork. But we finish with something which is only a gigantic metaphor for that part of the universe which we are decoding" (1978).

FIGURE 1 Personal and cultural limitations on perception.

U = the universe
A = all possible human perception
B = perception likely within a given culture sharing a common language
C, D, E = individuals' perceptions as affected by personal experiences or history

Philosophers have grappled with this problem of the relation between the concrete and the abstract through the ages. The problem is that we must deal with reality indirectly, through the filtering processes of our perception and the coding processes of our language. Our language is entirely abstract, as opposed to the concrete world in which we live.

Jean-Paul Sartre addressed this issue in his novel, "Nausea." Roquentin (the main character) sat beside an ancient chestnut tree (symbolically the tree of knowledge and wisdom) whose roots were protruding and gnarled. He saw character in those roots and wanted to record what he saw. He was aware, as he searched his vocabulary, that words could only limit his expression. The tree would not be consumed by words; he was choked on them and nauseated by the attempt. No matter what words he attempted to use as expressions for what he saw, there was no way for him to use language to express the reality of that tree (Danto, 1975).


In this example, Sartre was concerned with the generic quality of language. One might refer to this as the deceptive quality of language as it represents reality; or rather, that language cannot adequately represent reality. We may prefer to think that language approximates reality, but does it? Bronowski pointed out the deception (1965) by referring to the concepts we use to order our realities as "imaginative creations." He warned that we sometimes lose sight of this and assume we have discovered concepts rather than invented them.

Language is another man-made machine for dealing with our environment. It functions the way fangs or claws function for other animals in coping with the environment. Language, like any machine, must conform, at some minimum level, to standards of consistency. Without that consistency, language could not function but, with it, we must give up certain kinds of freedom. Grudin¹ (1985) discusses this problem of freedom by stating that true freedom includes surrendering certain freedoms, of placing restraints and limits. We are presented with a paradox; true freedom involves form but form, in some ways, limits our freedom. Language provides the form for perception and communication and also influences the way we think. We must not overlook the reductionist nature of language. It is entirely symbolic and cannot correspond with nature directly or reflect it completely. Human language is a mediating form for dealing with the external world.

ARTIFICIAL INTELLIGENCE (AI)

Since the time of Turing's machine, there has been intensive effort to produce a machine which can operate with the flexibility necessary for innovation. Vast research into artificial intelligence has sought to design a "thinking" machine. These efforts have been aimed primarily at producing machines that can mimic human thinking processes, but they have had to face as a primary obstacle the fact that these processes themselves are not well understood (Dreyfus, 1972). "We are, in short, still largely in the dark. Our position has been compared to that of an electrical engineer who has never seen a television before and is expected to explain its inner workings, not by going inside and tearing it apart but by studying the signals it receives and what comes on the screen" (Rose, 1984).

Given the view of thinking as an electrochemical, machine-like process, it follows that whatever goes on there must be quantifiable. Schank, a well-known figure in AI, puts it thus: "... one way or another, that stuff is represented in the brain, isn't it? It's represented by some discrete entity - chemicals or electrical states of neurons or whatever. And so all we're saying is that there must be some way to represent it, period" (Rose, 1984). Schank also states: "What we're interested in is building algorithms. What are the human algorithms? One way or another I'd like to find out what they are" (Rose, 1984). The problem, then, can be seen as one of quantifying a discrete, machine-like process. This type of problem is solved with regularity in science. This one, even if very complex, should be solvable as well.

Theoretically, this view seems feasible. To use a methodological term from sociology, we could imagine an "ideal type" computer which had been programmed with all it would need to mimic the human thinking process. Programming of this sort would include a vast data base of information about the environment, fluency in the user's natural language, a visual apparatus, and perhaps other channels for receiving input from the environment. It would require programming for goal orientation and for interpretation of ambiguity in terms of alternative probabilities. In short, it would include many thousands of IF statements and subroutines. Even if possible, the enormousness of the task is overwhelming.
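A fragment of such a program might look like the following sketch (the rules, categories and probabilities are invented for illustration): every situation the designers anticipated gets its own IF statement, and anything they did not anticipate falls through to a default.

```python
# A fragment of the hypothetical "ideal type" responder: one rule per
# anticipated situation, with fixed probabilities standing in for the
# "interpretation of ambiguity" described in the text.

def respond(percept):
    if percept == "greeting":
        return "return the greeting"
    if percept == "question about the weather":
        return "report the forecast"
    if percept == "ambiguous remark":
        interpretations = {"joke": 0.6, "complaint": 0.4}
        return "treat as " + max(interpretations, key=interpretations.get)
    # ... many thousands more IF statements and subroutines would follow ...
    return "no applicable rule"   # everything unanticipated lands here

print(respond("greeting"))            # handled: a rule exists for it
print(respond("sarcastic apology"))   # unhandled: falls to the default
```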

Building such a machine is contingent upon being able to observe and code nature and experience objectively and accurately. But can we assume this is possible in view of the previously noted lack of complete correspondence between reality, perceptions of reality, and symbolic communication about reality? What is required may be much more than finally writing enough lines of computer instruction, with elaborate subroutines, sub-subroutines and so on. Even a very sophisticated computer, capable of handling thousands or millions of discrete operations at lightning speed, may not be enough. This issue may not be a quantitative one at all but a qualitative one. If we recognize the limitations involved in perception, cognition and communication, can we go ahead attempting to decode and code nature and replicate these codes in machines as if those limitations did not exist?

MECHANISTIC VERSUS NATURAL MODES OF DATA PROCESSING

What is the difference between sophisticated machines such as computers and the way our brains operate? Although an adequate answer is beyond the scope of this paper, some crucial distinctions can be made from our understanding of the differences between mechanistic and creative methods of responding to the environment.

One important distinction is that machines, both concrete and abstract, leave out or ignore stimuli they are not designed to manipulate. Normally, this 'leaving out' process allows the machine to attend to the specific task for which it was designed. While this process usually contributes to greater efficiency in machines, it can interfere with a machine's operation in unanticipated situations. For concrete machines, environmental obstacles the machine is not designed to confront may simply stop it in its tracks. An example of this might be an ordinary passenger car which gets stuck in sand while a 4-wheel drive vehicle is able to keep going.

Abstract machines have the same types of limitations. For example, alternative dictionary definitions may leave us puzzled about what a particular utterance may really mean since context, tone of voice and the social relation of speaker and hearer are not addressed. All our everyday life machines, or mechanistic modes of responding to the environment, are designed to help us cope with particular predefined stimuli in specific ways. This is true for concrete machines such as computers and for such abstract machinery as organizational rules and procedures.

In contrast, we might ask how our brains operate. First, if we consider the type of input our brains receive we must recognize that it includes many different forms of stimuli other than what is relevant to the task at hand. To be sure, human beings do not have complete access to all components of their environment. We have already noted the filtering process which operates at the level of perception. But our brain does record a great deal of information that for the moment seems irrelevant. Much of this data is probably perceived at some less-than-conscious level and may be stored in forms not consciously recognized. Some might suggest that this unconsciously stored data is one source of the creative answers to future puzzles or problems we encounter.

At every waking moment, our senses are providing us with many different types of data about our environment. Some of this data we attend to because of its relevance to the task at hand. Our approach may well be a machine-like response in which we are trained to respond to particular sounds, sights or touches in a particular way. At the same time we will also have some degree of awareness of the context and of the other stimuli it provides. In the terminology of cybernetics, the appropriate input is received from a background which also includes a great deal of "noise." At some point in the future, however, some of this "noise" may emerge into consciousness and be scrutinized for information it may contain that is relevant to a new task. Indeed, most advertisers realize that their radio and television audiences will not rush right out to buy their products. Their hope, however, is that from the background noise of radio and television advertisements people will remember, at some level of consciousness, the appropriate messages when they are in the process of deciding between products.

What happens to input once it is stored, at various levels, in our brains? Unlike machines, brains appear constantly to be involved in internal manipulation of data even without any apparent purpose. Often, when we seem to be thinking about nothing in particular, we catch ourselves daydreaming. If we were to analyze our daydreams we would probably be struck by the way in which ideas, images, feelings, memories, hopes and plans seem to emerge into consciousness without effort and fade away without any apparent pattern or reason. The specific images and ideas of daydreams vary greatly and seem to appear somewhat at random. Apparently, even in our sleep, this constant processing of information goes on (Evans, 1983).

These episodes of daydreaming (or night dreaming) do not occur all the time. During the day, when we are involved in projects requiring intellectual concentration, we focus our attention and our thoughts on the matter at hand, perhaps concentrating on finding the solution to some puzzle or taking care not to let our attention slip. Even in these times many people must fight to keep their minds from wandering into random daydreams.

Could a computer be programmed to engage in this highly flexible sort of internal behavior of randomly manipulating data? Perhaps it could. But what would be the outcome?
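Writing a program that shuffles its stored data at random is in fact easy, as the sketch below suggests (the "memory" contents are invented for illustration): the difficulty lies not in generating combinations but in telling which of them mean anything.

```python
import random

# Naive "daydreaming": recombine stored fragments at random.
memory = ["a brick", "an anchor", "a melody", "a deadline", "the smell of rain"]

def daydream(steps=5):
    for _ in range(steps):
        a, b = random.sample(memory, 2)   # pick two unrelated fragments
        print(f"what if {a} were connected to {b}?")

daydream()
# The combinations pour out effortlessly; nothing in the program can
# distinguish a fertile juxtaposition from a meaningless one.
```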

With humans, the productive output of most daydreaming probably does not amount to very much. However, between the process of idle daydreaming and going through a mechanical mode of reasoning relevant to accomplishing some specific intellectual task, there is the process of creative thinking. The literature on creativity suggests that the strictly logical, problem-solving way of thinking does not guarantee immediate creative solutions to problems (Ainsworth-Land, 1982; Eiseley, 1962; Guilford, 1982; May, 1975). Instead, creative solutions often come "out of the blue," when the conscious mind seems to be at least partially disengaged for the time being. It is as though the subconscious mind (or perhaps the right hemisphere) has been working to find a solution at some level other than that of conscious awareness.

It is possible that this subconscious process works most effectively if it is prepared (or programmed) by a prior high level of conscious concentration. What seems to happen is that while the creative solution may not be found during periods of intense concentration, the subconscious level is programmed by the process. It continues searching for the solution after the period of intense concentration has been suspended.

If a computer could be programmed to engage in a comparable process of quasi-random manipulation of internal data, how would it recognize when a solution to a puzzle is discovered? Presumably, there would have to be some type of recognition pattern built into the computer, some goal description, such that the computer could attempt constantly to match the results of its search process with the previously programmed desired output. Then, when the process produced a workable solution, the computer would have to be designed to recognize it as such. This recognition would require very specific programming, perhaps something just short of the solution itself. If we develop a computer program with specific enough instructions to "know" what it is looking for, there would be no need for the search process in the first place.
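The circularity is easy to exhibit in a deliberately simplified sketch (the candidate ideas and the goal test are invented): the recognizer below identifies the "creative" answer only because that answer was written into it, which makes the random search pointless.

```python
import random

ideas = ["build a wall", "use the brick as a paperweight",
         "use the brick as a nutcracker", "prop open the door"]

def is_solution(candidate):
    # To recognize the workable answer, the test must already contain it.
    return candidate == "use the brick as a nutcracker"

guess = random.choice(ideas)
while not is_solution(guess):         # quasi-random internal search
    guess = random.choice(ideas)
print("found:", guess)

# The search "succeeds" only because is_solution() encodes the answer in
# advance: programming specific enough to recognize the solution makes
# the search itself superfluous.
```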

Another way to say this is to remind ourselves that computers can only do what they have been programmed to do in terms of explicit machine-like processes. In contrast, humans are much more flexible. They are able to go beyond the machine-like manipulation of stimuli to create meaningful new combinations, some of which involve creative solutions to problems or even creative ways of posing new problems. This requires the ability to distinguish meaningful from meaningless combinations of data or hypothetical scenarios at a more general, or more abstract, level than simply recognizing the solution to an explicit, previously defined problem.

It is beyond our purpose to try to analyze this distinctive human ability to recognize or create meaning. Such an analysis would necessarily have to examine the evolutionary history of our species and how our brains developed. It would have to recognize the important human survival advantages of flexibility. It would have to explore the "deep" structures of our minds (or brains) and the way in which multiple goals are programmed at many different levels, including but not limited to the level of conscious awareness.

The problem with creating a machine capable of true artificial intelligence may be the inability to program into the machine the capability of making the leap from its existing program to a recognition of new meaning (or, in other words, recognizing that bricks can be used for purposes other than building walls).

"Representing meaning, most AI researchers now feel, isthe key to computer intelligence... .It means defining andorganizing the vast amount of knowledge we humans have atour disposal - both the complex, sophisticated knowledge wegenerally associate with intelligence and the ordinary, every­day sort of knowledge we call 'common sense'" (Rose, 1984).

Attempts to program computers with human languages have not solved the problem. Roger Schank points out the fallacy of the "dictionary/grammar book" approach taken in early attempts at getting computers to use natural languages. He states: "What people know that enables them to speak and understand seems to be intimately involved with the notions of meaning rather than form" (1984). Before we can program a computer to understand, we must discover the "basic conceptual elements" people use for understanding sentences (1984).

This is not a simple task. As mentioned earlier, languages are themselves machine-like and reductionistic. In fact, this attempt may be seen as the supreme mechanization, the ultimate reductionism. We are attempting to recreate in a machine what we understand to be the essence of human thinking. However, since we do not really understand the total thinking process, we cannot include all aspects of it in the machine's design. What we would include are probably the more mechanistic modes of human thought of which we are aware. Obviously, we could not include those dimensions of our cognition of which we are unaware. We would have to assume that what we leave out is unimportant.

But can we safely make that assumption? Perhaps we forget that while we, as humans, develop machines and use them and leave out all sorts of information about nature, we are part of nature. We can never really separate ourselves from it. In most cases, it is unnecessary for machines to have access to the omitted information referred to above, but, by contrast, for humans it is often essential. Our evolutionary development over millions of years has involved an intimate association with and adaptation to the rest of the natural world. This adaptation involves many levels of relationship other than conscious awareness and machine-like problem solving. If, in building our perceptual frameworks, our "gigantic metaphors" (to use Bronowski's term), we leave out pertinent information about the environment, we can be reminded of it by abrupt and unexpected consequences.


In contrast, computers are complicated tools deliberately manufactured by human beings for specific purposes. They have no relationship with nature except as designed by their human creators. There is no subconscious processing of data and no symbiotic relationship with nature whereby a life process is sustained. Machines have no volition of their own and no intrinsic strategy for establishing hierarchies of alternative goals for the purpose of their own survival or pleasure. They do not transcend the programs built into them by their human creators. Regardless of how complicated they may be, they simply perform the machine-like operations they have been programmed to perform.

CONCLUSION

Mechanization has indeed changed the world. It has changed patterns of labor, perceptions of time and space and expectations of efficiency. We have become highly dependent on the speed and standardization of a mechanized world. For many jobs, machines are better than people. They are faster, cheaper and more consistent. The future will surely involve an increasing involvement of machines in all aspects of our lives.

But let us not forget that machines are man-made tools. As the efficiency of machines becomes more and more a part of our lives, let us hope that the time never comes when human thinking will be "cleaned up" and become more machine-like. While machine-like processes occur in our brains which are necessary for thought to take place, the world of thought and meaning is an emergent reality that cannot be reduced to mechanical or neurochemical processes readily duplicated in machines, regardless of their complexity and speed. Even when human thought follows machine-like patterns, the results are less standardized and often unpredictable. This unpredictability, coupled with an ability to recognize meaning, is what separates human thinking from machine "thinking" and may very well be the key to creative synthesis.

FOOTNOTES

¹ Grudin is a Professor of English at the University of Oregon at Eugene.

REFERENCES

AINSWORTH-LAND, V. Imaging and creativity: an integrating perspective. The Journal of Creative Behavior, 1982, 16(1), 5-27.

BRONOWSKI, J. The identity of man. Garden City, NY: Natural History Press, 1965.

BRONOWSKI, J. The origins of knowledge and imagination. New Haven, CT: Yale University Press, 1978.

DANTO, A. C. Jean-Paul Sartre. NYC: Viking Press, 1975.

DREYFUS, H. What computers can't do. NYC: Harper & Row, 1972.


EISELEY, L. The mind as nature. NYC: Harper & Row, 1962.

EVANS, C. Landscapes of the night - how and why we dream. NYC: Viking Press, 1983.

GUILFORD, J. P. Is some creative thinking irrational? The Journal of Creative Behavior, 1982, 16(3), 151-154.

GRUDIN, R. Freedom. A paper given at the Second Annual Leisure Conference, St. Petersburg, FL, 1985.

MAY, R. The courage to create. NYC: Norton, 1975.

McCORDUCK, P. Machines who think. San Francisco: Freeman, 1979.

RESTAK, R. The brain: the last frontier. Garden City, NY: Doubleday, 1979.

ROSE, F. Into the heart of the mind. NYC: Harper & Row, 1984.

ROSENBLUETH, A. Mind and brain. Cambridge, MA: MIT Press, 1970.

SCHANK, R. C. The cognitive computer. Reading, MA: Addison-Wesley, 1984.

TURING, A. Computing machinery and intelligence. In Rader, M., The enduring questions. NYC: Holt, Rinehart & Winston, 1980.

Webster's Ninth New Collegiate Dictionary. Springfield, MA, 1984.

Joan M. Morris. Address: University of South Florida, Department of Sociology, Tampa, Florida 33620.

Doyle Paul Johnson, Associate Professor. Address: University of South Florida, Department of Sociology, Tampa, Florida 33620.