Corporations as Intentional Systems

ABSTRACT. The theory of corporations as moral persons was first advanced by Peter French some fifteen years ago. French persuasively argued that corporations, as persons, have moral responsibility in pretty much the same way that most human beings are said to have moral responsibility. One of the crucial features of French's argument has been his reliance on the idea that corporations are intentional systems, that they have beliefs and desires just as humans do. But this feature of French's thought has been left largely undeveloped. Applying some philosophical ideas of Daniel Dennett, this article provides support for French's contention that corporations are intentional actors by analyzing what is meant by the term intentional system, and showing why corporations should be thought of as, in many important ways, indistinguishable from humans.

The theory of corporations as moral persons was first advanced by Peter French some fifteen years ago. French persuasively argued that corporations, as persons, have moral responsibility in pretty much the same way that most human beings are said to have moral responsibility (1979, 1984). French's argument for corporations as moral persons has been attacked in numerous different ways over the years, but I think that there are avenues of defense for French's position that have not yet been fully explored. This article argues that French gets to the right conclusion, that corporations are moral persons, and offers support from heretofore untapped sources. In my argument I make liberal use of the work of Daniel Dennett, an innovative, if controversial, philosopher of mind. Dennett has not been used to support any theories with socio-political ramifications, but I see Dennett as providing crucial support for French's claims.

    I. Corporations and intentions

French makes many arguments in support of his theory of corporations as moral persons, but perhaps the most crucial of these arguments needs more elucidation than it has so far received. This argument develops out of the claim that corporations are intentional systems: coherent actors that have intentions, beliefs, and desires just as do human beings. As French writes,

[t]o be the subject of an ascription of moral responsibility, to be a party in responsibility relationships, hence to be a moral person, the subject must be at minimum an intentional actor. If corporations are moral persons they will evidence a noneliminatable intentionality with regard to the things they do (1984, p. 38).

In other words, for French, the actions of a corporation are not reducible to a description of what human actors do on behalf of the corporation. Corporations have personalities, tendencies, blind spots, character flaws, character strengths, exceptional abilities, misconceptions, and dreams. These are attributes of the corporation and not simply a shorthand way of summing up the aggregation of characteristics of its employees. French makes this point clearly when he says:

For a corporation to be treated as a moral person, it must be the case that some events are describable in a way that makes certain sentences true: sentences that say that some of the things a corporation does were intended by the corporation itself. That is not accomplished if attributing intentions to a corporation is only a shorthand way of attributing intentions to the biological persons who comprise, e.g., its board of directors. If that were to turn out to be the case, then on metaphysical if not logical grounds, there would be no real way to distinguish between corporations and crowds. I shall argue, however, that a Corporation's Internal Decision Structure (its CID Structure) provides the requisite redescription device that licenses the predication of corporate intentionality (1984, p. 39).

Corporations as Intentional Systems
William G. Weaver

Journal of Business Ethics 17: 87–97, 1998.
© 1998 Kluwer Academic Publishers. Printed in the Netherlands.

William G. Weaver is Assistant Professor of Political Science at the University of Texas at El Paso.

French's argument that corporations are intentional actors has been subjected to several varieties of criticism. The first form of criticism, usually more implied than argued, holds that we are misled into thinking that corporations are moral persons simply because of shorthand references in our language (Garrett, Pfeiffer). Here the complaint is a quasi-Wittgensteinian one: that we have been seduced by our grammar, that we improperly ascribe a subjectivity to corporations simply because we use them as types of subject-actors in our language. We create a metaphysics out of an accident of metaphor.

A second type of criticism attacks French's distinction between conglomerates and aggregates (Donaldson, 1982; Pfeiffer). French carefully devises criteria to distinguish between collections of individuals that may be ascribed moral responsibility and those that cannot. He says that conglomerates, or human collectivities eligible for moral personhood, must have: (1) an internal decision structure; (2) enforced standards of conduct; and (3) defined roles by which power is wielded over others. Lynch mobs, riots, etc., are made up of human actors, and may be said to be human collectivities, but they hardly qualify as candidates for moral agents. But it seems to me that French puts too much stock in these criteria, and that they can be either supplemented or supplanted by more effective devices of evaluation. These supplemental evaluative devices will be explained in section III.

Patricia Werhane, in an effort to split the difference between French and his critics, effectively argues that corporations do not act and therefore cannot be moral persons (1985, 1988). Nonetheless, Werhane believes that because "collections of these individual actions on behalf of a corporation can create anonymous policies and practices no longer traceable to individuals, policies and practices which, in turn, generate corporate activities, I claim that corporations are collective secondary moral agents" (1988). Werhane seems to deny that corporations are intentional systems, a position counter to that held by French. Nevertheless, some critics go on to lump Werhane and French together, believing that they both reach the same flawed position from two different directions. As Jan Edward Garrett writes,

Both French and Werhane seem to locate the unreassignable portion of corporate moral responsibility with the corporate practices or policies as such. Werhane has interpreted French as arguing that because policies and practices that are the source of corporate action are themselves products of corporate intentional activities, the actions that result are not solely distributable to individuals (p. 539).

And Garrett goes on to assert that "[t]he critic of UCMR [Unredistributable Corporate Moral Responsibility] need only insist that the individual moral responsibility for corporate action directly caused by collectively determined policies lies with individual actions further back in time, perhaps spread over many years" (pp. 539–540). But of course, one can insist away anything. Saying that UCMR disappears if we just look further back in a causal chain of events is unhelpful and unpersuasive. It does not dissolve the argument of UCMR to say that corporations are wholly comprised of small operations across time. As we shall see in our discussion of Thomas Donaldson, at root here is an effort to find a criterial distinction between persons and non-persons. Garrett thinks that reducing corporations to causal chains does away with the possibility that the corporation is a moral person. This argument is discussed at some length later in this essay.

The point Garrett makes anticipates and is related to a third type of criticism made against French. Thinkers who take up this argument see it as only a matter of common sense that a person must also be a human being, and attack French for drawing a crucial distinction where none can be drawn (Donaldson, 1982; Pfeiffer; Velasquez, 1983). Adherents of this argument tend still to be under the influence of Enlightenment metaphor, believing that humans are uniquely privileged by Nature or God and have souls or a special faculty called Reason which makes them different in kind from any possible other complex system.

Here I am mostly concerned with defending the theory that corporations can be moral persons against the third sort of argument, against the claim that humans are intrinsically endowed with intentionality and everything else is not. French has perhaps made himself vulnerable to criticisms coming from this direction because he has never fully explained and defended his claim that corporations are intentional systems. Specifically, after laying out a functionalist account of intentional systems, I will take up Thomas Donaldson's arguments against corporate personhood. Donaldson's main criticisms of French's formulations are useful because they are clear and concise, and reflect the intuitive objections held by many casual observers of this debate.

I am hesitant to be dogmatic about criterially driven notions of what constitutes a moral person. Nonetheless, I argue in section III that when two general characteristics for personhood are added to French's thoughts, his claim that corporations are moral persons becomes much stronger. In reaching these two characteristics I enlist the aid of Daniel Dennett for explaining a functionalist view of intentionality.

II. Dennett and the predictive strategy of intentionality

Daniel Dennett tenaciously holds to two things in his writing: parsimony and functionality. For reasons of prudence we must, as theorists, be as parsimonious as possible in the attempt to explain human (and, as we will see, nonhuman) behavior. In a number of articles Dennett appeals to Lloyd Morgan's Canon of Parsimony, which holds that one should attribute to an organism as little intelligence or consciousness or rationality or mind as will suffice to account for its behavior (Dennett, 1978). The rule of parsimony is necessary for such attributions because otherwise incorrect assumptions can create and magnify misperceptions on the part of the observer. Even given this minimalist account of rationality and consciousness, when it is combined with Dennett's functionalism it yields some interesting conclusions for the intentional character of corporations.

Dennett treats people, and as will be seen, much else, as intentional systems (Dennett, 1971). By intentional system Dennett means

the concept of a system whose behavior can be, at least sometimes, explained and predicted by relying on ascriptions to the system of beliefs and desires (and hopes, fears, intentions, hunches, . . .). I will call such systems intentional systems, and such explanations and predictions intentional explanations and predictions, in virtue of the idioms of belief and desire . . . (1978, p. 3).

And an intentional system is precisely "the sort of system to be affected by the input of information . . ." (1978, pp. 247–248). But as Dennett makes clear, an intentional system is not so called because of anything it intrinsically has (like belief-states, language, cognition, etc.). In fact, the notion of consciousness, at least on this point, is not an important one for Dennett's formulation. Dennett sees the idea of consciousness, as it is treated by theorists who believe in intrinsicality, as getting in the way of useful understanding about intention (e.g., Nagel, 1974, 1979, 1986; Searle, 1980, 1983). Dennett, by his constant reference to machines and the language of engineering and design, is on one level looking to demystify the notions of intending and belief (1991, pp. 259–262). He feels that he has the cure for the Cartesian hangover which has caused so much bad talk about human essence, original intention, and the like.

So it is not surprising that Dennett will immediately let the reader know that intentional systems are not determined criterially; rather, the determination of something as an intentional system is made on the basis of utility. This, of course, requires such a determination to be in the hands of a third-person observer. As Dennett writes, "a particular thing is an intentional system only in relation to the strategies of someone who is trying to explain and predict its behavior" (1978, pp. 3–4). And on Dennett's view, there is nothing that requires the ascription of intention to be limited to humans. The ascription of intention has nothing to do with intelligence or creative capacity; it has to do with the complexity of the system one wishes to talk about. Obviously, depending upon a person's training, background, education, and other environmental factors, what one person regards as an intentional system another would not. As Dennett says, "[a]ll that has been claimed is that on occasion, a purely physical system can be so complex, and yet so organized, that we find it convenient, explanatory, pragmatically necessary for prediction, to treat it as if it had beliefs and desires and was rational" (1978, p. 8). In what follows we will investigate three ways in which Dennett says that we approach organized collections of material and people.

Dennett's trinity

Dennett relates what he calls three stances which a person can take toward any system which can be said to have a behavior. Behavior as it is used here is extremely loose and covers just about all systems with an interactive feature, from electric eyes to persons. Dennett calls these the design, physical, and intentional stances. All of these stances are strategies for coping with system behavior. Determining which stance to adopt is often unconscious, but, of course, need not be so. And deciding which stance to adopt in a given situation is based on the belief about which one will yield the most accurate predictions about the subject system's behavior.

First is the design stance. This stance is taken by people attempting to predict the behavior of mechanical objects (1978, p. 4; 1991, p. 276). It is the how things work approach. An engineer, for example, will take a design stance when talking about a thermostat. The engineer knows fully the possible causal outcomes of all of the mechanism's operations. This stance, as with the intentional stance, also varies with education, environment, etc. It is possible, for example, for an extremely unmechanical person to take an intentional stance toward a relatively simple system.

The key to the design approach is that the elements get stupider as one goes down. In theory one could take a design down to the locus of a single logical operation. Dennett relates this idea in the following way:

[The] first and highest level of design breaks the computer down into subsystems, each of which is given intentionally characterized tasks; he composes a flow chart of evaluators, rememberers, discriminators, overseers and the like. These are homunculi with a vengeance; the highest level design breaks the computer down into a committee or army of intelligent homunculi with purposes, information and strategies. Each homunculus in turn is analysed into smaller homunculi, but, more important, into less clever homunculi. When the level is reached where the homunculi are no more than adders and subtractors, by the time they need only the intelligence to pick the larger of two numbers when directed to, they have been reduced to functionaries who can be replaced by a machine (1978, pp. 80–81).

    It is only through the large combination ofpossible outcomes created by the joining of lotsof logical operators that a machine is said to bean intentional system. At some level it willbecome impractical for even the most skilledengineer to maintain a design stance toward thiscollection of stupid units.

The design stance leads into the second stance discussed by Dennett, the physical stance. The physical stance is one taken toward a system that is in some way dysfunctional (1978, p. 4). It is also the stance we take when attempting to predict malfunctions of systems. This stance, unusual as it may be to do so, may also be taken toward humans. Doctors and psychotherapists generally learn to reason in this way about their patients, as do weight trainers and performance experts for athletes. Also, the physical stance may be used to determine a system's behavior through an analysis of the physical makeup of that system. As Dennett explains:


The chemist or physicist in the laboratory can use this strategy to predict the behavior of exotic materials, but equally the cook in the kitchen can predict the effect of leaving the pot on the burner too long. The strategy is not always practically available, but that it will always work in principle is a dogma of the physical sciences (1987, p. 16).

When systems are so operationally complex that it is impractical for the observer to employ the design or physical stance in the attempt to predict system behavior, the observer will employ the intentional stance. This stance is so prevalent and automatically assumed in the normal activities of life that we take it for granted. Much of our waking existence is taken up with predicting the behavior of intentional systems. This intentional stance is not a posited, or fictional, occurrence; we use it all the time. Neither is the application of this stance theoretical, much less a theory; it often requires no thought at all.

The intentional stance is a strategy for predicting the behavior of other complex systems, and it neither depends upon a notion of any internal qualia of the observed system nor conjectures whether such a system really thinks, feels, has a language, etc. Such concerns with internal states are superfluous to the intentional stance. All that is important is whether or not the ascription of beliefs and reason to the observed system yields desirable results for the observing/predicting party.

We must get over the urge, Dennett tells us, to continue to worry about the insides of such systems, the urge to question whether or not there is a parallel internal state to the external rational behavior we are counting on in our predictions. As Dennett writes,

[i]t is not that we attribute (or should attribute) beliefs and desires only to things in which we find internal representations, but rather that when we discover some object for which the intentional strategy works, we endeavor to interpret some of its internal states or processes as internal representations . . . What makes some internal feature of a thing a representation could only be its role in regulating the behavior of an intentional system (1987, p. 32).

Dennett is saying that the only useful way to think of intention is from the external, third-party perspective of functionality. Dennett is dogmatic in his defense of intention as a third-party prediction of some thing's response to some environmental stimulus (1987, pp. 15–22). The key to this external view is the ascription of rationality to the system in question, and if intentionality is from the stance of the observing party, then rationality must also be a third-person construction.

Ascriptions of rationality do not imply that the intentional system under observation be language using or even intelligent; for Dennett, rationality is part of an equipment concerned with prediction. And he claims that,

all there is to being a true believer is being a system whose behavior is reliably predictable via the intentional strategy, and hence all there is to really and truly believing that p (for any proposition p) is being an intentional system for which p occurs as a belief in the best (most predictive) interpretation (1987, p. 29).

In short, beliefs and intentions are not the sort of things that are helpful to think of as belonging to humans and only to humans. The ascription of rationality also means that we must talk as if intentional systems have belief states in the way we talk about people having belief states. Here Dennett is trying to break down the powerful background view of most readers that the world is severable cleanly along the lines of what is and is not a person. Dennett thinks that the belief that humans have beliefs and everything else does not is an intellectually harmful vanity. As Dennett says,

[t]he assumption that something is an intentional system is the assumption that it is rational; that is, one gets nowhere with the assumption that entity x has beliefs p, q, r . . . unless one supposes that x believes what follows from p, q, r . . .; otherwise there is no way of ruling out the prediction that x will, in the face of its beliefs p, q, r . . . do something utterly stupid, and, if we cannot rule out that prediction, we will have acquired no predictive power at all (1978, p. 11).

The idea then is to problematize the belief that rationality and personhood are only entailed by biological humans. Since most humans have a language, that makes us the most protean and powerful of intentional systems, but "communication . . . is not a separable and higher stance one may choose to adopt toward something, but a type of interaction within the intentional stance" (1978, p. 242). And "[r]eason, not regard, is what sets off the intentional from the mechanistic; we do not just reason about what intentional systems will do, we reason about how they will reason" (1978, p. 243). On this view, then, rationality is always a third-party ascription; there is nothing useful to be thought of as reason that exists in itself, or is criterially determined. Rationality need not be thought of as a core of humanness around which swirl supplemental, contingent items. It is an evaluation of system behavior, and persons just happen to be the most powerful and frequently encountered intentional systems. And as such, we persons generally grant, without reflection, the rationality of each other. Sometimes this charity proves unwarranted, and we are forced to adopt a predictive strategy other than the intentional stance. Untreated paranoid schizophrenics, or schizophrenics resistant to medication, often force a physical stance: an attempt to account for a failure of rationality (the failure of prediction of behavior under the intentional stance) based on a system malfunction.

So far we have come to an understanding of rationality and intention which requires no theorizing or investigation of mental states, core qualities, or of what demarcates humans from other intentional systems. We have no criteria of the rational, but we do have a strategy based on prediction which is a powerful evaluative device and information-gathering tool for dealing with other intentional systems. We need not worry about whether or not intentional systems possess intrinsic intentionality, whether or not they really have beliefs, for the results of that investigation, even if it could be resolved, carry no consequences for prediction or evaluations of rationality. All we need be concerned about is whether the ascription of intentional idioms to a particular intentional system works, or helps us to make decisions about the system. It may be that humans do have intrinsic intentionality and that many other intentional systems do not, and that we, as wetware, have an intentionality different in kind, rather than degree, from hardware systems or corporations. But as a pragmatist and, therefore, a functionalist in the broad sense of the term, I want to stick with what appears to be least theoretically attenuated from the activity under consideration.

    III. Corporations as intentional systems

From the account above, it should be clear that rationality is not static, nor does it have a life of its own; it is completely beholden to the efficacy of prediction. As the context of prediction changes so will rationality. So there are a multitude of rationalities, but no central or privileged rationality. And what we predict of an intentional system is based on context and what we expect from members of a particular group under similar circumstances. Such predictions can take the form of what would be expected of Americans in circumstance x, or the very narrow expectation of what a gene therapy scientist is expected to do in the lab under situation y.

Prediction and the concomitant evaluation of rationality are based on education, experience, and intelligence. And of course we can find people to be acting irrationally in certain contexts without being forced to forego the intentional stance. For example, a judge who resorts to her own personally held religious doctrine to decide a case may, from the perspective of law practice, be acting irrationally, but her status as an object of the intentional stance is not in jeopardy. She may not be fit to be a judge, but it is likely that over a range of legal and nonlegal contexts her behavior will confirm predictions at a rate not unlike other persons in the same culture.

Also, in assuming the intentional stance we come presumptively preconfigured with respect to expectations, since what we expect or predict of other intentional systems depends on what we have learned and observed from individual behavior within the context of culture. I say presumptively preconfigured because, of course, we can adapt, with training or observation, to new rationalities and expectations. This very ability to adapt is what also creates the distance from one rationality to another, and it is not misleading to say that rationalities are group adaptations.

French starts us in the right direction when he says that it is the minimal requirement for a corporation to be a moral person that it is an intentional actor. But of course, he does not mean to imply that intentionality is a sufficient condition for moral personhood. There are all kinds of intentional actors that are not and cannot be moral persons. Tigers, chess-playing computers, and nuclear reactors all warrant, on some occasions, that we adopt the intentional stance in order to predict their activity. But none of these things can reasonably be thought of as potential or actual moral persons. Under Dennett's analysis, tigers, chess-playing computers, and nuclear reactors are intentional systems and, perforce, for Dennett, are intentional actors. But characteristics beyond those held by these items seem necessary for an intentional system to be a person.

    Do corporations have minds?

Many thinkers believe that corporations can be said to have intentionality in only a trivial and insubstantial sort of way. On this view, humans have original or intrinsic intentionality, while all other things spoken of as intentional actors have derived intentionality. The crucial distinction for these critics is that there is something going on in a human brain, some irreducible event, which does not occur in intentional actors with derived intentionality. Coin readers in soft drink machines, for example, have only derived intentionality: a coin reader's intentional state of reading coins accurately is completely beholden to the desires of items with original intentionality (humans). It would be incorrect to talk about coin readers as things with intentional states unless one made reference to the intentions of their human creators. And nothing without irreducible mind-stuff can have original intentionality.

Obviously, corporations do not have central brains where electro-chemical reactions occur, but it is unclear that this lack of a brain also means that corporations lack minds. If we side with Dennett and mean by mind the sheer organized combinatorial complexity of a particular system, then corporations do indeed have minds. Dennett's point is that the complexity of human minds gives rise to the belief that there is a difference in kind rather than degree between humans and other intentional systems. He disagrees with that belief, and, as he puts it, "My view is that belief and desire are like froggy belief and desire all the way up . . . We human beings are only the most prodigious intentional systems on the planet . . ." (1987, p. 112).

In analyzing the nature of consciousness, Dennett adopts a design stance and sees the brain as consisting of many stupid homunculi, each of which is dedicated to some specific problem-solving task. These homunculi are grouped together and forced into communication patterns by what Dennett and Richard Dawkins have termed memes: more or less identifiable (and complex) cultural units (1991, p. 201). Examples of memes are wheel, vendetta, and calendar. Consciousness is

itself a huge complex of memes (or more exactly, meme-effects in brains) that can best be understood as the operation of a von Neumannesque virtual machine implemented in the parallel architecture of a brain that was not designed for any such activities. The powers of the virtual machine greatly enhance the underlying powers of the organic hardware on which it runs (1991, p. 210).

Assuming the defensibility of Dennett's claims, it is unclear how one would exclude, on his terms, corporations as conscious entities. Corporations are just as subject to meme-effects as are humans. They have identifiable personalities and are driven to certain conclusions and actions by the acceptance of some memes as important for corporate identity and the rejection of other memes as inimical to that perceived identity. Take the meme liability exposure, a meme that no corporation rejects or ignores if it wants to survive very long. The creation of this meme itself assumes the adaptive rationality of corporations, for over the last century this meme has come to stand for penalty-induced corporate changes of behavior; changes of behavior viewed as socially desirable. But the meme-effects of liability exposure on corporations vary widely, largely depending on the corporate personality. As language users, corporations are culturally conditioned in the same way people are.

French is right to say that at a minimum a moral person must be an intentional actor, but this claim can be made more convincing by supplementing it with several other claims. For while intentionality is a necessary condition for moral personhood, it is not a sufficient one.

    Corporate grammar and adaptability

    I would add two further conditions for mem-

    bership in the class of moral persons. First, ourtremendous capacity for intentional action ismade possible by language, and humans, becauseof language, have a broad-ranging and subtleintentional complex. What is obvious but some-times unappreciated is that corporations, no lessthan humans, are language users. They are notonly affected by the input of information (as mustbe all intentional systems) but they also talk back.Corporations not only use information, they alsomake information, they attempt to persuade,manipulate, inform, depress, uplift, debate, anddebunk other intentional systems and theiractions. All the communicative capacities avail-able to a normal human are just as available tocorporations. Perhaps corporations have evengreater capacity since they are not as subject tocultural and linguistic limitations as the averagehuman is. If we wanted to seize on the crucialcharacteristic for moral personhood it may notbe intentionality but language-using capacity. Forif a system has a language, then ipso facto it mustbe an intentionalsystem and also be adaptable to

multiple rationalities.

Each corporation has its own idiosyncrasies of grammar and syntax, just as each human does.

This corporate syntax is not reducible to the human members of the corporation, just as corporate actions have been shown by French and others not to be reducible to the actions of its

employees. Corporate grammar and syntax manifest themselves not only through team-written items or board directives, but also through informal channels not reflected in the official CID Structure and through the personality of the corporation. Even single-authored items in a corporation are likely to exhibit corporate syntax, for one intentional system (the human author) will probably be concerned with sublating her own idiosyncrasies, beliefs, and desires to those of the corporation as she perceives them. Any corporate lawyer who has ever authored a liability exposure study, or any management staffer who has written up a buying or product suggestion, probably knows precisely what it means to become part of the corporate grammar. Human members of corporations learn to speak the corporate language; they learn facility with a discursive practice which communicates corporate intentions, beliefs, hopes, etc.

Second, besides the capacity for acquiring and using language in an original or idiosyncratic way, an intentional system must be adaptive in order to be held a moral person. It must be able to function in a number of different rationalities. In a major sense the root of moral condemnation is founded in the tacit understanding of adaptivity. When intentional systems are unable to adapt their behavior we generally do not hold them to be morally responsible for their actions.1 The perceived capacity for adaptation is necessary to give rise to moral judgment. Of course this has not always been the case. Under the ancient legal doctrine of deodand, many items were held morally culpable for their behavior. In Great Britain trees, horses, dogs, etc., were sometimes held accountable for harm that they caused.

But even on the contemporary view there seems no reason to suspect that corporations do not meet the adaptivity requirement.

Corporations do travel in and out of a great number of rationalities, just as other persons do. Stanley Fish might alternatively say that corporations function in a variety of "interpretive communities" (Fish, especially chs. 14 and 15).

94 William G. Weaver



IV. Thomas Donaldson's criticisms of corporate moral personhood

Keeping in mind the functionalist account of intentionality discussed here as it relates to corporations, we can now look at some specific criticisms of corporations as intentional actors.

Thomas Donaldson in Corporations and Morality (1982) discusses the corporation as a moral person, and his criticisms of this approach are pointed, concise, and well made. Donaldson attacks the idea of corporations as intentional actors in several ways. First, he writes,

In order for corporations to be agents, the Moral Person view holds that they must satisfy the definitions of agency, or in other words, be capable of performing intentional actions. But can corporations really perform such actions? Flesh and blood people clearly perform them when they act on the basis of their beliefs and desires, yet corporations do not appear to have beliefs, desires, thoughts, or reasons (1982, p. 21).

Here Donaldson is uncritically assuming the position held by Nagel and Searle. He believes that there can be no minds without brains. But the functionalist might respond by pointing out that we often treat corporations as we do other people. That is, we adopt the intentional stance toward corporations because it is the best predictive strategy of what a corporation will do. We sometimes think that corporations will think that we think that they think about x. In other words we not only think of corporations as intentional systems, we also think of them as so complex that it is best, for our predictive strategies, to think of them as things with minds.

The combinatorial complexity of corporations may not rival that of the human mind, but it rivals everything else we know of in nature. If Dennett is right, if combinatorial complexity and meme-effects are two of the keys to understanding what we call consciousness, then it is arguable that corporations have minds. If it is like froggy belief and desire all the way up, if what sets humans apart from other intentional systems is their capacity for complexity, then we should be willing to see personhood as open for club membership to nonhuman systems which possess near or like capacity.

Further, corporations arguably share with humans the vulnerability to humiliation, a characteristic which some contemporary theorists make much of (e.g., Rorty, 1989). Corporations are not merely language users that respond to the input of information; they also experience the full range of emotions. And perhaps humiliation is the emotion most closely thought of as available only to humans, for it implicates an entire panoply of cultural subtlety. It is beyond the range of this essay to attempt to prove this point, and in any case such a proof is not necessary for my claims. But nothing tells me that corporations cannot suffer from humiliation.

Second, Donaldson writes:

Consider the analogy of a game. In games, the rules determine which actions count as legitimate moves, and in corporations certain rules determine what counts as, say, a decision by the board of directors. But the rules of a game fail to tell us what the game itself intends (in fact, it makes little sense to say that the game intends anything) and one can argue that the same is true for corporations. If corporations are made up of rules, policies, and power structures, then we can tell what counts in the context of those rules, policies and structures; but we cannot tell clearly from these what the combined rules, policies, and structures themselves intend (1982, p. 22).

There are a number of problems with this statement. First, it should be obvious that corporations are not merely collections of rules, policies and structures, any more than humans are collections of neurons, musculature, bone mass, internal organs, etc. If I am right in claiming that corporations have language and minds, then the design account of corporations given by Donaldson here no more gets at the nature of corporations than the design account of a human body gets at the nature of what it is to be human. At times, in certain contexts, as discussed, the design and physicalist stances are appropriate predictive strategies. But these stances no more exhaust the possible explanations of what a corporation is than they exhaust the possible explanations of what a human is.




Second, Donaldson assumes that the source of intentional action must come from the collection of rules, policies, and procedures that help comprise a corporation. But this seems as helpful as saying that the source of intention for speakers of English is the rules of grammar governing that language. Such an observation, of course,

does not lead to the conclusion that since no such intention can be pulled out of these rules of grammar, then speakers of English must not be intentional systems. Donaldson's mistake is to think that the source of intentionality in corporations must be substantially or only found in its policies, procedures, rules, and the like. But just as with humans, corporate sources and causes of intentionality are broad-ranging and subtle, limited only by either's linguistic capacity.

Third, Donaldson seems to be using the word "intention" in a general, abstract way, and then

saying that since corporations do not have a general intention, they must not be eligible for corporate moral personhood. Two observations can be made about this claim. First, if we reflect back on Dennett we will see that the intentional stance is not about abstracts; it is about action and prediction under a specified set of circumstances. It is situational, and it is not obvious that corporations are generally any worse than humans in effectively assuming an intentional stance given a particular set of circumstances. Donaldson's claim is akin to the ancients' question of "How shall we live?" or the theological query as to the reason for the existence of humans. If Donaldson means to say that corporations cannot and do not wonder about things in these big ways as humans do, we may still grant this point yet maintain that this does not mean that corporations cannot be moral actors. If this is what Donaldson believes, then he would make these large wonderings a crucial criterion for moral personhood. This may be true, but it is not obviously so. Donaldson needs to argue for

this point, not simply point it out as a difference (one that I am not sure I would concede at any rate) between humans and corporations and then continue as if the point is self-executing. Second, he assumes that if we ask humans what a game intends, or what its purpose is, or why we play it, he will get responses which overlap to a substantial degree. But this assumption seems unlikely to be true. In any event, Donaldson has provided us with no situation specific enough to be testable.

    Finally, Donaldson claims that

[The Moral Person] view assumes that anything which can behave intentionally is an agent, and that anything which is an agent is a moral agent. But some entities appear to behave intentionally which do not qualify as moral agents. A cat may behave intentionally when it crouches for a mouse. We know that it intends to catch the mouse, but we do not credit it with moral agency (though we may object on moral grounds to its mistreatment). A computer behaves intentionally when it sorts through a list of names and rearranges them in alphabetical order, but we do not consider the computer to be a moral agent. Perhaps corporations resemble complicated computers; perhaps they, according to complicated inner logic, function in an intentional manner but fail altogether to qualify as moral agents. One seemingly needs more than the presence of intentions to deduce moral agency (1982, p. 22).

But there seems no apparent reason why we should think of intentional systems as no more than a coherent collection of parts and then use that observation to demarcate the person from the nonperson. Here the functionalist would simply agree with Donaldson's statement, but add that the human brain, like corporations, also resembles a complicated computer. And both of these types of computers are subject to meme-effects and conceptual influence. To think that corporations are reducible to their parts, but that no such reduction is possible for humans, is to replicate the thinking of those who believe humans to be the only holders of intrinsic intentionality. Such a position is a highly respected one in philosophy, but as noted it is not without a large and vocal opposition. Donaldson cannot simply recast the conclusions of this argument and expect those conclusions to be held self-evidently true. To put this another way, Donaldson assumes a design stance toward corporations and believes that that exhausts all the useful explanatory space necessary to understand corporate behavior.


