
An intersubjective measure of organizational complexity: A new approach to the study of complexity in organizations*

Mihnea Moldoveanu

    Rotman School of Management, University of Toronto, CA

This paper attempts to accomplish the following goals:

1. formulate and elaborate the epistemological problem of studying organizational complexity qua phenomenon and of using organizational complexity qua analytical concept in the study of other organizational phenomena;

2. propose and defend a solution to this epistemological problem by introducing a definition of complexity that (i) introduces the dependence of the complexity of an object on the model of the object used, without either (ii) falling into a fully subjective and relative view of complexity or (iii) falling into a falsely subject-independent view thereof, and thus (iv) making precise the subjective and objective 'contributions' to the definition of complexity, to the end of (v) making complexity tout court a useful analytical construct or hermeneutic device for understanding organizational phenomena;

3. show how the new view of complexity can be usefully applied in conjunction with classical, well-established models of organizations to understand the organizational phenomena that are paradigmatic for the research tradition of each of those models;

4. derive the implications of the new view of organizational complexity for the way we study and intervene in organizational life-worlds.

The study of organizational complexity faces a difficult epistemological problem

Organizational complexity has become a subject of study in organizational research (see, for instance, Anderson, 1999). Researching organizational complexity requires one to confront and ultimately resolve, dissolve or capitulate to the difficulties of defining the property of complexity of an organizational phenomenon, and often of defining and defending a complexity measure for organizational phenomena, which allows one to declare one phenomenon more complex than another. This minimal conceptual equipment is necessary in view of the age-old concern of scientifically minded researchers to turn qualitative impressions into quantitative measures and representations, but it faces a serious epistemological problem that is connected to the basic ontology of proposed complexity metrics and complexity spaces. Here is an exposition of that problem, in brief.

Outline of the epistemological problem of talking about complexity and using the term 'complexity'

As with all measures that can be used as research tools, we would like our measures of complexity to be intersubjectively valid: two observers A and B ought, through communication and interaction, to come to an agreement about the complexity of object X, in the same way that the same two observers, using a yardstick, can come to an agreement about the length of 'this football field'. The epistemological problem we would like to draw attention to is that of achieving such an intersubjectively valid measure. Of course, questions such as 'how would we know a complex phenomenon if we saw it?' and 'how can the complexity of different phenomena be compared?' would also be resolved by a solution to the core epistemological problem.

Various definitions of 'complexity' in the literature can be understood as purposeful attempts to come to grips with the problem I have outlined above. Thus, consider:

Complexity as structural intricacy: The strong objective view of complexity

The outcome of an era of greater self-evidence in matters epistemological, the structuralist view of complexity is echoed (as we shall see, with a twist) in organizational research in Simon's (1962) early work and in Thompson's (1967) seminal work on organizational dynamics. It is not dissimilar from structuralist analyses of physical and biological structure and function (D'Arcy Thompson, 1934; von Bertalanffy, 1968). It is based on a simple idea: complex systems are structurally intricate. Structural intricacy can best be described by reference to a system that has (a) many


* This paper is taken from the soon-to-be published edited collection Managing Organizational Complexity: Philosophy, Theory and Application, K. A. Richardson (ed.), Information Age Publishing, due June 2005.


parts that are (b) multiply interconnected. In the years since Edward Lorenz's discovery in 1963 of chaotic behavior in simple dynamical systems, we have come to know both: that there exist simple (by the structuralist definition) systems that nevertheless compel us to classify them as complex (such as chaotic systems), and complex systems (by the structuralist definition) that behave in ways that are more characteristic of simple systems (large modular arrays of transistors, for instance). A finer set of distinctions was called for, and many of these distinctions can be perceived from a more careful study of Simon's work.

Simon (1962) did not leave the view of complex systems at this: he postulated that complex systems are made up of multiply-interacting multitudes of components, in such a way as to make predictions of the overall behavior of the system, starting from knowledge of the behavior of the individual components and their interaction laws or mechanisms, difficult. Unwittingly (to many of his structuralist followers), he had introduced the predicament of the observer, the predictor, the modeler, the forecaster, perhaps the actor him/herself, into the notion of the complexity of a phenomenon. But this slight sleight of hand remained unnoticed, perhaps in part due to Simon's own emphasis on the structural component of a definition of complexity in the remainder of his (1962) paper. The (large, and growing) literature in organization science that seeks to understand complexity in structuralist terms (as numbers of problem, decision, strategic or control variables and the number of links among these variables, or the number of value-linked activity sets and the number of links among these activity sets - Levinthal & Warglien, 1999; McKelvey, 1999) attests to the fruitfulness of the structuralist definition of complexity (NK(C) models of organizational phenomena can be deployed as explanation-generating engines for product development modularization, firm-level and group-level strategic decision processes, the evolutionary dynamics of firms, products and technologies, and many other scenarios), but it does not fully own up to the cost that the modeler has to bear in the generalizability of his or her results.

These costs can be understood easily enough if one is sufficiently sensitive to (a) the relativity of ontology, and (b) the effects of ontology on model structure. There is no fact of the matter about the identity and the number of interacting components that we may use in order to conceptualize an organizational phenomenon. (Alternatively, we may think of the problem of establishing an ontology as self-evident as an undecidable problem.) We may think of organizations as interacting networks of people, behaviors, routines, strategies, epistemologies, emotional states, cultural traditions, and so forth. Moreover, we may expect that within the same organizational phenomenon, multiple such individuations may arise, interact with one another and disappear. This leaves in doubt both the essence of the modules or entities that make up the part-structure of the organizational whole, and the law-like-ness of the connections between these entities. Surely, phenomena characterized by shifting ontologies, changing rule sets and interactions between co-existing, incommensurable ontologies exist (consider cultural transitions in post-communist societies) and are complex, but they are not easily captured in NK(C) models or other models based on networks of simple modules interacting according to locally simple rules. Thus, in spite of the very illuminating analysis of some complex macro-structures as nothing but collections of simple structures interacting according to simple local rules, the structuralist analysis of complexity imposes a cost on the modeler because of an insufficient engagement with the difficult epistemological problem of complexity.
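To make the kind of model the structuralist tradition works with concrete, the following sketch (a minimal illustration; all names and parameters are chosen for exposition and are not drawn from the NK(C) literature) builds a toy Boolean network of N nodes, each updated by a randomly drawn rule over K neighbours, and steps it forward:

```python
import random

def make_nk_network(n, k, seed=0):
    """Build a toy NK-style Boolean network: n binary nodes, each driven
    by a random Boolean function of k randomly chosen neighbours."""
    rng = random.Random(seed)
    neighbours = [rng.sample(range(n), k) for _ in range(n)]
    # Each node's update rule is a lookup table over its k inputs (2**k entries).
    tables = [[rng.randint(0, 1) for _ in range(2 ** k)] for _ in range(n)]
    return neighbours, tables

def step(state, neighbours, tables):
    """Synchronously update every node from the states of its neighbours."""
    new_state = []
    for nbrs, table in zip(neighbours, tables):
        index = 0
        for bit_pos, nbr in enumerate(nbrs):
            index |= state[nbr] << bit_pos
        new_state.append(table[index])
    return new_state

if __name__ == "__main__":
    n, k = 8, 2
    neighbours, tables = make_nk_network(n, k)
    state = [random.Random(1).randint(0, 1) for _ in range(n)]
    for t in range(5):
        print(t, state)
        state = step(state, neighbours, tables)
```

Even this caricature already presupposes an ontology (what counts as a node, a state, a link), which is precisely the choice the structuralist measure leaves unexamined.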

Complexity as difficulty: The subjective view

Running parallel to the structuralist approach to the definition of complexity is a view that considers the complexity of a phenomenon to be related to the difficulty of making competent, valid or accurate predictions about that particular phenomenon. This view was certainly foreshadowed in Simon's (1962) work, when he stipulated that structurally complex systems are complex in virtue of the fact that predicting their evolution is computationally nontrivial. Of course, he did not consider the possibility that structurally simple systems can also give rise to unpredictable behavior, as is the case with chaotic systems (Bar-Yam, 2000). A system exhibiting chaotic behavior may be simple from a structuralist standpoint (a double pendulum is an example of such a system), but an infinitely accurate representation of its initial conditions is required for an arbitrarily accurate prediction of its long-time evolution: phase space trajectories in such a system diverge at an exponential rate from one another (Bar-Yam, 2000). Thus, Simon's early definition of complexity needs to be amended so as to uncouple structural intricacy from the difficulty of making predictions about the evolution of a system.
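The point can be made concrete with a structurally trivial system. The sketch below uses the one-line logistic map as a simpler stand-in for the double pendulum mentioned above: two initial conditions that agree to ten decimal places diverge within a few dozen iterations.

```python
def logistic(x, r=4.0):
    """One step of the logistic map, a structurally trivial one-line rule."""
    return r * x * (1.0 - x)

# Two initial conditions that agree to 10 decimal places.
x, y = 0.2, 0.2 + 1e-10
for step in range(60):
    x, y = logistic(x), logistic(y)
    if step % 10 == 0:
        print(f"step {step:2d}  divergence {abs(x - y):.3e}")
```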

This difficulty - of predicting the evolution of complex systems - may not be purely informational (i.e., it may not merely require a theoretically infinite amount of information about initial or boundary conditions). Thus, Rivkin (2000) shows that the problem of predicting the evolution of Boolean networks made up of simple nodes coupled by simple causal laws (NK(C) networks) is computationally intractable when the average number of connections per node in the system increases past a (low) threshold value. And simple paradoxes in second-order logic highlight the fact that undecidability can arise even in logical systems with a very small number of axioms (e.g., deciding on the truth value of 'I am telling a lie').

The subjective difficulty of predicting the evolution of a complex phenomenon thus seems to be connected to structural complexity in ways that are significantly more subtle and complicated than was prefigured in Simon's early model. This situation has led some to treat complexity as a purely subjective phenomenon, related to predictive or representational difficulty alone (Li & Vitányi). This has made complex phenomena natural candidates for study using paradigms for the study of judgment and decision making under uncertainty.

This tendency is easy enough to understand: an uninformed, computationally weak observer will find interaction with a complex phenomenon to be a predicament fraught with uncertainty and ambiguity (as he or she will not be able to predict its evolution). What matters then is not whether or not a phenomenon is complex in some objectively or intersubjectively valid way, but rather whether or not it is difficult for the observer that must interact with this phenomenon to make competent predictions about it, and how such an observer makes his or her predictions. Thus, the very large literature on cognitive biases and fallacies in human reasoning under uncertainty and ambiguity (Kahneman, et al., 1982), or on heuristic reasoning in 'foggy' predicaments, can be understood as a branch of the study of complexity, as it studies the ways in which people deal with a core characteristic of complex phenomena, namely, the predictive 'fog' with which they confront human intervenors and observers.

This state of epistemological affairs will hardly be satisfying to those who want to study essential characteristics of complex phenomena - characteristics that are invariant across observational frames, cognitive schemata and computational endowments of the observers of these phenomena. Such researchers will want to cut through the wealth of complexity-coping strategies that humans have developed over the millennia to the core of what it means for a phenomenon to be complex, and to investigate complexity per se, rather than complexity relative to the way of being-in-the-world of the observer. Such an ambition is not, on the face of it, ridiculous or misguided, as many useful strategies for dealing with complex systems can be discerned from the study of prototypical, simplified 'toy' models of complexity. For instance, the study of chaotic systems has given rise to approaches to the harnessing of chaos for the generation of secure communications systems that use chaotic waveforms to mask the secret data that one would like to convey across a wire-tapped channel; and the study of computationally complex algorithms (Cormen, et al., 1993) has given rise to strategies for distinguishing among computationally tractable and intractable problems and finding useful tractable approximations to intractable problems (Moldoveanu & Bauer, 2004).

Nevertheless, purely structural efforts to capture the essence of complexity via caricaturized models have failed to achieve the frame-invariant characterization of complexity that some researchers have hoped for. Structurally intricate systems can exhibit simple-to-predict behavior, depending on the interpretive frame and computational prowess of the observer. Difficult-to-predict phenomena can be generated by structurally trivial systems. All combinations of structural intricacy and predictive difficulty seem possible, and there is no clear mechanism for assigning complexity measures to phenomena on the basis of their structural or topological characteristics. An approach that combines the felicitous elements and insights of both the objective and subjective approaches is called for. We will now attempt to provide such a synthetic view of complex phenomena.

The fundamental problem of complexity studies can be dissolved if we look carefully into the eye of the beholder: the phenomenon never speaks for itself, by itself

A solution to the epistemological problem of speaking about the complexity of a phenomenon is provided by looking carefully at the 'eye of the beholder'. It is itself a 'difficult to understand' entity, because it is intimately coupled to the cognitive schemata, models, theories and proto-concepts that the beholder brings to his or her understanding of a phenomenon. It is through the interaction of this 'eye' and the phenomenon 'in itself' that 'what is' is synthesized. In Hilary Putnam's words (1981), 'the mind and the world together make up the mind and the world'. Thus, whatever solution to the epistemological problem of complexity is proposed, it will have to heed the irreducibly subjective aspect of perception, conceptualization, representation and modeling. But there is also an irreducibly objective component to the solution as well: schemata, models and theories that are 'in the eye of the beholder' cannot, by themselves, be the foundation of a complexity measure that satisfies minimal concerns about inter-subjective agreement, because such cognitive entities are constantly under the check and censure of the world, which provides opportunities for validation and refutation. This suggests that a fruitful way to synthesize the subjective and objective viewpoints on the complexity of a phenomenon is to measure the complexity of intersubjectively agreed-upon or in-principle intersubjectively testable models, representations and simulations of that phenomenon.


This presents us with a problem that is well-known to epistemologists, at least since the writings of Kuhn (1962). It is the problem of coming up with a language (for referring to the complexity of a model or theory) that is itself outside of the universe of discourse of any one model, theory or representation. Kuhn pointed to the impossibility of a theory-free observation language, a language that provides observation statements that are not sullied by theoretical language. Putnam (1981) pointed to the impossibility of a theory-free meta-language, a language that contains statements about other possible languages without itself being beholden to any of those languages. Both, however, remained in the realm of language as it is understood in everyday parlance, or in the formal parlance of the scientist. To provide a maximally model-free conceptualization of complexity, I will instead concentrate on language as an algorithmic entity, a program that runs on a universal computational device, such as a Universal Turing Machine (UTM). Admittedly, UTMs do not exist in practice, but the complexity measure I put forth can be particularized to specific instantiations of a Turing Machine. (The costs for doing so, while not trivial, are not prohibitive of our effort.)

If we allow this construction of a language in which a complexity measure can be provided, the following way of conceptualizing the complexity of a phenomenon suggests itself: the complexity of a phenomenon is the complexity of the most predictively competent, intersubjectively agreeable algorithmic representation (or computational simulation) of that phenomenon. This measure captures both subjective and objective concerns about the definition of a complexity measure. It is, centrally, about predictive difficulty. But it is also about intersubjective agreement about both the semantic and syntactic elements of the model used, and about the purpose, scope, scale and accuracy required of the predictions, and therefore about the resulting complexity measure. Thus, the complexity of a phenomenon is relative to the models and schemata used to represent and simulate that phenomenon. It is subjective. But, once we have intersubjective agreement on ontology, validation procedure and predictive purpose, the complexity measure of the phenomenon being modeled, represented or simulated is intersubjective (the modern word for 'objective').

I now have to show how 'difficulty' can be measured, in a way that is itself free of the subjective taint of the models and schemata that are used to represent a phenomenon. To do so, I break up 'difficulty' into two components. The first - informational complexity, or informational depth (Moldoveanu & Bauer, 2004) - relates to the minimum amount of information required to competently simulate or represent a phenomenon on a universal computational device. It is the working memory requirement for the task of simulating that phenomenon. The second - computational complexity, or computational load (Moldoveanu & Bauer, 2004) - relates to the relationship between the number of input variables and the number of operations that are required by a competent representation of that phenomenon. A phenomenon is 'difficult to understand' (or, to predict): if its most predictively competent, intersubjectively agreeable model requires an amount of information that is at or above the working memory endowments of the modeler or observer; if the computational requirements of generating predictions about such a phenomenon are at or above the computational endowments of the modeler or observer; or both together. To make progress on this definition of complexity and, especially, on its application to the understanding of the complexity of organizational phenomena of interest, we need to delve deeper into the nature of computational load and informational depth.

The informationally irreducible: What informational depth is and is not

The view of informational depth presented here does not differ from that used in the theory of algorithms and computational complexity theory (Chaitin, 1974). The informational depth of a digital object (an image, a representation, a model, a theory) is the minimum number of elementary symbols required in order to generate that object using a general-purpose computational device (Chaitin, 1974). Without loss of generality, we shall stipulate that these symbols should be binary (ones and zeros), knowing that any M-ary alphabet can be reformulated in terms of a binary alphabet. Of course, it matters to the precise measure of informational depth which computational device one uses for representational purposes, and, for this reason, we stipulate that such a device be a Universal Turing Machine (UTM). We do this in order to achieve maximum generality for the measure that I am proposing, but at the cost of using a physically unrealizable device. Maximum generality is achieved because a UTM can simulate any other computational device, and therefore can provide a complexity measure for any simulable digital object. If that object is simulable on some computational device, then it will also be simulable on a UTM.

Admittedly, the cost of using an abstract notion of a computational device (rather than a physically instantiable version of one) may be seen as high by some who are minded to apply measures in order to measure that which (in reality) can be measured, rather than in order to produce qualitative complexity classes of phenomena. In response, we can choose to relax this restriction on the definition of 'the right' computational device for measuring complexity, and opt to talk about a particular Turing machine (or other computational device, such as a Pentium or PowerPC processor, powering IBM and Mac clone machines). This move has immediate consequences in terms of the resulting definition of informational depth (a digital object may take fewer symbols if it is stored in the memory of a Pentium processor than if it is stored in the memory of a PowerPC processor), but this is not an overpowering argument against the generality of the measure of informational depth I am putting forth. It simply imposes a further restriction on the computational device that is considered to be the standard for the purpose of establishing a particular complexity measure. To achieve reliability of their complexity measures, two researchers must agree on the computational platform that they are using to measure complexity, not just on the model of the phenomenon whose complexity they are trying to measure, and on the boundary conditions of this model (the class of observation statements that are to be considered legitimate verifiers or falsifiers of their model).

What is critically important about the informational depth of a digital object is its irreducibility: it is the minimum length (in bits) of a representation that can be used to generate a digital object given a computational device, not the length of just any representation of that object on a particular computational device. Informational depth is irreducible, as it refers to a representation that is informationally incompressible. The sentence (1) 'the quick brown fox jumped over the lazy dog' can be compressed into the sentence (2) 'th qck brn fx jmpd ovr lzy dg' without information loss (what is lost is the convenience of quick decoding), or even to (3) 't qk br fx jd or lz dg', but information is irretrievably lost if we adopt (4) 't q b fx jd or lz dg' as shorthand for it. Correct decoding gets increasingly difficult as we go from (1) to (2) to (3), and suddenly impossible as we go from (3) to (4). We may say, roughly, that (3) is an irreducible representation of (1), and therefore that the informational depth of (1) is the number of symbols contained in (3). (Note that it is not a computationally easy task to establish informational irreducibility by trial and error. In order to show, for instance, that (3) is minimal with regard to the true meaning of (1) (given reliable knowledge of the decoder, which is the reader as s/he knows her/himself), one has to delete each symbol in (3) and examine the decodability of the resulting representation. The computational load of the verification of the informational minimality of a particular representation increases nonlinearly with the informational depth of that representation.)
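A rough way to operationalize this idea in practice is to use an off-the-shelf compressor as a stand-in for the (uncomputable) minimal description. The compressed length is only an upper bound on informational depth, but it lets two observers who agree on the compressor compare representations; the sketch below applies this to the four sentences above and is an illustration, not part of the measure defined in this paper.

```python
import zlib

def compressed_length(text: str) -> int:
    """Length in bytes of a zlib-compressed encoding: a crude upper bound
    on the informational depth of the string, since true minimal
    descriptions are uncomputable."""
    return len(zlib.compress(text.encode("utf-8"), 9))

sentences = [
    "the quick brown fox jumped over the lazy dog",   # (1)
    "th qck brn fx jmpd ovr lzy dg",                  # (2)
    "t qk br fx jd or lz dg",                         # (3)
    "t q b fx jd or lz dg",                           # (4)
]
for s in sentences:
    print(len(s), compressed_length(s), repr(s))
```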

Informational irreducibility and the commonality of a platform for representation together are necessary conditions for the objectification of informational depth as a complexity measure. The first guides the observer's attention and effort towards the attainment of a global informational minimum in the representation of the effort. The latter stipulates that observers use a common benchmark for establishing informational depth. The informational depth of a phenomenon, the informational component of its complexity measure, can now be defined as the minimum number of bits that an intersubjectively agreeable, predictively competent simulation of that phenomenon takes up in the memory of an intersubjectively agreeable computational platform. All the platform now needs to do is to perform a number of internal operations (to compute) in order to produce a simulation of the phenomenon in question. This takes us to the second component of our complexity measure:

The computationally irreducible: What computation is and is not

As above, we will consider to have fixed (a) our model of a phenomenon, (b) the boundary conditions for verification or refutation of the model, and (c) the computational device that we are working with. We are interested in getting a measure of the computational difficulty (i.e., the computational load) of generating predictions or a simulation of that phenomenon using the computational device we have used to store our representation of it. If, for example, the phenomenon is the market interaction of oligopolistic firms in a product or services market and the agreed-upon model is a competitive game-theoretic one, then the representation (the informational component) of the phenomenon will take the form of a set of players, strategies, payoffs and mutual conjectures about rationality, strategies and payoffs, and the computational component will comprise the iterated elimination of dominated strategies required to derive the final market equilibrium. The most obvious thing to do to derive a complexity measure is to count the operations that the computational device requires in order to converge to the required answer. Two problems immediately arise:

P1. The resulting number of operations increases with the number of data points or input variables even for what is essentially the same phenomenon. Adding players, strategies or conjectures to the example of game-theoretic reasoning above, for instance, does not essentially change the fact of the matter, which is that we are dealing with a competitive game. We would surely prefer a complexity measure that reflects the qualitative difference between solving for the Nash equilibrium and solving (for instance) for the eigenvalues of a matrix (as would be the case in a linear optimization problem);

P2. Many algorithms are iterative (such as that for computing the square root of N) and can be used ad infinitum, recursively, to generate successively sharper, more accurate approximations to the answer. Thus, their computational load is in theory infinite, but we know better: they are required to stop when achieving a certain level of tolerance (a certain distance from the 'right' answer, whose dependence on the number of iterations can be derived analytically, on a priori grounds).

Both (P1) and (P2) seem to throw our way of reasoning about computational difficulty back into the realm of arbitrariness and subjectivity, through the resulting dependence on the precise details of the problem statement (P1) and the level of tolerance that the user requires (P2). To rectify these problems, we will require two modifications to our measure of computational load:

M1. I shall define computational load relative to the number of input variables to the algorithm that solves the problem of simulating a phenomenon. This is a standard move in the theory of computation (see, for instance, Cormen, et al., 1993);

M2. I shall fix (or require any two observers to agree upon) the tolerance with which predictions are to be generated. This move results in defining computational load relative to a particular tolerance in the predictions that the model or representation generates.
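A minimal sketch of (M1) and (M2), using the iterative square-root algorithm mentioned in (P2): computational load is reported as the number of iterations needed for a given input, relative to an agreed tolerance. The example is illustrative only and is not drawn from the original paper.

```python
def sqrt_newton(x, tolerance=1e-9):
    """Iteratively approximate sqrt(x), returning the estimate and the
    number of iterations needed to land within the agreed tolerance."""
    estimate = x if x > 1 else 1.0
    operations = 0
    while abs(estimate * estimate - x) > tolerance:
        estimate = 0.5 * (estimate + x / estimate)
        operations += 1
    return estimate, operations

for tol in (1e-3, 1e-6, 1e-12):
    root, ops = sqrt_newton(2.0, tol)
    print(f"tolerance {tol:.0e}: sqrt(2) ~ {root:.12f} after {ops} iterations")
```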

Qualitative complexity classes: The simple, fathomable, unfathomable, tractable, intractable, complex, complicated and impossible defined

We are now in a position to give some objective (i.e., intersubjectively agreeable) content to various subjectively suggestive or evocative ways of describing complex phenomena. I will show that common-sense approaches to the description of complex phenomena are rather sharp when it comes to making distinctions among different informational and computational complexity differences.

Distinctions in informational space: Fathomable and unfathomable phenomena

A laboratory experiment studying the results of a two-player competitive bidding situation is a fathomable phenomenon, if we stick to some basic assumptions that the subjects will follow basic rules of cooperative behavior relative to the experimenter, and of incentive-driven behavior relative to one another. We can create a representation of the situation that is workable: storable in a reasonably-sized computational device. The same experiment, when conceived as an open-ended situation in which the turmoils and torments of each of the subjects matter to the outcome, along with minute differences in their environmental conditions, upbringing or neuropsychological characteristics, becomes unfathomable: its representation exceeds not only that of an average observer, but can also easily overwhelm the working memory of even very large computational devices. Unfathomability can also result from too little information, as makers of movies in the thriller genre have discovered. A sliced-off human ear sitting on a lawn on a peaceful summer day (as in Blue Velvet) is unfathomable in the sense that too little context-fixing information is given for one to adduce a plausible explanation of what happened (or, a plausible reconstructive simulation of the events that resulted in this state of affairs). Thus, (1) 'the quick brown fox jumped over the lazy dog' is fathomable from (2) 'th qck brn fx jmpd ovr lzy dg' or even from (3) 't qk br fx jd or lz dg', but not from (4) 't q b fx jd or lz dg'. Compression below the informational depth of an object can also lead to unfathomability, in the same way in which informational overload can.

Distinctions in computational space: Tractable, intractable and impossible

Along the computational dimension of complexity, we can distinguish between three different classes of difficulty. Tractable phenomena are those whose simulation (starting from a valid model) is computationally simple. We can predict rather easily the impact velocity of a coin released through the air from a known height, to an acceptable level of accuracy, even if we ignore air resistance and start from the constitutive equations for kinetic and potential energy. Similarly, we can efficiently predict the strategic choices of a firm if we know the subjective probabilities and values attached to various outcomes by its strategic decision makers, and we start from a rational choice model of their behavior. It is, on the other hand, computationally much harder to predict with tolerable accuracy the direction and velocity of the flow of a tidal wave run aground, starting from an initial space-time distribution of momentum (the product of mass and velocity), knowledge of the Navier-Stokes equations and a profile of the shore. It is similarly computationally difficult to predict the strategic choices of an organization whose output and pricing choices beget rationally targeted reaction functions from its competitors, starting from a competitive game model of interactive decision making and knowledge of the demand curve in their market.
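For the tractable coin example, the entire 'simulation' reduces to one line of algebra, v = sqrt(2gh), obtained by equating potential and kinetic energy (air resistance ignored); the snippet below is only a worked illustration of that point.

```python
import math

def impact_speed(height_m, g=9.81):
    """Speed at impact from energy conservation: m*g*h = 0.5*m*v**2."""
    return math.sqrt(2.0 * g * height_m)

print(f"{impact_speed(10.0):.2f} m/s for a 10 m drop (air resistance ignored)")
```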

Computation theorists (see, for example, Cormen, et al., 1993) distinguish between computationally easy (tractable) and difficult (intractable) problems by examining the dependence between the number of independent variables to the problem and the number of operations required to solve the problem. They call tractable those problems requiring a number of operations that is at most a polynomial function of the number of independent or input variables (P-hard problems), and intractable those problems requiring a number of operations that is a higher-than-any-polynomial function of the number of independent or input variables (NP-hard problems). This demarcation point provides a qualitative marker for computation-induced complexity: we might expect, as scholars of organizational phenomena, different organizational behaviors in response to interaction with P- and NP-complex phenomena, as has indeed been pointed out (Moldoveanu & Bauer, 2003b).
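The qualitative difference between the two regimes can be seen by tabulating operation counts for a polynomially growing workload (comparing all pairs of n items) against an exponentially growing one (enumerating all subsets of n items). This is only an illustration of growth rates, not a formal separation of complexity classes.

```python
def pairwise_operations(n):
    """A polynomial-style workload: compare every pair of n items (~n**2 operations)."""
    return n * (n - 1) // 2

def subset_operations(n):
    """A brute-force exponential workload: examine all 2**n subsets of n items."""
    return 2 ** n

for n in (10, 20, 30, 40):
    print(f"n={n:2d}  pairwise {pairwise_operations(n):>8d}  subsets {subset_operations(n):>15d}")
```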

Of course, not all problems are soluble, and not all phenomena are simulateable or representable on a finite-state computational device. Impossible phenomena are precisely those that cannot be so simulated, or, more precisely, those whose simulation gives rise to a provably impossible problem. The problem of deciding whether 'I am telling a lie' is true or false, for instance, is provably impossible to solve; so is the problem of predicting, to an arbitrary accuracy and at an arbitrarily distant point in time, the position and velocity of the end-point of a double pendulum described by a second-order nonlinear equation exhibiting chaotic behavior, starting from a finite-precision characterization of the initial conditions (displacement, velocity) of the different components of the pendulum.

Distinctions based on interactions between the informational and computational spaces: Simple, complicated and complex

I have, thus far, introduced qualitative distinctions among different kinds of complex phenomena, which I have based on natural or intuitive quantizations of the informational and computational components of complexity. Now, I shall introduce qualitative distinctions in complexity regimes that arise from interactions of the informational and computational dimensions of complexity. We intuitively call complicated those phenomena whose representations are informationally shallow (or simple) but computationally difficult (though not impossible). The Great Wall of China or the Egyptian pyramids, for instance, are made up of simple building blocks (stone slabs) that are disposed in intricate patterns. One way in which we can understand what it is to understand these structures is to imagine the task of having to reconstruct them using virtual stone slabs in the memory of a large digital device, and to examine the difficulties of this process of reconstruction. In both cases, simple elementary building blocks (slabs and simple patterns of slabs) are iteratively concatenated and fit together to create the larger whole. The process can be represented easily enough by a skilled programmer by a series of nested loops that all iterate on combinations of the elementary patterns. Thus, the digital program that reconstructs the Great Wall of China or the Pyramids of Egypt in the memory of a digital computer does not take up a large amount of memory (and certainly far less memory than a straightforward listing of all of the features in these structures as they appear to the observer), but it is computationally very intensive (the nested loops, while running, perform a great number of operations). In the organizational realm, complicated phenomena may be found to abound in highly routinized environments (such as assembly and production lines) where the overall plans are informationally highly compressed but drive a high computational load.

I propose calling complex those phenomena and structures whose representations are informationally deep but computationally light. Consider an anthill exhibiting no correlations among the various observable features that characterize it. Using the method of the previous example, consider the process of reconstructing the anthill in the memory of a digital device. In the absence of correlations that can be exploited to reduce the information required to represent the anthill, the only way to achieve an accurate representation thereof is to store it as a three-dimensional image (a hologram, for instance). The process of representing it is computationally simple enough (it just consists of listing each voxel, a three-dimensional pixel), but informationally it is quite involved, as it entails storing the entire structure. Complex phenomena in the organizational realm may be found wherever the intelligibility of overall behavioral patterns is only very slight, as it is in securities trading and complex negotiations within and between executive teams.
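The complicated/complex contrast can be caricatured in a few lines: a short nested-loop program that emits a very large regular structure (informationally shallow, computationally heavy) versus a literal listing whose description is as long as the object itself (informationally deep, computationally light). The 'wall' and 'anthill' names below are purely illustrative.

```python
# "Complicated": an informationally shallow but computationally heavy description.
def build_wall(rows, cols):
    """A few lines of nested loops emit a very large, highly regular structure."""
    return [["slab" for _ in range(cols)] for _ in range(rows)]

# "Complex": an informationally deep but computationally light description.
def store_anthill(voxels):
    """No pattern to exploit: the description is simply the full voxel listing."""
    return list(voxels)

wall = build_wall(1000, 1000)                               # tiny program text, a million operations
anthill = store_anthill([(0, 1, 0), (2, 5, 1), (3, 3, 7)])  # description length ~ object size
print(len(wall) * len(wall[0]), len(anthill))
```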

By analogy, I propose calling simple those phenomena whose representations are computationally light and informationally shallow. These phenomena (such as frictionless pulleys and springs and point masses sliding down frictionless inclined planes in physics, choice and learning phenomena in low-stakes environments in economics and psychology, mimetic transfer of knowledge and routines in sociology) are usually building blocks for understanding other, more complicated phenomena (collections of pulleys making up a hoist, the suspension system of a car, market interactions, organizational patterns of knowledge diffusion). They often constitute the paradigm thought experiments around which disciplines (i.e., attempts to represent the world in words and equations) are founded (Kuhn, 1990). We will interact with such simple phenomena in greater detail in the subsequent sections, which aim to show how our measure of complexity cuts across various ways of looking at organizations and modeling their behavior.


The new conceptualization of complexity helps us see our way through any organizational model to the complexity of the underlying phenomenon

The benefit of this new representation of complexity does not lie purely in the fact that it can make precise lay intuitions about various terms

that have been (loosely) used to describe the user's predicament when faced with a complex phenomenon (as shown above), but also in the fact that it can provide a model-invariant approach to the representation and measurement of the complexity of organizational phenomena. 'Model-invariant' means that the representation of complexity can be used in conjunction with any model of organizational phenomena that is amenable to algorithmic representation (a weak condition, satisfied by all models that are currently in use in organization science, studies and theory). To substantiate this claim, I will now analyze the models of organizational phenomena that have come to dominate the literature during the past 20 years, and show how the complexity measure that I have developed here can be used to quantify the complexity of the paradigmatic phenomena that these models were meant to explain.

It is important to understand what 'quantifying the complexity of a phenomenon' is supposed to signify here. As shown above, 'the complexity of a phenomenon' is an ill-defined concept unless we make reference to a particular (intersubjectively tested or testable) model of that phenomenon, which we will agree to provide the basis for a replication of that phenomenon as a digital object (i.e., to provide a simulation of that phenomenon). The phenomenon enters the process of complexity quantification via the model that has been chosen for its representation. Hence, the subjective analytical and perceptual mindset of the observer of the phenomenon is incorporated into the complexity measure of the phenomenon, and the intersubjective (i.e., objective) features of the phenomenon are taken into consideration via the requirement that the model used as the basis of the complexity measurements be intersubjectively agreeable (i.e., that two observers using the model to represent or predict a phenomenon can reach agreement on definitions of terms, the relationship of raw sense data to observation statements, and so forth). The models that will be analyzed below are already, in virtue of their entrenchment in the field of organizational research, well established in the inter-subjective sense: they have been used as coordinative devices by researchers for many years, and have generated successful research programmes (i.e., research programmes well-represented in the literature). Thus, we are justified in studying the complexity of phenomena that are paradigmatic for the use of these models via studying the (informational and computational) complexity of the models themselves, and remain confident that we are not just engaging in the measurement or quantification of pure cognitive structures. Moreover, the complexity measures that will emerge are intersubjectively agreeable (i.e., objective) in spite of the fact that the inputs to the process of producing them have a subjective component.

Organizations as systems of rules and rule-based interactions

Recent efforts at modeling organizations have explicitly recognized them as systems of rules and rule-based interactions among multiple agents who follow (locally) the specified rules. The modeling approach to organizations as rule-based systems comprises three steps: a. the specification of a plausible set of micro-rules governing interactions among different agents; b. the specification of a macro-level phenomenon that stands in need of an explanation that can be traced to micro-local phenomena; and c. the use of the micro-local rules, together with initial and boundary conditions, to produce simulations of the macroscopic pattern that emerges. Simple local rules (such as local rules of deference, cooperation, competition and discourse) can give rise to complex macroscopic patterns of behavior, which may or may not be deterministic (in the sense that they vary with the nature of the micro-local rules but do not change as a function of changes in initial and boundary conditions). A simple micro-local rule set that is plausible on introspective and empirical grounds, such as Grice's (1975) cooperative logic of communications (which requires agents to interpret each other's utterances as being both informative and relevant to the subject of the conversation, i.e., cooperative), can, for instance, lead to organizational patterns of herd behavior in which everyone follows the example of a group of early movers without challenging their assumptions.

The rule-based approach to modeling organizational phenomena is congenial to the computational language introduced in this paper, and lends itself to an easy representation in complexity space. A rule is a simple semantic-syntactic structure of the type 'if A, then B', 'if not A, then not B', 'if A, then B, except for the conditions under which C occurs', or 'if A, then possibly B'. Agents, acting locally, ascertain the state of the world (i.e., A) and take action that is deterministically specified by the rule that is deemed applicable ('if A, then B', for instance). In so doing, they instantiate a new state of the world (C), to which other agents react using the appropriate set of rules. (I shall leave aside, for the purpose of this discussion, the very important questions of ambiguous rules, conflicts among rules and rules about the use of rules, but they are discussed in Moldoveanu & Singh (2003).) Sets of agents interacting on the basis of micro-local rules (statistical or deterministic) can be represented as cellular automata (Wolfram, 2002), with agents represented by nodes (each completely described by a set of elementary states that change as a function of the rules and the states of other agents) and a set of rules of interaction (denumerable, finite, and either statistical or deterministic). This clearly computational (but quite general, see Wolfram, 2002) interpretation of organizations-as-rule-systems is easily amenable to an application of the complexity measures that I have introduced above. First, the informational depth of a phenomenon explained by a valid rule-based model is the minimum description length of a. the agents; b. the micro-local rules; and c. the initial and boundary conditions required to suitably simulate that phenomenon. The computational load of such a phenomenon is the relationship between the number of input variables (agents, agent states, rules, initial conditions, boundary conditions) that the model requires and the number of operations needed to produce a successful simulation (i.e., a successful replication of the macroscopic pattern that stands in need of explanation). Thus, when seen through the lens of rule-based systems of interactions, the measure of organizational phenomena in complexity space is easily taken - a fortunate by-product of the universality of cellular automata as models of rule-based interacting systems of micro-agents. (Note that the applicability of the complexity measure to phenomena seen through a rule-based interacting system lens depends sensitively on the universality of the cellular automata instantiation of rule-based systems.)
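A minimal sketch of such a rule-based model is a one-dimensional cellular automaton: the informational depth of the model is roughly the description length of the rule table plus the initial condition, while the computational load scales with the number of agents times the number of time steps. The rule number and sizes below are arbitrary illustrations.

```python
def run_automaton(rule_number, initial_state, steps):
    """Evolve a one-dimensional binary cellular automaton: each agent updates
    from its own state and its two neighbours' states via an 8-entry rule table."""
    rule = [(rule_number >> i) & 1 for i in range(8)]
    state = list(initial_state)
    history = [state]
    for _ in range(steps):
        n = len(state)
        state = [rule[(state[(i - 1) % n] << 2) | (state[i] << 1) | state[(i + 1) % n]]
                 for i in range(n)]
        history.append(state)
    return history

# Informational depth of the model ~ rule number + initial condition;
# computational load ~ width * steps applications of the local rule.
history = run_automaton(110, [0] * 20 + [1] + [0] * 20, 15)
for row in history:
    print("".join(".#"[cell] for cell in row))
```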

The complexity measures that I have introduced can also be used to ask new questions (to the end of garnering new insights and exploring - or generating - new phenomena) of rule-based models of organizational phenomena, such as:

1. How does the informational depth of micro-rules affect the computational load of the macro-phenomena that depend causally on them? Is there a systematic relationship between the complexity of micro-rules and the complexity of macro-patterns?

Answering such questions could lead to the articulation of a new, intelligent craft of organizational rule design.

2. How does macroscopic complexity affect the design of micro-local rule sets? What are the conditions on the rationality of individual rule designers that would be required for them to purposefully alter macroscopic patterns through the alteration of micro-local rules?

Answering these questions could lead to a new set of training and simulation tools for organizational rule design and rule designers, and would also point to the bounds and limitations of the engineering approach to rule system design.

Organizations as spatio-temporally stable behavioral patterns: routines and value-linked activity chains

Organizations can also be modeled as systems of identifiable routines or activity sets, according to a dominant tradition in organizational research that dates back to at least the seminal work of Nelson & Winter (1982). A routine is a stable, finite, repeated behavioral pattern involving one or more individuals within the organization. It may or may not be conceptualized as a routine (i.e., it may or may not have been made explicit as a routine by the followers of the routine). Being finite and repeated, routines are easily modeled either as algorithms or as the process by which algorithms run on a computational (hardware) substrate. Because an algorithm prescribes a sequence of causally linked steps or elementary tasks, wherein the output of one step or task is the input to the next step or task, the language of algorithms may in fact supply a more precise definition of what a routine is: it is a behavioral pattern that is susceptible to an algorithmic representation (Moldoveanu & Bauer, 2004). For example, an organizational routine for performing due diligence on a new supplier or customer might include: a. getting names of references; b. checking those references; c. tracking the results of the evidence-gathering process; d. computing a weighted decision metric that incorporates the evidence in question; and e. making a go/no-go decision regarding that particular supplier or customer. The steps are linked (the output of one is the input to the other) and the process is easily teachable and repeatable.
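Rendered as an algorithm, such a routine might look like the following sketch, in which every argument and helper name is a hypothetical placeholder rather than a real interface; the point is only that the output of each step is the input to the next.

```python
def due_diligence(candidate, get_references, check_reference, weights, threshold):
    """Hypothetical rendering of the due-diligence routine as a linked sequence of steps."""
    names = get_references(candidate)                          # (a) get names of references
    reports = {name: check_reference(name) for name in names}  # (b) check those references
    evidence_log = list(reports.items())                       # (c) track the evidence-gathering results
    score = sum(weights.get(name, 1.0) * report                # (d) weighted decision metric
                for name, report in evidence_log)
    return score >= threshold                                  # (e) go / no-go decision

# Illustrative call with stand-in callables and weights.
print(due_diligence("Acme Ltd.",
                    get_references=lambda c: ["ref_a", "ref_b"],
                    check_reference=lambda name: 0.8,
                    weights={"ref_a": 2.0},
                    threshold=2.0))
```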

Understanding routines as algorithms (or as the running of algorithms) allows us to easily apply our complexity measures to routine-based models of organization. Specifically, we map the number of linked steps involved in the algorithmic representation of a routine to the computational load of the routine. The informational depth of a routine is given by the size of the blueprint or representation of the routine qua algorithm. These modeling moves together allow us to investigate the complexity of organizational phenomena through the lens provided by routine-based models thereof. Routine sets may be more or less complex in the informational sense as a function of the size of the memory required to store the algorithms that represent them. They may be more or less complex in the computational sense as a function of the number of steps that they entail.

Given this approach to the complexity of routines, it becomes possible to examine the important question of designing effective routines in the space spanned by the informational and computational dimensions of the resulting phenomena. Here, the language of algorithm design and computational complexity theory proves to be very helpful to the modeler. For example, algorithms may be defined recursively to take advantage of the great sequential speeds of computational devices. A recursive algorithm is one that takes, in successive iterations, its own output at a previous step as its input at the next step, converging, with each step, towards the required answer or a close-enough approximation to the answer. Examples of recursive algorithms include those used to approximate transcendental numbers such as π or e, which produce additional decimals with each successive iteration, and can be re-iterated ad infinitum to produce arbitrarily close approximations to the exact value of the variable in question. Defining algorithms recursively has the advantage (in a machine that is low on memory but high on sequential processing speed) that costly storage (and memory access) is replaced with mechanical ('mindless'), raw (and cheap) computation.
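One standard instance of such a recursion is the Gauss-Legendre approximation of π, offered here purely as an illustration of output-feeds-input iteration (the text above names no specific algorithm): only a handful of numbers are ever stored, and each pass refines them.

```python
import math

def approximate_pi(iterations=3):
    """Gauss-Legendre iteration: the output of each step (a, b, t, p) is the
    input to the next, so only four numbers ever need to be stored."""
    a, b, t, p = 1.0, 1.0 / math.sqrt(2.0), 0.25, 1.0
    for _ in range(iterations):
        a_next = (a + b) / 2.0
        b, t, p = math.sqrt(a * b), t - p * (a - a_next) ** 2, 2.0 * p
        a = a_next
    return (a + b) ** 2 / (4.0 * t)

print(approximate_pi(3))   # already accurate to many decimal places after three passes
```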

Organizational routines may, following the analogy of algorithms, be usefully classified as recursive or non-recursive, depending on their structure. Some organizational tasks (such as the planning of inventories of products, assemblies, sub-assemblies and components) may be seen as successive (recursive) applications of a computationally intelligible kernel (matrix multiplication or matrix inversion; Moldoveanu & Bauer, 2004) at successively 'finer' resolutions (or levels of analysis), in such a way that high-level decisions (regarding product inventory, say) become inputs to applications of the algorithm at lower levels of analysis (regarding assemblies or components). Other organizational tasks (such as reasoning by abduction in order to provide an inference to the best explanation of a particular organizational or environmental phenomenon) are not susceptible prima facie to a recursive algorithmic interpretation and entail a far greater informational depth.

Mapping routines to algorithms allows us both to consider the evolution of routines (from a structural or teleological perspective) and to incorporate the phenomenological aspect of the complexity of the resulting phenomena into the routine-based analysis of organizational phenomena. In particular, we can produce a canonical representation of different routine-based phenomena in the language of algorithms, whose informational depth and computational load can be quantified, and ask:

1. How do routine sets adapt to their own complexity? Are there canonical self-adaptation patterns of routine sets to increases in computational load or informational depth?

2. How should routine designers trade off between informational depth and computational load in conditions characterized by different configurations of (informational-computational) bounds to rationality? How do they make these trade-offs?

3. Are there general laws for the evolution of complexity? How does the complexity of routine sets evolve over time?

Organizations as information processing and symbol manipulation systems

Perhaps the most congenial of the classical traditions in the study of organizations to the computational interpretation being put forth in this paper is that originating with the Carnegie School (March & Simon, 1958; Cyert & March, 1963; Simon, 1962). In that tradition, organizations are considered as information processing systems. Some approaches stress the rule-based nature of the information processing task (Cyert & March, 1963), but do not ignore the teleological components thereof. Others (Simon, 1962) stress the teleological component, without letting go of the fundamentally rule-based nature of systematic symbolic manipulation. What brings these approaches together, however, is a commitment to a view of organizations as purposive (but boundedly far-sighted) symbol-manipulation devices, relying on hardware (human and human-created) and a syntax (grammatical syntax, first-order logic) for 'solving problems' whose articulation is generally considered exogenous to the organization: given, rather than constructed within the organization.

There is a small (but critical) step involved in the passage from a view of organizations-as-information-processing-structures to an algorithmic description of the phenomena that this view makes it possible for us to represent. This step has to do with increasing the precision with which we represent the ways in which organizations process information, in particular, with representing information-processing functions as algorithms running on the physical substrate provided by the organization itself (a view that is strongly connected to the strong-AI view of the mind that fuelled, at a metaphysical level, the cognitive revolution in psychology, a by-product of the Carnegie tradition). This step is only possible once we achieve a phenomenologically and teleologically reasonable partitioning of the problems that organizations can be said to solve (or attempt to solve) into structurally reliable problem classes. It is precisely such a partitioning that is provided by the science of algorithm design and analysis (Cormen, et al., 1993), which, as we saw above, partitions problems into tractable, intractable and impossible categories on the basis of structural isomorphisms among solution algorithms. What the organization does, qua processor of information, can now be parsed in the language of tractability analysis



in order to understand: a. the optimal structure of information processing tasks; b. the actual structure of information processing tasks; and c. the structural, teleological and phenomenological reasons for the divergence between the ideal and the actual.

That we can now do such an analysis (starting from a taxonomy of problems and algorithms used to address them) is in no small measure due to the availability of complexity measures for algorithms (and implicitly for the problems these algorithms were meant to resolve). Informational and computational complexity bound from above the adaptation potential of the organization to new and unforeseen conditions, while at the same time providing lower bounds for the minimum amount of information processing required for the organization to survive qua organization. In this two-dimensional space it is possible to define a structural performance zone of the organization seen as an information processing device: it must function at a minimum level of complexity (which can be structurally specified) in order to survive, but cannot surpass a certain level of complexity in order to adapt (which can also be structurally specified and validated on teleological and phenomenological grounds). Adaptation, thus understood, becomes adaptation not only to an exogenous phenomenon, but also to the internal complexity that efforts to adapt to that phenomenon generate. This move makes complexity (informational and computational) a variable whose roles as caused and causer must be simultaneously considered.

Organizations as systems of interpretation and sense-making

It may seem difficult to reconcile the starkly algorithmic view of complexity that I have put forth here with a view of organizations as producers and experiencers of the classical entities of meaning: narrative, symbol, sense, interpretation and meaning itself (Weick, 1995). This is because it does not seem an easy (or even possible) task to map narrative into algorithm without losing the essential quality of either narrative or algorithm in the process. It is, however, possible to measure (in complexity space) that which can be represented in algorithmic form, not only about narrative itself, but also, perhaps more importantly, about the processes by which narratives are created, articulated, elaborated, validated and forgotten. These processes (describing the ways in which organizations interact with the narratives that they produce and how they 'live' these narratives) are often more amenable to algorithmic interpretation than are narratives themselves, and are equally important to the evolution of the organizations themselves.

To see how narrative and the classical structures of meaning can be mapped into the algorithmic space that allows us to measure the complexity of the resulting phenomena, let us break down the narrative production function into three steps. The first is an ontological one: an ontology is created, and comes to inhabit the subjects of the narrative. The organization may be said to be populated by people, by embodied emotions, by transactions, by designs and technologies, and so forth. These are the entities that do causal work, in terms of which other entities are described. Surely, the process by which an ontology is created cannot (and should not) be algorithmically represented, but this is not required for the algorithmic representation of this initial ontological step. Every algorithm begins with a number of givens (which factor into its informational depth), which are 'undefined' either because they have been implicitly defined or because there is nothing to be alarmed about in leaving them undefined. That which does matter to the algorithmic representation of this first, narrative-defining step is precisely the process of mapping of ontological primitives to other primitives, over which most narrative-designers (like any other theorist) often fret, and in particular: how deep is it? How many times does the question 'what is X?' have to be answered in the narrative? For instance, does (one particularly reductive kind of) narrative require that organizations be analyzed in terms of individuals, individuals in terms of beliefs and desires, beliefs and desires in terms of neurophysiological states, neurophysiological states in terms of electrochemical states, and so forth? Or, rather, does the analysis stop with beliefs and desires? The articulation of the ontological first step in the production of narrative can, it turns out, be analyzed in terms of the complexity metrics I have introduced here. At the very least, we can distinguish between informationally deep ontologies and informationally shallow ones, with implications, as we shall see, for the computational complexity of the narrative-bearing phenomenon that we are studying.
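As an illustration (the sketch and its toy ontology are mine, not the paper's), ontological depth can be read off as the number of reduction steps a narrative's ontology sanctions before the question 'what is X?' stops being answered:

    def ontology_depth(ontology, entity):
        """Count the reduction steps from `entity` down to an undefined primitive."""
        depth = 0
        while entity in ontology:          # stop when the entity is no longer analyzed further
            entity = ontology[entity]
            depth += 1
        return depth

    shallow = {"organization": "individuals",
               "individuals": "beliefs_and_desires"}          # analysis stops here
    deep = dict(shallow, **{"beliefs_and_desires": "neurophysiological_states",
                            "neurophysiological_states": "electrochemical_states"})

    print(ontology_depth(shallow, "organization"))   # 2: informationally shallow
    print(ontology_depth(deep, "organization"))      # 4: informationally deeper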

The second important step is that of development or proliferation of a narrative: the process by which, through the purposive use of language, the filtering of relevant from irrelevant information and the use of the relevant information to give words meaning, narratives take over a particular component of organizational life. It is the case that the subjective experience of living a story cannot be precisely algorithmically replicated, but each of the steps of this experience, once we agree on what, precisely, they are, can be given an algorithmic interpretation whose complexity we can measure (in both the informational and computational sense). The process of validation of a story, for instance, can be easily simulated using a memory, a feedback filter, a comparison and a decision based on a threshold comparison (regardless of whether the narrative validator is a justificationist or a falsificationist). Of course, validation processes may differ in computational complexity according to the design of the filter used to select the data that purports to 'make true' the narrative. Abductive filters (based on inference to the best explanation) will be computationally far more complex than inductive filters (based on straight extrapolation of a pattern; Bylander et al., 1991), just as deductive processes of theory testing will be more computationally heavy than will inductive processes that serve the same purpose.
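A minimal sketch of such a validator, assuming an invented toy narrative and threshold (neither appears in the paper), makes the memory / filter / threshold-comparison structure concrete:

    def validate_narrative(predict, observations, relevant, threshold=0.8):
        """predict maps a situation to a predicted outcome; observations are (situation, outcome) pairs."""
        memory = [(s, o) for (s, o) in observations if relevant(s)]   # filter the remembered data
        if not memory:
            return False
        hits = sum(1 for (s, o) in memory if predict(s) == o)         # comparison step
        return hits / len(memory) >= threshold                        # threshold decision

    # Toy narrative: "whenever we cut price, volume goes up."
    predict = lambda situation: "volume_up" if situation == "price_cut" else "volume_flat"
    observations = [("price_cut", "volume_up"), ("price_cut", "volume_up"),
                    ("price_cut", "volume_flat"), ("promo", "volume_up")]
    print(validate_narrative(predict, observations, relevant=lambda s: s == "price_cut"))  # False: 2/3 hits, below 0.8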

Thus, we can distinguish usefully between simple and complex narratives (and even among simple, complicated, tractable, intractable and impossible ones) and examine the effects of the complexity of these narratives on the evolution of the organization (and on the narratives themselves), as long as we are willing to make a sacrifice in the phenomenological realm and allow that not all components of a structure-of-meaning can be usefully represented in algorithmic form, in exchange for being able to cut more deeply and narrowly into a set of variables that influence the evolution and dynamics of such structures in organizational life.

Organizations as nexi of contracts and as competitive and cooperative coordinative equilibria among rational or boundedly rational agents

Not surprisingly, approaches to organizational phenomena based on economic reasoning (Tirole, 1988) are congenial to an algorithmic interpretation and therefore to the measures of complexity introduced in this paper. One line of modeling considers firms as nexi of contracts between principals (shareholders) and agents (managers and employees) (Tirole, 1988). A contract is an (implicit or explicit) set of contingent agreements that aims to credibly specify incentives to agents in a way that aligns their interests with those of the principals. It can be written up as a set of contingent claims by an agent on the cash flows and residual value of the firm (i.e., as a set of 'if... then' or 'iff... then' statements), or, more precisely, as an algorithm that can be used to compute the (expected value of) the agent's payoff as a function of changes in the value of the asset that he or she has signed up to manage. In the agency-theoretic tradition, the behavior of the (self-interested, monetary expected value-maximizing) agent is understood as a quasi-deterministic response to the contract that he or she has signed up for. Thus, the contract can be understood not only as an algorithm for prediction, by the agent, of his or her payoff as a function of the value of the firm in time, but also as a predictive tool for understanding the behavior of the agent tout court.
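A minimal sketch of a contract read as an algorithm, with made-up salary, bonus and equity terms (none of which come from the paper), looks as follows:

    def agent_payoff(firm_value, base_salary=100_000, bonus_threshold=10_000_000,
                     bonus_rate=0.01, equity_share=0.001):
        """Contingent-claims contract: salary, a bonus conditional on firm value, and a residual claim."""
        payoff = base_salary
        if firm_value > bonus_threshold:                      # "if the firm exceeds the threshold, then..."
            payoff += bonus_rate * (firm_value - bonus_threshold)
        payoff += equity_share * firm_value                   # claim on residual value
        return payoff

    def expected_payoff(scenarios):
        """Expected value over (probability, firm_value) scenarios: the agent's predictive use of the contract."""
        return sum(p * agent_payoff(v) for p, v in scenarios)

    print(agent_payoff(12_000_000))                           # payoff in one state of the world
    print(expected_payoff([(0.5, 8_000_000), (0.5, 12_000_000)]))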

The (informational and computational) complexity components of principal-agent agreements can be understood, prima facie, as the informational depth and computational load of the contractual schemata that form the governance blueprint of the organization: the de facto rules of the organizational game, which become wired into the behavior of the contractants. Such an interpretation is straightforward, and can be expected to yield clean measures of contractual complexity, and, implicitly, of the complexity of organizational phenomena understood through the agency-theoretic lens. On deeper study, however, it is clear that the complexity of a contract is not an immediate and transparent index of the complexity of the organizational phenomena that are played out within the confines of the contract, as the problems of performance measurement, specification of observable states of the world, and gaming by both parties of the contractual schema have to also be taken into consideration. Thus, measuring the complexity of contracts (conceptualized qua algorithms) provides us merely with a lower bound of the complexity of the phenomena that are understood through a contractual lens.

A more complete reconstruction of self-interested behavior in organizations, which is no less amenable to an algorithmic interpretation than is the agency-theoretic approach, is that based on game-theoretic methods. In such an approach, organizational phenomena are understood as instantiations of competitive or cooperative equilibria among members of the organization, each trying to maximize his or her welfare. What is required for the specification of an intra-organizational equilibrium (competitive or cooperative) is a representation of the set of participants (players), their payoffs in all possible states of the world, their strategies and their beliefs (or conjectures), including their beliefs about the beliefs of the other participants. These entities together can be considered as inputs to an algorithm for the computation of the equilibrium set of strategies, through whose lens organizational phenomena and individual actions can now be interpreted. It is the complexity of this algorithm (backward induction, for instance) that becomes the de facto complexity measure of the organizational phenomenon that is represented through the game-theoretic lens.
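As an illustration of the kind of algorithm meant here, the sketch below (a toy entry game of my own, not an example from the paper) computes an equilibrium by backward induction over a small game tree:

    def backward_induction(node):
        """Return (payoffs, strategy_profile) for a finite two-player game tree."""
        if isinstance(node, tuple) and not isinstance(node[1], dict):
            return node, {}                                   # leaf: payoffs for (player 0, player 1)
        player, branches = node
        best_action, best_payoffs, profile = None, None, {}
        for action, subtree in branches.items():
            payoffs, sub_profile = backward_induction(subtree)
            profile.update(sub_profile)
            if best_payoffs is None or payoffs[player] > best_payoffs[player]:
                best_action, best_payoffs = action, payoffs   # the mover picks her best continuation
        profile[id(node)] = best_action
        return best_payoffs, profile

    # Entrant (player 0) moves first; the incumbent (player 1) responds to entry.
    game = (0, {"stay_out": (0, 4),
                "enter": (1, {"fight": (-1, -1), "accommodate": (2, 2)})})
    payoffs, _ = backward_induction(game)
    print(payoffs)   # (2, 2): the entrant enters and the incumbent accommodates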

Some work has already started on understanding game-theoretic models through a computational lens (Gilboa, 1989), and the basic idea is to consider individual players (participants) as computational devices attempting (but not always succeeding, depending on their informational and computational constraints) to solve for the equilibrium set of strategies and to compute the expected value of their payoff in the process. These approaches have focused predominantly on computational complexity, and have attempted to model the influence of bounded rationality on the resulting equilibria by iteratively reducing the computational capabilities of the players in the model (i.e., by placing upper bounds on the computational complexity of the problems they attempt to resolve or on the algorithms that they use). The complexity measures introduced here add texture to this computational modeling paradigm, by introducing a set of useful distinctions among different problem classes in the space of computational load (tractable, intractable, impossible) and by introducing an informational component to the complexity measures used to date (corresponding to the working memory of each individual player), which has not always been taken into consideration.

What does the new operationalization of complexity mean for how we carry out organization science and organizational intervention?

The complexity measures that I have articulated above make it possible to develop a research program that combines three ways of representing organizations (phenomenological, teleological, structural) that have until now generated separate (and largely incommensurable) ways of knowing and ways of inquiring about organization. I will examine in this final section the ways in which the new measure of complexity (and the implicit space in which this complexity measure lives) enables insights from structural, teleological and phenomenological studies of organizational and individual behavior to come together in a research programme that examines how complexity (as a dependent variable) emerges as a discernible and causally powerful property of organizational plans, routines and life-worlds, and how complexity (as an independent variable) shapes and influences organizational ways of planning, acting and being.

Contributions to the structural perspective

Structural perspectives on organizations (such as many of the ones analyzed above) conceptualize organizations in terms of causal models (deterministic or probabilistic). These models refer to organizations as systems of rules, routines, capabilities or value-linked activities. As I showed above, any such model, once phrased in terms of algorithms (whose convergence properties are under the control of the modeler), can be used to synthesize a complexity measure for the phenomenon that it is used to simulate. Complexity (informational and computational) emerges as a new kind of modeling variable, one that can now be precisely captured. It can be used within a structuralist perspective in two ways:

i. As a dependent variable, it is a property of the (modeled) phenomenon that depends sensitively on the algorithmic features (informational depth, computational load) of the model that is used to understand that phenomenon. In this sense, it is truly an emergent property, in two ways:

1. It emerges from the non-separable combination of the model and the phenomenon: it is a property of the process of modeling and validation, not merely of the model alone or of the phenomenon alone;

2. It emerges from the relationship between the observer/modeler and the phenomenon, in the sense that it is a function of the interaction of the observer and the phenomenon, not of the characteristics of the observer alone or of the phenomenon alone.

Thus, it is now possible to engage in structuralist modeling of organizational phenomena which can explicitly produce complexity measures of the phenomena in question (as seen through a particular structural lens). Such measures can then be used both in order to track variations in organizational complexity as a function of changes in organizational conditions (new variables, new relationships among these variables, new kinds of relationships among the variables, new relationships among the relationships) and to track variations in organizational complexity as a function of changes of the underlying structural models themselves (it may turn out that some modeling approaches lead to lower complexity measures than do others, and may for this very reason be preferred by both researchers and practitioners).

ii. As an independent variable, complexity (as operationalized above) can be used as a modeling variable itself: the complexity of an organizational phenomenon may figure as a causal variable in a structural model of that phenomenon. This maneuver leads us to be able to consider, in analytical fashion, a large class of reflexive, complexity-driven phenomena that have the property that their own complexity (an emergent feature) shapes their subsequent spatio-temporal dynamics. Such a move is valuable in that if, as many studies suggest (see Miller, 1993; Thompson, 1967; Moldoveanu & Bauer, 2004), organizations attempt to adapt to the complexity of the phenomena that they encounter, it is no less true that they try to adapt to the complexity of the phenomena that they engender and that they themselves are, in which case having a (sharp) complexity measure that one can plug into structural models as an independent variable makes it possible to examine:

1. organizational adaptation to self-generated complexity, by building temporally recursive models in which complexity at one time affects dynamics at subsequent periods of time (a minimal sketch of such a recursive model follows this list);

2. the evolution of complexity itself, by building behaviorally informed models of the complexity of various adaptations to complexity.
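The sketch below is entirely illustrative (the routines, the complexity proxy and the tolerance are my assumptions, not the paper's); it shows the kind of temporally recursive model meant in point 1, in which the complexity the organization generates at one period constrains what it can keep doing in the next:

    def complexity(routines):
        """Crude informational-depth proxy: length of the organization's routine descriptions."""
        return len(" ".join(routines))

    routines = ["ship product"]
    tolerance = 120                                   # hypothetical ceiling on self-generated complexity
    for t in range(6):
        routines.append(f"routine for exception {t} {t * 'x'}")   # adapting to the environment adds structure...
        while complexity(routines) > tolerance and len(routines) > 1:
            routines.pop(0)                           # ...which the organization must itself adapt to by shedding routines
        print(t, len(routines), complexity(routines))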

Contributions to the teleological perspective

The teleological perspective conceptualizes organizations as adaptive, cost-benefit-computing and optimizing structures that adopt and discard structures and structural models as a function of their contribution to an overall utility function. They have a purpose (telos), which may be to maximize profits, predictability or probability of survival (instantiated in an overall fitness function). The computational perspective on complexity and the algorithmic measures of complexity that I have introduced allow us to study - within teleological models of organizations - the effects of phenomenal complexity on the trade-offs that decision-takers make in their organizational design choices.

Even more importantly, we can deploy the apparatus developed by algorithm designers and computational complexity theorists to study the coping strategies that managers use to deal with organizational complexity. To understand how this analytic apparatus can be usefully deployed, it is essential to think of the organization as a large-scale universal computational device (such as a Universal Turing Machine, or UTM), of organizational plans and strategies as algorithms, and of organizational activities and routines as the processes by which those very same algorithms run on the UTM. Now, we can distinguish between several strategies that managers - qua organizational designers - might adopt in order to deal with complexity in both computational and informational space, drawing on strategies that algorithm designers use in order to deal with large-scale problem complexity.

i. Computational space (K-space) strategies: Structural and functional partitioning of intractable problems

When faced with computationally intractable problems, algorithm designers usually partition these problems using two generic partitioning strategies (Cormen, et al., 1993). They can split up a large-scale problem into several smaller-scale problems which can be tackled in parallel; or, they can separate out the process of generating solutions from the process of verifying these solutions. Either one of these partitionings can be accomplished in a more reversible (soft) or irreversible (hard) fashion: the problem itself may be split up into sub-problems that are tackled by the same large-scale computational architecture, suitably configured to solve each sub-problem optimally (functional partitioning), or the architecture used to carry out the computation may be previously split up into parallel sub-architectures that impose predefined, hard-wired limits on the kinds of sub-problems that they can be used to solve.

These distinctions can be used to make sense of the strategies for dealing with K-space complexity that the organizational designer can make use of. Consider the problem of system design, recently shown to be isomorphic to the intractable (NP-hard) knapsack problem. Because the problem is in the NP-hard class, the number of operations required to solve it will be a higher-than-any-polynomial (e.g., exponential) function of the number of system parameters that enter the design process as independent variables. Solving the entire system design problem without any partitioning (i.e., searching for the global optimum without the use of any simplification) may be infeasible from a cost perspective for the organization as a whole. Partitioning schemata work to partition the problem space into sub-problems whose joint complexity is far lower than the complexity of the system design problem taken as a whole.

Consider first how functional partitioning works for the designer of a system-designing organization. S/he can direct the organization as a whole to search through the possible subsets of design variables in order to achieve the optimal problem partitioning into sub-problems of low complexity, sequentially engage in solving these sub-problems, and then bring the solutions together to craft an approximation to the optimal system design. Alternatively, the designer of the systems-designing organization can direct the organization to randomly generate a large list of global solutions to the system design problem taken as a whole in the first phase of the optimization task, and then get together and sequentially test the validity of each putative solution qua solution to the system design problem. In the first case, optimality is traded off in favor of (deterministic and rapid) convergence (albeit to a sub-optimal, approximate answer). In the second case, certainty about convergence to the global optimum is traded off against overall system complexity.
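The sketch below (a toy knapsack instance of my own; the items, budget and trial count are invented) contrasts the two schemata just described - solve small sub-problems exactly and stitch the answers together, versus generate global candidates at random and verify them:

    import itertools, random

    items = [(10, 6), (7, 4), (6, 3), (4, 2), (3, 1), (2, 1)]   # (value, cost) design choices
    budget = 9                                                  # the design constraint

    def value(subset):
        v, c = sum(x[0] for x in subset), sum(x[1] for x in subset)
        return v if c <= budget else -1                         # infeasible designs score -1

    # 1. Functional partitioning: split the variables into two sub-problems,
    #    solve each small piece exactly, then stitch the answers together.
    def partitioned():
        best = []
        for half in (items[:3], items[3:]):
            remaining = budget - sum(x[1] for x in best)
            candidates = [s for r in range(len(half) + 1) for s in itertools.combinations(half, r)
                          if sum(x[1] for x in s) <= remaining]
            best += list(max(candidates, key=lambda s: sum(x[0] for x in s)))
        return value(best)

    # 2. Generate-then-verify: propose random global designs, keep the best feasible one.
    def generate_and_test(trials=50):
        random.seed(0)
        proposals = [random.sample(items, random.randint(1, len(items))) for _ in range(trials)]
        return max(value(p) for p in proposals)

    print(partitioned(), generate_and_test())   # approximate answers; neither is guaranteed optimal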

Such partitioning schemata can also be achieved structurally. The designer of the systems-designing organization can pre-structure the organization into sub-groups that are bounded in K-space as to the computational load of the problems they can take on. This partitioning can be hard-wired into the organization through formal and informal rule systems, contracts, and organizational fiat. Now, faced with the overall (NP-hard) system design problem, the organization will adapt by splitting it up spontaneously into sub-problems that are matched to the maximum computational load that each group can take on. Alternatively, the organizational designer can hard-wire the solution-generation / solution-verification distinction into the organizational structure by outsourcing the solution-generation step (to a consulting organization, to a market of entrepreneurial firms, to a large number of freelance producers) while maintaining the organization's role in providing solution validation, or by outsourcing the solution-validation step (to the consumer market, in the form of experimental products with low costs of failure) while maintaining its role as a quasi-random generator of new solution concepts.

ii. Information space (I-space) strategies

Of course, hard problems may also be hard in the informational sense: the working memory or relevant information required to solve them may exceed the storage (and access-to-storage) capacities of the problem solver. Once again, studying the strategies used by computational designers to deal with informational overload gives us a sense of what to look for as complexity coping strategies that organizational designers use in I-space. As might be expected, I-space strategies focus on informational reduction or compression, and on increasing the efficiency with which information dispersed throughout the organization is conveyed to the decision-makers to whom it is relevant. We will consider compression schemata first and access schemata second.

Compression Schemata. Lossy compression achieves informational depth reduction at the expense of deletion of certain potentially useful information or distortion of that information. Examples of lossy compression schemata are offered by model-based estimation of a large data set (such that the data is represented by a model and an error term) and by ex ante filtering of the data set with the aim of reducing its size. In contrast to lossy information compression, lossless compression achieves informational depth reduction without the distortion or deletion of potentially useful information, albeit at the cost of higher computational complexity of the compression encoder.
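A minimal sketch of the two schemata, using an invented data series (nothing in it comes from the paper), is the following:

    import zlib

    data = [100, 104, 97, 110, 108, 115, 111, 120]       # e.g., weekly demand signals

    # Lossy: keep only a two-parameter linear model of the series; residuals are discarded.
    n = len(data)
    slope = (data[-1] - data[0]) / (n - 1)
    lossy_summary = {"start": data[0], "slope": slope}    # informational depth: two numbers
    reconstruction = [data[0] + slope * i for i in range(n)]

    # Lossless: every observation is recoverable, but the encoder does more work.
    lossless = zlib.compress(",".join(map(str, data)).encode())

    print(lossy_summary, [round(x, 1) for x in reconstruction])
    print(len(lossless), "bytes, exactly recoverable:", zlib.decompress(lossless).decode())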

Consider, as an example of how these strategies might be deployed within an organization, the problem a top management team faces in performing due diligence on a new market opportunity for a product manufacturer (instantiated as the appearance of a new technology within the firm or a new niche within the market). There is a large amount of information readily available about competitors, their technologies, products, intellectual capital, marketing and distribution strategies, marginal and fixed costs, about customers and their preferences, about the organization's own capabilities and competitive advantages in different product areas, and about long-term technological trends and the effects of short-run demand-side and supply-side discontinuities in the product market. This information comes from different sources, both within the organization and outside of it. It has various levels of precision and credibility, and can be used to support a multiplicity of possible product development and release strategies. Let us examine how complexity management of the due diligence problem mimics I-space reduction strategies pursued by computational system and algorithm designers.

First, lossy compression mechanisms are applied to the data set in several ways: through the use of a priori models of competitive interaction and demand-side behavior that limit the range of data that one is willing to look at, through the specific ex ante formulation of a problem statement (or a set of hypotheses to be tested on the data) which further restricts the size of the information space one considers in making strategic decisions, and through the application of common-sense cognitive habits (such as data smoothing to account for missing points, extrapolation to generate predictions of future behavior on the basis of observations of past behavior, and inference to the best explanation to select the most promising groups of explanatory and predictive hypotheses from the set of hypotheses that are supported by the data). Lossless (or quasi-lossless) compression of the informational depth of the remaining relevant data set may then be performed by the development of more detailed models that elaborate and refine the most promising hypotheses and explanations in order to increase the precision and validity with which they simulate the observed data sequence. They amount to high-resolution elaborations of the (low-resolution) approaches that are used to synthesize the organization's basic business model or, perhaps more precisely, model of itself.

Access Schemata. The second core problem faced by the organizational designer in I-space is that of making relevant information efficiently available to the right decision agents. The designer of efficient networks worries about two fundamental problem classes: the problem of network design and the problem of flow design (Bertsekas, 1985). The first problem relates to the design of network topologies that make the flow of relevant information maximally efficient. The second problem relates to the design of prioritization and scheduling schemes for relevant information that maximize the reliability with which decision agents get relevant information on a timely basis.

The organizational designer manages network structure in I-space when s/he structures or tries to shape formal and informal communication links among individual members of the organization in ways that minimize the path length (or geodesic) from the transmitter of critical information to the intended receiver. Such wiring of an organization can be performed through either reversible (working groups) or irreversible (executive committees) organizational mechanisms. It can be achieved through either probabilistic (generating conditions for link formation) or deterministic (mandating link formation) means. The organizational designer manages information flow when he/she designs or attempts to influence the queuing structure of critical information flows, and specifically the prioritization of information flows to different users (as a function of the perceived informational needs of these users) and the scheduling of flows of various priorities to different users, as a function of the relative importance of the information to the user, the user to the organizational decision process, and the decision process to the overall welfare of the organization.
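As a sketch of the network-design half of this problem (the organization chart below is invented), the geodesic between an information source and its intended receiver can be computed by breadth-first search, before and after a cross-functional tie is added:

    from collections import deque

    def geodesic(graph, source, target):
        """Shortest path length between two members, by breadth-first search."""
        seen, frontier = {source}, deque([(source, 0)])
        while frontier:
            node, dist = frontier.popleft()
            if node == target:
                return dist
            for neighbor in graph.get(node, ()):
                if neighbor not in seen:
                    seen.add(neighbor)
                    frontier.append((neighbor, dist + 1))
        return float("inf")

    org = {"engineer": ["eng_lead"], "eng_lead": ["engineer", "cto"],
           "cto": ["eng_lead", "cmo"], "cmo": ["cto", "marketer"], "marketer": ["cmo"]}
    print(geodesic(org, "engineer", "marketer"))            # 4 hops through the executive bridge

    org["engineer"].append("marketer"); org["marketer"].append("engineer")   # working-group tie
    print(geodesic(org, "engineer", "marketer"))            # 1 hop after rewiring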

Let us examine the proliferation of I-space network and flow design strategies with reference to the executive team and their direct reports discussed earlier, in reference to the management of a due diligence process in I-space. First, network design. The team comprises members of independent functional specialist cliques (product development, marketing, finance, business development) that are likely to be internally densely wired (i.e., everyone talks to everyone else). The executive team serves as a bridge among functionalist cliques, and its efficiency as a piece of the informational network of the firm will depend on the reliability and timeliness of transmission of relevant information from the clique that originates relevant information to the clique that needs it. Cross-clique path lengths can be manipulated by setting up or dissolving cross-functional working groups and task groups. Within-clique coordination can be manipulated by changing the relative centrality of various agents within the clique. These effects may be achieved either through executive fiat and organizational rules, or through the differential encouragement of tie formation within various groups and subgroups within the organization.

Second, flow design. Once an informational network structure is wired within the organization, the organizational designer still faces the problem of assigning different flow regimes to different users of the network. Flow regimes vary both in the informational depth of a transmission and in its time-sequence priority relative to the time at which the information was generated. The assignment of individual-level structural roles within formal and informal groups within the organization can be seen as a way of shaping flow prioritization regimes as a function of the responsibility and authority of each decision agent. In contrast, the design of processes by which information is shared, deliberated, researched and validated can be seen as a way of shaping flow scheduling regimes as a function of the relative priority of the receiver and the nature of the information being conveyed.
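A minimal sketch of such a flow regime (the messages and weights are invented, and the priority rule is only one of many a designer could choose) uses an ordinary priority queue:

    import heapq

    def schedule(messages):
        """Yield messages highest-priority first; priority = authority weight * criticality."""
        queue = []
        for order, msg in enumerate(messages):
            priority = msg["authority"] * msg["criticality"]
            heapq.heappush(queue, (-priority, order, msg))    # max-priority via a negated key
        while queue:
            _, _, msg = heapq.heappop(queue)
            yield msg["text"]

    inbox = [{"text": "routine expense report", "authority": 1, "criticality": 1},
             {"text": "competitor launched rival product", "authority": 3, "criticality": 5},
             {"text": "plant safety incident", "authority": 3, "criticality": 4}]
    print(list(schedule(inbox)))   # competitor news, then the safety incident, then the report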

Contributions to the phenomenological perspective

The proposed approach to the measurement of complexity started out as an attempt to reconcile the subjective view of complexity (as difficulty) with various objective views of complexity, which conflate complexity with one structural property or another of an organizational phenomenon. The synthesis between the objective and subjective views was accomplished by making explicit the contribution that the observer's models, or cognitive schemata, used to understand a phenomenon make to the complexity of the phenomenon, by identifying the complexity of a phenomenon with the informational depth and computational load of the most predictively successful, intersubjectively validated model of that phenomenon. This move, I argue, makes the complexity measure put forth applicable through a broad range of models to the paradigmatic phenomena they explain.

But, it also opened a conceptual door into the incorporation of new subjective effects into the definition of complexity. In our informational and computational dimensions, 'complexity' is 'difficulty'. A phenomenon will be declared by an observer to be complex if the observer encounters a difficulty in either storing the information required to simulate that phenomenon or in performing the computations required to simulate it successfully. Such difficulties can be easily captured using the apparatus of algorithmic information theory (Chaitin, 1974) and computational complexity theory (Cormen, et al., 1993). Are they meaningful? And, do they allow us to make further progress in phenomenological investigations of individuals' grapplings with complex phenomena?
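As an illustration of how the informational side of this difficulty can be captured in practice (the strings below are mine, and compressed length is only a practical stand-in for the Chaitin measure, which is itself uncomputable), consider:

    import zlib, random

    def informational_depth(description: str) -> int:
        """Upper bound, in bytes, on the information needed to store the description."""
        return len(zlib.compress(description.encode()))

    random.seed(7)
    regular = "buy low sell high " * 50                                 # highly patterned routine
    irregular = "".join(random.choice("abcdefgh") for _ in range(900))  # pattern-poor record of the same length

    print(informational_depth(regular), informational_depth(irregular)) # the regular record compresses far better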

Earlier in the paper it was shown that the vague concepts that individuals use to describe complex phenomena, such as 'unfathomable', 'simple', 'intractable', 'impossible' and 'complicated', can be given precise meanings using one, another, or combinations of both of the informational and computational dimensions that I have defined for the measurement of complexity. These distinctions make it possible for us to separate out the difficulties that observers of complex phenomena have, and enable the articulation of a research programme that facilitates the interaction between the mind and the complex. They also make it possible for us to quantitatively investigate three separate phenomena that have traditionally been interesting to researchers in the behaviorist tradition:

a. The informational and computational boundaries of competent behavior, and in particular the I-K-space configurations that limit adaptation to complexity. Here, the use of simulations of cognitive function and the ability to model the algorithmic requirements (informational depth, computational load) of any model or schema make it possible to break down any complex predicament along an informational and a computational dimension and to hypothesize informational, computational and joint informational/computational limits on intelligent agent adaptation and adaptation potential;

b. The trade-offs between informational and computational difficulty that intelligent adaptive agents should make when faced with complex phenomena, which can be studied by simulating the choices that intelligent adaptive agents make among different available models and schemata for understanding complex phenomena, which in turn enables the study of:

c. The trade-offs between informational and computational difficulty that intelligent adaptive agents actually do make when faced with complex phenomena, which, in the time-honored tradition of behaviorist methodology applied to decision analysis, could be studied by measuring departures of empirically observed behavior of agents faced with choosing among alternative models and schemata from the normative models discovered through numerical and thought experiments.

References

Anderson, P. (1999). Introduction to special issue on organizational complexity, Organization Science, 10: 1-16.

Bar-Yam, Y. (2000). Dynamics of complex systems, NECSI mimeo.

Bertsekas, D. (1985). Data networks, Cambridge, MA: MIT Press.

Bylander, T., Allemang, D., Tanner, M. C. and Josephson, J. (1991). The computational complexity of abduction, Artificial Intelligence, 49: 125-151.

Chaitin, G. (1974). Information-theoretic computational complexity, IEEE Transactions on Information Theory