




Strategic Management Journal, Vol. 16, 519-533 (1995)

THE PROBLEM OF UNOBSERVABLES IN STRATEGIC MANAGEMENT RESEARCH

PAUL C. GODFREY
Marriott School of Management, Brigham Young University, Provo, Utah, U.S.A.

CHARLES W. L. HILL
Graduate School of Business Administration, University of Washington, Seattle, Washington, U.S.A.

In this paper we argue that unobservable constructs lie at the core of a number of influential theories used in the strategic management literature—including agency theory, transaction cost theory, and the resource-based view of the firm. The debate over how best to deal with the problem of unobservables has raged in the philosophy of science literature for the best part of the current century. On the one hand, there are the positivists, who believe that theories containing unobservable constructs are only useful as tools for making predictions. According to positivists, such theories do not inform us about the deep structure of reality. On the other hand, there are the realists, who believe that our theories can give us knowledge about unobservables. Herein we review this debate, we argue for adopting a realist position, and we draw out the implications for strategic management research.

INTRODUCTION

From the earliest days of the Renaissance, empirical validation provided the selection mechanism for scientific theories. The saga of scientific development is rife with accounts of theories which postulated a then unobservable entity as the root explanation of some phenomenon, followed only later by the development of tools to measure the new entity. The progress of science depended on the increasing ability of scientists to observe, and thus verify, key components of theories and their interrelations. The invention of the telescope by Galileo proved significant because it provided a window for the observation of previously unobservable, yet hypothesized, entities. The advent of quantum physics provided the most current, and perhaps most compelling, account in the saga of scientific development. For quantum physicists the building blocks of matter were unobservable on two fronts. Subatomic particles (e.g., electrons, neutrinos, quarks) are measurement unobservable in the sense that instrumentation cannot be calibrated to such a degree as to permit their direct observation. Moreover, quantum mechanics holds that the act of observing a subatomic particle effects a change in the state of that particle which is by no means trivial (Putnam, 1990). This constitutes state unobservability because the observation of the entity causes a change of state in the entity.¹

For more than half a century now the debate over how best to deal with the problem of unobservables has raged in the philosophy of science literature (Boyd, 1991a). According to one school of thought—the logical positivists—

Key words: philosophy of science; research methodology

¹ It is noteworthy that social scientists are also aware of the problems of state unobservability. The assumption of ethnographic research methodologies is that the direct observation of social processes by 'outside' observers causes a substantive change in the processes of interest. What is observed is what is constructed for and by the observer, not the process of interest.

CCC 0143-2095/95/080519-15
© 1995 by John Wiley & Sons, Ltd.

Received 28 July 1993. Final revision received 10 November 1994.



we can never be sure of the existence of unobservables. Consequently, theories that contain unobservables should not be judged on the basis of their correspondence to reality, but instead on their instrumental value as tools for generating predictions about the behavior of physical, natural, and social systems. A second school of thought—the realists—takes a different position. According to realists, the scientific enterprise can give us knowledge about the existence of unobservable entities. Realists argue that when a theory that contains unobservable entities is well corroborated by scientific evidence, then we may have good reason for believing that those unobservable entities have a correspondence in reality. Thus, the realist believes that we can make statements about the truth value of theories that contain unobservables—the more skeptical logical positivist does not.

We argue that the debate between logical positivists and realists has important implications for the discipline of strategic management. Many of the theories that are used to address the central questions of strategic management research contain key constructs that are state unobservable. The very observation of these constructs would eliminate the explanatory and predictive power of the theories. These include transaction cost theory, agency theory, and the resource-based view of the firm. If one adopts the instrumentalist position of modern logical positivists, the presence of unobservables in these theories suggests that while they may predict observable phenomena accurately enough, the derivation of normative rules for managerial action from such theories constitutes an unscientific endeavor. In contrast, the realist position is that since our theories can give us knowledge about unobservables, it is legitimate to derive normative rules from those theories that can be used to guide managerial action.

At the heart of the debate between logical positivists and realists, therefore, are divergent views about the limits of human knowledge and the kinds of conclusions that we can draw from theories that successfully predict observable phenomena. The importance of the debate between logical positivism and realism for strategic management lies in the veracity of normative predictions derived from theories such as the resource-based view of the firm. Since many view the raison d'être of strategic management research as being the generation of normative heuristics, it is of critical importance to understand this debate and the implications that it holds both for strategy research and for the value of the claims made for the results of that research. Our mission in writing this paper, however, goes beyond merely explaining these different views, for we do take an advocacy position. We believe, as Popper once noted, that 'while realism is neither demonstrable nor refutable . . . it is arguable, and the weight of the argument is overwhelmingly in its favor . . . Common sense is clearly on the side of realism' (1972: 38).

UNOBSERVABLES IN STRATEGIC MANAGEMENT RESEARCH

Transaction cost theory, agency theory, and the resource-based view of the firm provide excellent examples of the central role of state-unobservable constructs in the strategic management literature. Our selection of these theories is not meant to imply that they are the only, or even the most important, theories that have a bearing on strategic management research—arguably, other theories that have been used in strategic management also contain unobservables, including traditional industrial organization economics and its derivatives such as strategic group theory (entry and mobility barriers may be unobservable).

Transaction cost theory

Transaction cost economics (TCE) has been used to explore a variety of issues of interest to strategic management researchers, including diversification, vertical integration, and quasi-integration (e.g., Teece, 1982; Williamson, 1985). TCE states that transactions (exchanges) should be viewed as the basic unit of economic analysis. TCE asserts that markets and hierarchies can be viewed as alternative mechanisms for governing transactions. The central proposition of TCE is that there are costs to executing any transaction, whether that transaction occurs in a market or within a hierarchy. Maximizing efficiency requires that transactions are governed by the mechanism that minimizes those transaction costs (Coase, 1937). For some transactions TCE predicts that hierarchy is the appropriate governance mechanism, for others market governance is appropriate, and for still other transactions a governance mechanism that straddles the market-hierarchy divide is predicted to be the most appropriate (Williamson, 1991).

Two of the main determinants of the transaction costs associated with a market-mediated exchange are argued to be opportunism and asset specificity (Williamson, 1985). Opportunism refers to the proclivity that economic actors have to engage in self-interest seeking with guile. While not all economic actors are opportunistic, it is impossible to know for sure ex ante which trading partners will be opportunistic (Williamson, 1985). Asset specificity refers to the extent to which transactions are supported by productivity-enhancing (and rent-producing) specialized assets that are uniquely tailored to that transaction. TCE predicts that when one party makes substantial investments in specialized assets in order to trade with another, the other may attempt to opportunistically appropriate the rent stream generated by the specialized asset, thereby defrauding the owner of the specialized asset of his/her expected return. In order to protect the rent stream from appropriation, the owner of the specialized asset may invest in safeguards such as contingent claims contracts and monitoring mechanisms. The costs of establishing safeguards are known as ex ante transaction costs. They serve to reduce the risk of opportunism, but not to eliminate it, since it is impossible to draft truly comprehensive contingent claims contracts that eliminate the risk of opportunism (Williamson, 1985).

A residual risk of opportunism remains even after ex ante transaction costs have been borne. In the event that this risk is realized and opportunism does occur, additional costs are borne. These include the costs of contract enforcement and the unrecovered loss of rent stream due to opportunism. The ex post transaction costs associated with a transaction are defined as these additional costs, multiplied by the probability of opportunism occurring, which is itself a function of the safeguards put in place (i.e., of the ex ante transaction costs). Thus, total transaction costs associated with a market-mediated exchange are seen as the sum of ex ante and ex post transaction costs. The core prediction of TCE is that if total transaction costs exceed the costs of governing the same transaction in a hierarchical setting, the exchange should be internalized within a hierarchy.
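The cost calculus described above can be written out compactly. The notation below is ours, introduced purely for illustration (Williamson does not use these symbols): let C_ea denote ex ante transaction costs (spending on safeguards), C_ep the additional costs borne if opportunism occurs, and p(C_ea) the probability of opportunism, which falls as more is invested in safeguards.

```latex
\[
  TC_{\mathrm{market}} \;=\; C_{ea} \;+\; p(C_{ea})\,C_{ep},
  \qquad p'(C_{ea}) < 0,
\]
\[
  \text{internalize the exchange} \iff TC_{\mathrm{market}} > TC_{\mathrm{hierarchy}} .
\]
```

On this reading the choice of safeguards is a trade-off: each additional unit of C_ea lowers the expected ex post loss p(C_ea)·C_ep, and governance form then follows from the comparison of total costs.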

The key variable which drives the transaction cost engine is opportunism. According to Williamson (1985), one cannot know how an economic actor will behave in a given situation until he/she is placed in that situation. While other transaction cost theorists argue that factors such as an actor's reputation can send a reasonably strong signal as to his/her likely behavior, they also acknowledge that reputation is at best an imperfect guide to future behavior (Hill, 1990). Thus, opportunism is a purely ex post phenomenon—it cannot be observed until it has occurred, and yet it is the perceived risk of opportunism that facilitates the calculation of ex ante transaction costs. Moreover, under situations where information pertaining to a transaction is highly ambiguous, opportunism may be unobservable ex post. One may not know when one has been ripped off. Opportunism clearly suffers from measurement unobservability.

Opportunism is also characterized by state unobservability. The decision to locate a transaction in a market or a hierarchy is driven by the costs associated with the risk of opportunism in conjunction with asset specificity. If opportunism were observable ex ante then encompassing contingent claims contracts could be written to govern the transaction without the exorbitant costs of hierarchical governance. If opportunism were observable ex ante, there would exist little economic justification for the presence of hierarchy. Ex ante observation of opportunism would change the state of the world and vitiate the explanatory power of TCE.

Agency theory

Agency theory has been invoked in the strategic management literature to explain the structure of corporate governance mechanisms and the efficacy of the takeover mechanism. An agency relationship is defined as one in which one or more persons (the principal(s)) engages another person (the agent) to perform some service on their behalf which involves delegating some decision-making authority to the agent (Jensen and Meckling, 1976). The cornerstone of agency theory is the assumption that the utility functions of principals and agents diverge. This divergence of interests gives rise to an efficiency loss to the principal. According to agency theory, this efficiency loss can be reduced if the principal establishes appropriate incentive systems, monitoring mechanisms, and enforcement mechanisms, and if the agent bears bonding costs. The sum of any incentive, monitoring, enforcement, and bonding costs, along with any remaining efficiency loss, is referred to as agency costs. Agency theory asserts that economic selection processes favor governance structures that economize on agency costs (Fama, 1980). By governance structures, agency theorists mean the mechanisms that police the implicit or explicit contracts between principals and agents. These include incentive-based performance contracts, monitoring mechanisms such as the board of directors, and enforcement mechanisms such as the managerial labor market and the market for corporate control.
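The components of agency costs enumerated above can be collected into a single sum. The symbols are ours, not Jensen and Meckling's: let I, M, and E denote the incentive, monitoring, and enforcement costs borne by the principal, B the bonding costs borne by the agent, and R the residual efficiency loss that remains.

```latex
\[
  AC \;=\; I + M + E + B + R .
\]
```

The selection argument is then that competition among governance structures favors those that minimize AC; spending more on monitoring, for instance, is economical only insofar as the resulting reduction in R exceeds the increase in M.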

Even this thumbnail sketch reveals that a number of unobservable constructs are to be found at the heart of agency theory. The utility functions of principals and agents are unobservable because the elements in each individual's utility function are subjectively determined (Mirowski, 1989). Researchers may observe choices made by agents (or principals), such as how they allocate their time between work-related effort and 'on-the-job consumption'; however, this is substantially different from the actual observation of utility. Moreover, the arguments made by Alchian and Demsetz (1972) with regard to team production suggest that the choices made by agents may themselves often be unobservable—which further complicates the process of observing utility. The construct of utility is a shining example of measurement unobservability; utility defies measurement as a quid pro quo of its construction (Mirowski, 1989).

Just as importantly, the divergence of interests between principals and agents constitutes a state unobservable. The entire contribution of agency theory hangs on the ex ante state unobservability of this divergence of interests. The accurate ex ante observation of divergent interests between principals and agents would eliminate the need for principals to incur agency costs, and the governance structures said to minimize agency costs would become superfluous. Stated simply, the value added by agency theory depends on the ex ante state unobservability of the degree of divergent interests between agents and principals.

Moreover, the fact that utility and the resulting divergence of interests between principals and agents cannot be directly observed implies that agency costs, which are a function of this divergence of interests, are also inherently unobservable.

The resource-based view of the firm

The resource-based view (RBV) of the firm is the most recent of the three theories reviewed here to break upon the strategic management scene (Barney, 1991; Conner, 1991; Mahoney and Pandian, 1992; Wernerfelt, 1984). The RBV seeks to explain the pattern of performance differences between firms over time. Central to the RBV is a conception of the firm as a collection of heterogeneous resources, or factors of production. Resources include physical resources, such as plant and equipment; human resources, such as managerial and technical staff; and organizational routines, which are the 'software programs' that organizations use to coordinate their human and physical resources and put them to productive use (Penrose, 1959; Nelson and Winter, 1982). The RBV argues that heterogeneous resource endowments are the source of competitive advantage (or disadvantage). The magnitude of competitive advantage generated by a resource depends upon the extent to which it either reduces the cost structure of the firm, or helps differentiate the firm's product offering. It also depends upon the uniqueness of the resource in relation to those possessed by competitors.

The sustainability of competitive advantages relies upon three factors: the rate of resource obsolescence due to environmental change; the availability of substitutes for the resource; and the inimitability of the resource. The inimitability of a resource is argued to depend upon the height of barriers to imitation, which in turn is a function of the extent to which the target resource is observable. Resources are argued to be unobservable if they are tacit, diffused throughout the organization, or socially embedded (Reed and DeFillippi, 1990). Organizational routines, in particular, are argued to have these characteristics (Nelson and Winter, 1982; Reed and DeFillippi, 1990). The core proposition of the RBV with regard to sustainability proceeds on the logic that, all else being held constant, the more unobservable a valuable resource, the higher are the barriers to imitation, and the more sustainable will be a competitive advantage based upon that resource.

The power of the theory to explain performance persistence over time is based upon the assumption that certain resources are by their nature unobservable, and hence give rise to high barriers to imitation. These resources are state unobservable because the observation of the resource, in whatever degree, immediately erodes the height of the barrier to imitation. The observation of this resource effects a change in its own nature as well as the nature of the firm within which it is embedded. In short, if there are no unobservable resources, the RBV loses much of its explanatory power.

LOGICAL POSITIVISM AND REALISM

Almost all of the work in the philosophy of science during the present century has either been produced within the tradition of logical positivism, or has been written as a response to it (Boyd, 1991a). Realism is no exception to this; indeed, modern realism emerged in the early twentieth century as the dominant response to the philosophical problems that quantum mechanics created for logical positivism. Since then the debate between logical positivism and realism has been waged continuously in the philosophical literature (Boyd, Gasper, and Trout, 1991). Despite this lengthy discourse, the debate has not yet been resolved—nor will it be. As with all philosophical debates, ultimate resolution is impossible and one's position is arrived at by weighing the arguments.

Logical positivism

Logical positivism traces its genealogy back to the eighteenth-century Enlightenment thinkers Locke, Berkeley, and Hume (Russell, 1946). As a school, the logical positivists reached their pinnacle in the middle of the twentieth century in the so-called 'Vienna Circle' (Brown, 1970). Positivists have traditionally espoused a verificationist theory of meaning. More recently, they have moved towards what is referred to as an instrumental position. Positivists also espouse a formalistic theory of truth.

Verificationist theories of meaning

The central doctrine of logical positivism is the verificationist theory of meaning (Boyd, 1991b; Brown, 1970; Hacking, 1983). This is the thesis that a proposition (theoretical statement) is meaningful if, and only if, its elements can be empirically verified. The emphasis on verification can be traced back to Hume, in whose system 'all perceptions of the mind resolve themselves into two distinct kinds, which I shall call IDEAS and IMPRESSIONS' (Hume, 1739: 1). Impressions are based upon sensory data, and include emotions and passions. Thus, to see an apple is an impression in the mind. Ideas are the 'faint images' of impressions which are brought into the memory. When an individual remembers that they saw an apple yesterday, they are creating an idea in their mind of an apple. The crux of Humean empiricism, therefore, is the thesis that there can be no ideas formed independently of impressions. Given this, it follows that only those objects in the world which can be empirically verified have meaning. Put another way, central to logical positivism is the thesis that all genuine knowledge is based on sensory observation—whether that be direct observation, or observation aided by instruments (Hacking, 1983).

The verificationist theory of meaning has two key implications for the development of scientific theories. First, only those statements whose elements have empirically verifiable meanings can qualify as propositions (all scientific theories are propositions). Second, as a direct consequence of the above, there are no elements in a proposition, or theory, which are purely theoretical. Purely theoretical elements cannot be verified and thus have no meaning. There is no value added to knowledge by the inclusion of a purely theoretical element—one that cannot be verified by empirical observation—in determining the truth value of a proposition. Only terms that can be empirically observed, and which, therefore, have meaning, are necessary and useful in establishing the truth of a proposition.

Logical positivists apply the verificationist theory of meaning to the problem of demarcation—that is, to the problem of distinguishing between science and nonscience. According to this approach, theories that contain purely theoretical elements whose meaning cannot be verified through sensory observation are not scientific theories. Thus, logical positivism has often been associated with attacks on metaphysics (e.g., religion), and attempts to show that, in contrast to true science, such inquiries cannot result in real knowledge (O'Hear, 1989). Logical positivists are unwilling to take any 'leaps of faith' regarding nonempirically based knowledge. A strict application of the verificationist theory of meaning to agency, transaction cost, and resource-based theory would doom these theories to the realm of metaphysics, along with the likes of quantum mechanics, precisely because they contain unobservable elements whose existence it is impossible to verify. However, as we shall see next, the positivists have shifted their position to admit unobservables into scientific theories, while maintaining a deep skepticism about the ultimate truth value of such theories.

The instrumental position and the problem of unobservables

Logical positivism has been roundly attacked for its inability to handle theories heavily reliant on unobservable constructs, such as quantum physics (Putnam, 1990). It has not helped the positivist cause that some of these theories, and particularly quantum mechanics, have been spectacularly successful in making predictions that are subsequently confirmed by empirical observation. In response, positivists have moved away from a strict interpretation of the verificationist theory of meaning, and towards an instrumental position that admits to the value of incorporating unobservables in scientific theories, without formally accepting the realist position that such theories can give us knowledge about those unobservables.

The instrumental position asserts that the ultimate truth or falsity of a scientific theory is irrelevant; it is the ability of a theory to explain empirical reality that is the proof of its value (Nagel, 1979). To the instrumentalist, theories are merely tools, much like hammers and saws, used by scientists to construct predictions of observable phenomena. It would be absurd to debate the truth or falsity of a hammer, and the instrumental argument holds that the concern with the truth of scientific theories is similarly misplaced. The strength of the instrumentalist response is to fundamentally shift the criteria on which science should be judged, away from the search for truth and toward the search for adequate explanation.

Milton Friedman's influential essay, 'The Methodology of Positive Economics,' justifies the use of unobservable economic constructs by adopting the instrumentalist position. He argues:

the relevant question to ask about the 'assumptions [constructs]' of a theory is not whether they are descriptively 'realistic [observable],' for they never are, but whether they are sufficiently good approximations for the purpose in hand. And this question can be answered only by seeing whether the theory works, which means whether it yields sufficiently accurate predictions (Friedman, 1953: 15).

Thus, for example, whether or not people actually calculate the marginal utility of a quantity of any good they care to purchase, or whether utility exists at all, is of little relevance in the instrumentalist approach. What matters is that when researchers proceed as if people did engage in such calculations, the predictions of the marginalist model conform to empirically observed reality. To give another example, a similar logic sustains the validity of transaction cost economics, regardless of the observability (or existence) of transaction costs. The assumption that markets select economic entities that economize upon transaction costs apparently yields accurate predictions about the governance structures observed in the real world (see Williamson, 1985, Chapter 5). Thus, an instrumentalist can accept the value of transaction cost theory as an instrument for predicting governance form, without having to commit to the belief that transaction costs actually exist in the real world. For the instrumentalist, transaction costs are simply a useful theoretical construct, but one that is unobservable and therefore hypothetical.

In defense of this position an instrumentalist will claim that for any observable phenomenon, P, that has been accurately predicted by a theory T1, which contains an unobservable construct, it will always be possible to construct alternative theories, T2 to Tn, which yield the same predictions as T1, but which offer contradictory accounts and evoke different (or additional) unobservable entities. It follows that scientific evidence can never decide between competing theories that contain unobservable phenomena, but which yield equivalent and empirically confirmed predictions. We may choose the simplest model for pragmatic reasons (i.e., by utilizing Ockham's razor), but pragmatic standards of choice have nothing to do with truth or knowledge.

Formal theories of truth

Since a positivist position maintains that the truth value of theories containing unobservables cannot be assessed on the basis of their correspondence to reality, positivists assess the truth value of such theories on the basis of their logical form. The most recent defender of this approach has been Hempel (1962), who has championed the deductive-nomological (DN) model of explanation. This model can be set out schematically as follows:

L1, L2, . . ., Ln (general laws)
C1, C2, . . ., Cn (background conditions)
-----------------------------------------
E (phenomenon to be explained)

According to this approach, a proposition has truth value if the phenomenon it is seeking to explain, E, can be deductively derived from a set of general laws, L1, L2, . . ., Ln, and background conditions, C1, C2, . . ., Cn. The DN model suggests that the truth value of scientific theories depends upon the syllogistic form of the theory and not upon the substance of general laws, background conditions, and phenomena to be explained. That is, general laws and background conditions do not have to have any basis in reality for a theory to be true. The truth value of a theoretical statement is not assessed on the basis of its correspondence to reality, but upon the basis of its mathematical form.
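A textbook-style instance of the DN schema (our example, not the authors') may help make the form concrete. Take the general law 'all metals expand when heated' and the background condition 'this rod r is a heated metal':

```latex
\[
  \frac{L_1:\ \forall x\,\bigl(\mathrm{Metal}(x) \wedge \mathrm{Heated}(x) \rightarrow \mathrm{Expands}(x)\bigr)
        \qquad
        C_1:\ \mathrm{Metal}(r) \wedge \mathrm{Heated}(r)}
       {E:\ \mathrm{Expands}(r)}
\]
```

Here E follows deductively from L1 and C1; on the DN view it is this syllogistic form alone, not any correspondence of L1 to reality, that confers truth value on the explanation.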

The DN approach has come to dominate the practice of the social sciences. Current social science research is laden with a predilection for warranted assertability based on the syllogistic form of theories and models (for details see Miller, 1991). Economics stakes its claim as a true science because its core propositions can all be reduced to mathematical equations whose truth depends upon form, not content. The increasing use of the logic tools of economic analysis, and particularly game theory, by strategic management researchers provides an illustration of the reliance on formal models within the context of the DN approach to warrant the truth claims of propositions (e.g., Camerer, 1991; Saloner, 1991).

Realism

Realism gained momentum in the middle of the twentieth century as the epistemological problems created by the inclusion of unobservable entities in quantum mechanics were debated in philosophical circles (Aronson, 1984; Putnam, 1990). Since positivists have a thorough-going hostility to unobservable or purely theoretical entities, quantum mechanics presented a head-on challenge to the positivist philosophy of science. This is the challenge that realism addresses. Here we examine the realist position with regard to the problem of meaning, which goes beyond verification and instrumentalism, the problem of truth, which emphasizes correspondence, and the problem of confirmation.

Beyond verification and instrumentalism

The hallmark of realism is a belief that theories of science give us knowledge about the unobservable, and that under certain circumstances we may have good reason for believing statements about unobservable entities to be true. Thus, realists are willing to take 'leaps of faith' regarding unobservables. This is the antithesis of positivism, with its insistence that no knowledge of unobservable phenomena is possible and that theories based upon unobservable entities are not scientific but metaphysical (for the traditional positivist), or, at best, tools for explaining phenomena (for the instrumentalist). According to realists, when a well-confirmed scientific theory appears to describe unobservable theoretical entities, it is almost always appropriate to think of these terms as really referring to unobservable features of the world, which exist independently of our theorizing about them, and of which the theory is probably approximately true (Boyd, 1991a). So, for example, according to realists the quantum theory of physics suggests that atoms are made up of very small entities with various properties. The evidence that we have that such entities exist independent of our theorizing about them is not based upon observation of the entities themselves, since they are unobservable, but upon observation of their effects.

Page 8: Godfrey and Hill

526 P. C. Godfrey and C. W. L. Hill

Realists argue that the instrumentalist position is untenable in practice, since the instruments that allow scientists to confirm the predictions of theories are themselves frequently constructed on the basis of theories that contain unobservable elements (for example, this is the case with electron microscopes). A true instrumentalist, therefore, would have to reject the observations provided by such instrumentation as metaphysical, a position that if widely accepted would result in the whole scientific enterprise quickly grinding to a halt (Putnam, 1990).

In contrast, realists adopt what Popper (1972) has referred to as the common sense approach to knowledge. This simply states that if a scientist makes a prediction on the basis of some theory that contains unobservable elements, and if this theory survives repeated attempts to falsify it, then we are justified in acting as if the theory were true. This holds even though we can never know for sure that the unobservable entities in the theory exist. Popper buttresses this argument by quoting Winston Churchill, and the quote is worth reproducing at length:

Some of my cousins who had the great advantage of University education used to tease me with arguments to prove that nothing has any existence except what we think of it. I always rested on the following argument . . . [Here] is this great sun standing apparently on no better foundation than our physical senses. But happily there is a method, apart altogether from our physical senses, of testing the reality of the sun. Astronomers predict by [mathematics and] pure reason that a black spot will pass across the sun on a certain day. You look, and your sense of sight immediately tells you that their calculations are vindicated . . . We have taken what is called in military map-making a 'cross bearing'. We have got independent testimony to the reality of the sun. When my metaphysical friends tell me that the data upon which the astronomers made their calculations . . . were necessarily obtained originally through the evidence of their senses, I say 'No'. They might, in theory at any rate, be obtained by automatic calculating machines set in motion by the light falling upon them without admixture of human senses at any stage . . . I . . . reaffirm with emphasis . . . that the sun is real, and also that it is hot—in fact as hot as Hell, and that if the metaphysicians doubt it they should go there and see.^

^ From Winston S. Churchill (1944). My Early Life: A Roving Commission. Macmillan, London, p. 131.

Popper follows this quote by noting that while Churchill does not prove realism, for a philosophical argument can never be decisive, his arguments do constitute an excellent refutation of the specious arguments of the positivist, a conclusion with which we concur.

Correspondence theories of truth

Realists adhere to a correspondence theory of truth, according to which propositions are true if, and only if, they correspond to actual conditions in the real world (Boyd, 1991a; Horwich, 1990; Tarski, 1935). This view stands in contrast to the formal theory of truth, in which the truth value of a proposition is assessed by its syllogistic form, as opposed to its substance. The realist conception is that the truth value of a proposition can only be assessed by its substance; it is only true if its substance corresponds to that of the real world.

It is important to realize that the correspondence theory does not constitute a rejection of the DN model as a useful method for deducing predictions. However, it does constitute a rejection of the instrumentalist argument that the only legitimate claim for the truth value of a theoretical statement is that based upon the syllogistic form of the statement. The correspondence theory of truth maintains that a statement derived from the DN model may be correct, as judged by its syllogistic form, but it may be false in the sense that it does not correspond to reality. An example of this is Postrel's (1991) use of game theory to demonstrate that given certain background conditions it is rational for bank managers to set their pants on fire in public (the so-called Flaming Trousers Conjecture). Postrel's point is that game theory is no more than a logical tool, and as such it can be misused to produce absurd propositions that nevertheless have the correct syllogistic form. Only if the proposition corresponds to reality does it have truth value.

Confirmation

As for theory confirmation, here too realists part company with the skepticism of positivists. While the realist would not disagree with the positivist claims that we cannot ever conclusively prove a theory containing unobservables to be true, the


realist argues that we can have good reasons for believing that a theory is 'approximately true'.^ Therefore, we may be justified in acting as if it were true. The realist approach here is commonly known as the 'inference to the best explanation' (Aronson, 1984). The thought underlying the inference to the best explanation is that if a theory consistently explains some data better than any other theory explains them, we have a good reason to act as if it were true. Moreover, realists argue that our belief in a theory can be stronger when it explains a diverse set of phenomena. It would be an absurd coincidence indeed if a wide variety of different kinds of phenomena were all explained by a particular theory, and yet that theory were not true. Thus, the argument from coincidence supports a good many of the inferences that we make to best explanation (Cartwright, 1991). Realists also point out, with some justification, that the inference to the best explanation is the only common sense position to take. After all, our design of bridges, airplanes, atomic power stations, computers, and space vehicles is guided by theories that we believe to be approximately true, even if we cannot ever conclusively prove them to be so.

IMPLICATIONS FOR STRATEGIC MANAGEMENT RESEARCH

We have shown that unobservable constructs are to be found at the core of a number of theories that underpin a good deal of strategic management research, including agency theory, transaction cost theory, and the resource-based view of the firm. We have also reviewed the debate between logical positivists and realists, and argued, convincingly we hope, that realism is more defensible than positivism. Building on this, in the current section we consider a number of implications of a realist philosophy for the strategic management field.

^ Lakatos (1968) argues that this is the original position which Popper (1959) intended to take. Popper's arguments have been used by positivists in support of the verificationist claims of their position. Popper's own statements regarding realism, quoted earlier, provide evidence that his position on these matters is more subtle than commonly assumed.

The legitimacy of normative science

Positivists and their instrumentalist cousins are by definition skeptics. They see theories containing unobservables as mere tools. This necessarily limits the ability of the positivist to derive normative rules from theories that can be used to guide managerial action. Recall the positivist argument that for any theory T1 containing unobservables that explains a phenomenon, P, there are a number of feasible alternative theories, T2 to Tn, that might be constructed to explain P. The positivist will always be suspicious, therefore, of normative rules derived from T1, since the theory may be false. They will treat T1 as a mere tool for generating predictions, while arguing that the theory itself may not have any correspondence in reality. In contrast, a realist will state that if T1 is supported by the inference to the best explanation, we should not shy from deriving normative rules from that theory, even when they involve unobservable constructs. As strategic management researchers and teachers who hope to improve managerial action, the realist position seems to us to be the best way forward and, indeed, the only way that we can justify what we do. At the same time, we caution that adopting the realist position does not give researchers the right to preach normative rules derived from their favorite theory. As always, a theory must survive rigorous empirical testing before we can speak of it as being well corroborated.

Criteria for choosing between theories

A realist philosophy states that we cannot reject theories just because they contain key constructs that are unobservable. It is not enough to state that the unobservability of utility dooms agency theory, that transaction cost theory is untestable because transaction costs cannot be measured, or that the RBV is invalid because key resources may be unobservable. To reject a theory one must be able to show that the predictions of observable phenomena that are derived from that theory do not hold up under empirical testing. By the same token, however, a realist philosophy also indicates that we cannot accept a theory just because it is based upon a logical or mathematical model that has the correct syllogistic form. A theory may have the correct


syllogistic form, but if it has no correspondence in reality, the theory must be viewed as false. Put another way, theory building following the rules of the DN model is a necessary condition, but not a sufficient condition, for the acceptance of a theory.

Consideration of both the realist and instrumentalist positions implies that the only sure method of choosing between rival theories is the correspondence of the predictions of the theory with the real world. Wherever possible, researchers should design critical experiments, where the predictions of theory A are diametrically opposed to the predictions of theory B (Aronson, 1984). Observation of the real world thus allows us to infer with confidence that either theory A or theory B is correct, but not both.

The method of critical experiment is the path through which quantum mechanics and the theory of relativity have become accepted. The design of critical experiments is a difficult task in strategic management, where researchers are faced with a multitude of variables and contingencies, each of which potentially determines the outcome. However, we are confident that the ingenuity and creativity of strategic management scholars will meet the challenge of theory testing by critical experiments. We discuss some theory-testing strategies below.

Theory testing strategies

When testing theories that contain unobservable elements, scholars need to carefully think through their methodological approach. Recognizing that unobservable constructs cannot by definition be measured, they must develop testing strategies that take this into account. Here we review theory-testing strategies for each of the three theories reviewed earlier: transaction cost theory, agency theory, and the resource-based view of the firm.

Transaction cost theory

The predictions of transaction cost theory state that the optimal governance structure, G, is a function of the level of transaction costs arising in market-mediated exchange, TC, which in turn is a function of the degree of asset specificity, k, and the ex ante probability of opportunism, p. That is:

G = f1(TC)
TC = f2(k, p)

Since researchers have recognized the problem of measuring p directly (k is measurable), the testing strategy has relied upon a reduced model that essentially has the following form:

G = f3(k)

In this formulation p is assumed to be constant across settings and the level of transaction costs is inferred from k (e.g., see Monteverde and Teece, 1982; Walker and Weber, 1984). From a realist or positivist perspective, there is nothing wrong with such a theory-testing strategy given that key constructs (in this case p) are unobservable, and that the model G = f3(k) still yields testable and falsifiable predictions about G that are consistent with transaction cost theory. Furthermore, this basic model can be extended to give us more knowledge about unobservables such as p. Granovetter (1985) and Hill (1990), for example, have suggested that the ex ante probability of opportunism, p, is itself a function of the extent to which economic transactions are embedded within a social network, n. That is:

p = f4(n)

Given that networks can be mapped out (Powell, 1990), this modification to transaction cost theory enables us to test the following model:

G = f5(k, n)

where the network variable, n, moderates the relationship between G and k. Thus, this formulation can yield more knowledge about an unobservable, p. Specifically, it tells us whether p is attenuated when transactions take place within a network, as suggested by Granovetter (1985) and Hill (1990).
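The moderation claim above can be examined with a standard interaction-term regression. The sketch below is our own illustration on simulated data; the coefficients, sample size, and the coding of n are hypothetical and not drawn from the cited studies.

```python
import numpy as np

# Illustrative sketch (not from the paper): does network embeddedness n
# moderate the effect of asset specificity k on the degree of
# hierarchical governance G? All numbers below are hypothetical.
rng = np.random.default_rng(0)
N = 500
k = rng.uniform(0.0, 1.0, N)                 # asset specificity (observable)
n = rng.integers(0, 2, N).astype(float)      # 1 = exchange embedded in a network
# Simulated 'true' process: embeddedness attenuates the effect of k.
G = 0.2 + 1.0 * k - 0.6 * (k * n) + rng.normal(0.0, 0.1, N)

# Ordinary least squares on G = b0 + b1*k + b2*n + b3*(k*n)
X = np.column_stack([np.ones(N), k, n, k * n])
beta, *_ = np.linalg.lstsq(X, G, rcond=None)

# A negative interaction coefficient b3 is what the embeddedness
# argument predicts: within networks, k matters less for governance.
print(beta)
```

On the realist reading, the sign of the interaction term is indirect evidence about the unobservable p, even though p itself never enters the estimated model.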

While the appropriate theory-testing strategy can give us knowledge about unobservables, it is important to point out that unless advocates of a theory that contains unobservables can develop such a testing strategy, the theory itself is doomed to ultimate failure. The viability of transaction cost theory is enhanced because there are ways of 'getting at' transaction costs. The viability of transaction cost economics hangs on


the ability of the theory to make refutable predictions about G based on k and n.

Agency theory

Agency theory assumes that the (unobservable) utility functions of principals and agents diverge, theorizes that this divergence gives rise to inefficiencies which impact upon agency costs (another unobservable), and proposes that the adoption of appropriate governance structures for policing the relationship between principals and agents can economize on agency costs. Thus, agency costs, C, can be modeled as a function of the governance structure policing the relationship between principals and agents, Gpa. That is:

C = f(Gpa)

Starting from this core proposition, agency theorists have developed detailed arguments about the value of different governance structures as mechanisms for aligning interests and reducing agency costs. Thus, on the basis of deductive logic, they might argue that some governance structure G'pa is superior to Gpa because under G'pa, C' < C. The event study methodology represents a popular way of testing hypotheses derived from such arguments. The idea is to measure the change in agency costs that accompanies a certain type of change in governance structure. The core problem remains: agency costs are unobservable. However, in those cases where stockholders are the principals and corporate managers the agents, this problem has been solved by finding a workable proxy for agency costs; namely, the abnormal returns to stockholders that accompany a change in governance structure. The shift in governance structure may be a leveraged buyout, a change in board structure (e.g., separating the positions of chairman and CEO), the adoption of some kind of contractual mechanism for aligning interests (e.g., golden parachute contracts), a hostile takeover, and so on (Jensen, 1988). Kaufman and Englander (1993) show that the abnormal returns offered by the new governance structure present in Kohlberg Kravis Roberts (KKR)-owned companies served as a powerful magnet for new investment capital, primarily from large institutional investors.

The crucial point is that agency theorists have

found an observable proxy, abnormal returns, that allows them to operationalize an unobservable construct: agency costs. They do not directly measure the change in agency costs that accompanies a change in governance structure. Instead, they utilize a proxy that, from a realist perspective, can be argued to reflect the reduction in agency costs. A realist would maintain that such a testing strategy gives us information about agency costs and, by extension, the underlying divergence of interest between principals and agents that gives rise to those costs in the first place.
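The event study logic can be sketched in a few lines: estimate a market model of expected returns over a pre-event window, then treat the firm's return net of its expected return in the event window as the abnormal return. The simulation below is our own illustration with hypothetical numbers, not the procedure of any cited study.

```python
import numpy as np

# Illustrative market-model event study (hypothetical data): abnormal
# returns around a governance change proxy for the change in the
# unobservable agency costs.
rng = np.random.default_rng(1)
T_est, T_event = 200, 11                 # estimation and event windows
market = rng.normal(0.0005, 0.01, T_est + T_event)
firm = 0.0002 + 1.1 * market + rng.normal(0.0, 0.005, T_est + T_event)
firm[T_est + 5] += 0.06                  # simulated positive reaction on the event day

# 1. Estimate the market model r_firm = alpha + beta * r_market
#    on the pre-event window only.
X = np.column_stack([np.ones(T_est), market[:T_est]])
alpha, beta = np.linalg.lstsq(X, firm[:T_est], rcond=None)[0]

# 2. Abnormal return = actual minus expected return in the event window;
#    CAR cumulates it over the window.
expected = alpha + beta * market[T_est:]
ar = firm[T_est:] - expected
car = ar.sum()
print(car)
```

A positive cumulative abnormal return around, say, a board restructuring is then read, on the realist view, as evidence that the new governance structure reduced agency costs.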

Alternatively, a researcher can use agency theory to argue that differences in governance structure across firms will be reflected in differences in firm strategy and performance over time. For example, in firms with relatively weak governance structures, managerial hegemony can be theorized to result in the pursuit of empire-building diversification strategies that depress firm performance but increase the size of the firm (the argument being that such strategies best satisfy a stylized managerial utility function that contains income, status, and power as central elements; see Aoki, 1984 for a justification). In contrast, when governance mechanisms are strong we might expect to see less diversification and higher performance. This gives rise to a model of the following form:

π = f(S, G)

where π is the performance of the firm, S is firm strategy, and G is the governance structure. Since all of the elements in this model are observable, testing is relatively straightforward (for examples, see Bethel and Liebeskind, 1993; Hill and Snell, 1988). Again, a realist would argue that the results from such an empirical strategy give us information about an unobservable construct, in this case the utility functions of principals (stockholders) and managers (agents), since differences in utility will be reflected in strategic choices and subsequent firm performance.

The resource-based view

In contrast to transaction cost and agency theory, advocates of the resource-based view have yet to solve the empirical problem posed by the


inclusion of unobservables in the theory. For example, a key proposition in the RBV is that the persistence of profit rates, π, is a function of the barriers to imitating rare and valuable resources. In turn, barriers to imitation are argued to be a function of the degree of observability (or unobservability) of those resources, φ. Thus:

π = f(φ)

The problem with this formulation is that it is currently untestable. It is by construction impossible to assess the degree of unobservability of an unobservable, since by definition inimitable resources are unobservable (Barney, 1991). One way out of this trap is for scholars to focus on observable variables that determine the degree of unobservability of a rare and valuable resource. Work by Reed and DeFillippi (1990) is an example of a move in this direction. If a set of variables, X1, X2, . . ., Xn, can be identified that determines φ, such that φ = f(X1, X2, . . ., Xn), then one can test the following model:

π = f(X1, X2, . . ., Xn)

and doing so will give us knowledge about the role of φ in determining π. To achieve this goal, however, scholars must first identify the observable conditions X1, X2, . . ., Xn which can be used to proxy the height of barriers to imitation. While this is a daunting task, the work of Dierickx and Cool (1989) represents a positive first step in this endeavor. Having done this, scholars must generate a set of refutable predictions which link π and X1, X2, . . ., Xn. The development of observables will facilitate the testing of the RBV; clear predictions about π will enable the strategic management community to adjudge the ultimate explanatory power of the RBV.

More generally, it should be noted that ultimately the RBV will stand or fall not on the basis of whether its key constructs can be verified, but upon whether its predictions correspond to reality observed for populations of firms. What scholars need to do is theoretically identify what the observable consequences of unobservable resources are likely to be, and then go out and see whether such predictions have a correspondence in the empirical world. The analogy here is with

quantum mechanics, which has been confirmed not by observing subatomic entities (since they are unobservable) but by observing the trail left by subatomic entities in the cloud chambers of linear accelerators. Good models for the kind of large-sample econometric work that needs to be done can be found in empirical studies looking at the persistence of abnormal returns over time (Cubbin and Geroski, 1987; Jacobsen, 1988) and at the relative contribution of industry and firm-specific (resource-based) factors in explaining performance variations across firms (Hansen and Wernerfelt, 1989; Rumelt, 1991). Cubbin and Geroski and Jacobsen control for the influence of unobservable variables, while Wernerfelt and Rumelt utilize viable proxies for firm-specific resources in their empirical tests.
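As a sketch of what such persistence work involves (our own illustration on simulated data, not the Cubbin and Geroski or Jacobsen designs themselves), one can regress a firm's abnormal profit on its own lag: a persistence coefficient near 1 is consistent with barriers to imitation protecting a rare resource, while a coefficient near 0 is consistent with rapid competitive erosion.

```python
import numpy as np

# Illustrative persistence-of-abnormal-returns sketch: fit
# pi_t = a + b * pi_{t-1} + e and compare b across firms.
rng = np.random.default_rng(2)

def persistence(pi):
    """OLS slope of abnormal profit on its own one-period lag."""
    x, y = pi[:-1], pi[1:]
    X = np.column_stack([np.ones(len(x)), x])
    return np.linalg.lstsq(X, y, rcond=None)[0][1]

def simulate(b, T=300):
    """AR(1) abnormal-profit series with true persistence b (hypothetical)."""
    pi = np.zeros(T)
    for t in range(1, T):
        pi[t] = b * pi[t - 1] + rng.normal(0.0, 1.0)
    return pi

b_protected = persistence(simulate(0.9))   # strong imitation barriers
b_exposed = persistence(simulate(0.1))     # weak imitation barriers
print(b_protected, b_exposed)
```

Differences in estimated persistence across firms, once industry effects are controlled for, are then indirect evidence about the unobservable barriers to imitation.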

A final point is that the description of the firm found in the RBV is complex, deep, and historical. Since each firm is viewed as a unique entity, explaining the cause of superior (or inferior) performance at the level of the individual firm calls for clinical work of the type that is currently unfashionable in the management literature. We are not advocating a return here to the type of unstructured and atheoretical case study work that characterized the early strategic management literature. Rather, we are arguing that there is value to be had, in terms of explanation, in viewing the firm as a natural laboratory in which the theoretical propositions of the RBV are already being tested. The challenge facing researchers is to take a collection of firms that face a similar environment (e.g., firms in the same industry), to establish how these firms differ with regard to their resources, and to link these differences to barriers to imitation and the persistence of performance differences across time. Such work should follow the methodological ground rules for comparative clinical work laid out by Eisenhardt (1989) and Leonard-Barton (1990); for a recent example of such work see Collis (1991). If repeated clinical studies across a wide variety of contexts yield empirical results that are consistent with the RBV, then following a realist philosophy of science we may legitimately claim that the theory corresponds to reality. The key here, however, is the need for repeated clinical studies. For only with enough repetition across contexts will advocates of the RBV be able to counter the coincidence argument.


Observing unobservables

Realists should not use the fact that a construct is unobservable given current instrumentation as an excuse for not trying to develop new and better instrumentation that can observe the formerly unobservable. The history of science contains many instances of formerly unobservable entities becoming observable due to the development of better instrumentation (e.g., the scanning electron microscope opened up a whole new world of formerly unobservable phenomena to the 'view' of scientists). When all is said and done, even the most diehard realist would still prefer that we be able to observe key theoretical constructs. There is no preference for theories that contain unobservables, just a preference for theories that are testable and well corroborated by empirical investigation. Accordingly, realists should try to develop new instruments, or pursue new research methodologies, that may enable them to observe the formerly unobservable. For example, qualitative methodologies such as multiple case studies, event histories, and ethnographic inquiries may represent the best way forward in observing the effects of otherwise unobservable, idiosyncratic effects on business strategy and performance, such as those predicted by the resource-based view of the firm (Collis, 1991; Eisenhardt, 1989; Leonard-Barton, 1990).

Beware of scientific errors

Friedrich August von Hayek began his Nobel memorial lecture with a stinging attack on the practice of empirical economics during the late 1960s and early 1970s. To quote:

it seems to me that this failure of the economists to guide policy more successfully is closely connected with their propensity to imitate as closely as possible the procedures of the brilliantly successful physical sciences—an attempt which in our field may lead to outright error . . . in the physical sciences the investigator will be able to measure what, on the basis of a prima facie theory, he thinks important; in the social sciences often that is treated as important which happens to be accessible to measurement . . . [unobservable effects] are simply disregarded by those sworn to admit only what they regard as scientific evidence: they thereupon happily proceed on the fiction that the factors which they can measure are the only ones that are relevant (Hayek, 1989: 3).

The scientistic error equates measurability of a construct with its relevance in explanation. According to Hayek, the scientistic error lies at the core of the failure of economics and economists to make sound and constructive policy recommendations. We worry that it may also lie at the core of the (relative) failure of strategic management scholars to make sound and constructive strategic recommendations.

A realist position breaks the connection between measurability and relevance. Differing utility functions, transaction costs, opportunism, and unobservable resources are all relevant in explanation, but they are not directly measurable (i.e., observable). Unfortunately, our reading of the empirical literature suggests that the scientistic error is frequently committed in the strategic management discipline. There is an obsession with the measurement of observable variables (witness the enormous attention devoted to the development of measures of diversification) that is only matched by a corresponding failure to develop empirical strategies for testing theories that are based upon unobservable constructs (such as the resource-based view). We suspect that the preoccupation with observable variables may be driven by issues such as data availability. Developing strategies to test for the impact of unobservables is a far more difficult endeavor, but we believe that ultimately it is likely to prove far more relevant and rewarding. Moreover, within the framework of a realist philosophy of science, it is also the appropriate endeavor.

CONCLUSION

We have argued that unobservable constructs lie at the core of a number of influential theories used in the strategic management literature, we have pointed out that realists and positivists differ in the degree to which they believe that our theories can give us knowledge about such unobservable constructs, and we have drawn out the implications of a realist position for strategic management research. We hope that the value of this paper lies in its ability to inform others in the field about the nature of the debate between realists and positivists, and about the implications that this debate holds for strategic management research. Most importantly, however, we have tried to build a case for taking a


realist position, for we believe that doing so offers the only way forward for a field such as strategic management whose ultimate raison d'etre rests upon its ability to inform managerial action.

REFERENCES

Alchain, A, A., and H, Demsetz (1972) 'Production,information costs, and economic organization',American Economic Review., 61, pp, 777-795,

Aoki, M, (1984), The Cooperative Game Theory ofthe Firm. Oxford University Press, Oxford.

Aronson, J, L, (1984), A Realist Philosophy of Science.Macmillan, London,

Barney, J, B, (1991). 'Firm resources and sustainedcompetitive advantage'. Journal of Management,n, pp, 99-120,

Bethel, J. E, and J. Liebeskind (1993), 'The effectsof ownership structure on corporate restructuring'.Strategic Management Journal,, Summer SpecialIssue, 14, pp, 15-31.

Boyd, R, (1991a). 'Confirmation, semantics, and theinterpretation of scientific theories'. In R. Boyd,P, Gasper and J, D, Trout (eds,). The Philosophyof Science. MIT Press, Cambridge, MA, pp. 3-36.

Boyd, R, (1991b), 'Observations, explanatory power,and simplicity: Towards a non-Humean account'.In R, Boyd, P, Gasper and J, D. Trout (eds,).The Philosophy of Science. MIT Press, Cambridge,MA, pp, 349-378,

Boyd, R,, P, Gasper and J, D. Trout (eds.) (1991).The Philosophy of Science. MIT Press, Cambridge,MA,

Brown, H, I. (1970), Perception, Theory and Commit-ment. Precedent Publishing, Chicago, IL,

Camerer, C, F, (1991), 'Does strategy research needgame theory?'. Strategic Management Journal,Winter Special Issue, 12, pp, 137-152,

Cartwright, N. (1991), 'The reality of causes in aworld of instrumental laws'. In R, Boyd, P. Gasperand J. D. Trout, (eds,). The Philosophy of Science.MIT Press, Cambridge, MA, pp. 379-386.

Coase, R, H, (1937), 'The nature of the firm',Economica N.S., 4, pp. 386-405.

Collis, D, J. (1991), 'A resource based analysis ofglobal competition'. Strategic Management Journal,Summer Special Issue, 12, pp, 49-68.

Conner, K. R. (1991). 'A historical comparison ofresource based theory and five schools of thoughtwithin industrial organization economics: Do wehave a new theory of the firm?'. Journal ofManagement, 17, pp, 121-154,

Cubbin, J, and P. Geroski (1987). 'The convergenceof profits in the long run: Inter-firm and inter-industry comparisons'. Journal of Industrial Eco-nomics, 35, pp. 427-442.

Dierickx, I, and K. Cool (1989). 'Asset stockaccumulation and susteunability of competitiveadvantage'. Management Science, 35, pp. 1504-1511.

Eisenhardt, K. M. (1989). 'Building theories fromcase study research'. Academy of ManagementReview, 14, pp. 532-550,

Fama, E. F, (1980), 'Agency problems and the theoryof the firm'. Journal of Political Economy, 88,pp. 288-307.

Friedman, M. (1953). Essays in Positive Economics.University of Chicago Press, Chicago, IL.

Granovetter, M. (1985). 'Economic action and socialstructure: The problem of embeddedness', /Imen'canJournal of Sociology, 91, pp, 481-510,

Hacking, I, (1983), Representing and Intervening.Cambridge University Press, Cambridge.

Hansen, G. S, and B. Wernerfelt (1989), 'Determinantsof firm performance: The relative importance ofeconomic and organizational factors'. StrategicManagement Journal, 10(5), pp, 399-411,

Hayek, F, A. (1989), 'The pretense of knowledge',American Economic Review, 79(6), pp. 3-7.

Hempel, C. G. (1962), 'Explanation in science andhistory'. In R, Colodny (ed,). Frontiers of Scienceand Philosophy. University of Pittsburgh Press,Pittsburgh, PA, pp, 36-54.

Hill, C. W. L. (1990). 'Cooperation, opportunism, and the invisible hand', Academy of Management Review, 15, pp. 500-513.

Hill, C. W. L. and S. A. Snell (1988). 'External control, corporate strategy, and firm performance in research intensive industries', Strategic Management Journal, 9(6), pp. 577-590.

Horwich, P. (1990). Truth. Basil Blackwell, Oxford.

Hume, D. (1739). Treatise of Human Nature. Prometheus Books, Buffalo, NY.

Jacobsen, R. (1988). 'The persistence of abnormal returns', Strategic Management Journal, 9(5), pp. 415-430.

Jensen, M. C. (1988). 'The takeover controversy: Analysis and evidence'. In J. C. Coffee, L. Lowenstein and S. Rose-Ackerman (eds.), Knights, Raiders and Targets. Oxford University Press, New York, pp. 314-354.

Kaufman, A. and E. J. Englander (1993). 'Kohlberg Kravis Roberts & Co. and the restructuring of American capitalism', Business History Review, 67, pp. 52-97.

Lakatos, I. (1968). 'Criticism and the methodology of scientific research programmes', Proceedings of the Aristotelian Society, 119, pp. 149-186.

Leonard-Barton, D. (1990). 'A dual methodology for case studies: Synergistic use of a longitudinal single site with replicated multiple sites', Organization Science, 1, pp. 248-265.

Mahoney, J. T. and J. R. Pandian (1992). 'The resource based view within the conversation of strategic management', Strategic Management Journal, 13(5), pp. 363-380.

Miller, R. W. (1991). 'Fact and method in the social sciences'. In R. Boyd, P. Gasper and J. D. Trout (eds.), The Philosophy of Science. MIT Press, Cambridge, MA, pp. 743-762.

Mirowski, P. (1989). More Heat than Light. Cambridge University Press, Cambridge.


Monteverde, K. and D. Teece (1982). 'Supplier switching costs and vertical integration in the automobile industry', Bell Journal of Economics, 13, pp. 206-213.

Nagel, E. (1979). The Structure of Science. Hackett Publishing, Indianapolis, IN.

Nelson, R. R. and S. G. Winter (1982). An Evolutionary Theory of Economic Change. Belknap Press, Cambridge, MA.

O'Hear, A. (1989). An Introduction to the Philosophy of Science. Oxford University Press, Oxford.

Penrose, E. T. (1959). The Theory of the Growth of the Firm. Wiley, New York.

Popper, K. R. (1959). The Logic of Scientific Discovery. Hutchinson, London.

Popper, K. R. (1972). Objective Knowledge: An Evolutionary Approach. Clarendon, Oxford.

Postrel, S. (1991). 'Burning your britches behind you: Can policy scholars bank on game theory?', Strategic Management Journal, Winter Special Issue, 12, pp. 153-155.

Powell, W. W. (1990). 'Neither market nor hierarchy: Network forms of organization'. In B. M. Staw and L. L. Cummings (eds.), Research in Organizational Behavior. JAI Press, Greenwich, CT, pp. 295-336.

Putnam, H. (1990). Realism with a Human Face. Harvard University Press, Cambridge, MA.

Reed, R. and R. J. DeFillippi (1990). 'Causal ambiguity, barriers to imitation, and sustainable competitive advantage', Academy of Management Review, 15, pp. 88-102.

Rumelt, R. P. (1991). 'How much does industry matter?', Strategic Management Journal, 12(3), pp. 167-186.

Russell, B. (1946). A History of Western Philosophy. Simon & Schuster, New York.

Saloner, G. (1991). 'Modeling, game theory, and strategic management', Strategic Management Journal, Winter Special Issue, 12, pp. 119-136.

Tarski, A. (1935). 'Der Wahrheitsbegriff in den formalisierten Sprachen' [The concept of truth in formalized languages], Studia Philosophica, 1, pp. 261-405.

Teece, D. J. (1982). 'Towards an economic theory of the multiproduct firm', Journal of Economic Behavior and Organization, 3, pp. 39-63.

Walker, G. and D. Weber (1984). 'A transaction cost approach to make or buy decisions', Administrative Science Quarterly, 29, pp. 373-391.

Wernerfelt, B. (1984). 'A resource-based view of the firm', Strategic Management Journal, 5(2), pp. 171-180.

Williamson, O. E. (1985). The Economic Institutions of Capitalism. Free Press, New York.

Williamson, O. E. (1991). 'Comparative economic organization: The analysis of discrete structural alternatives', Administrative Science Quarterly, 36, pp. 269-297.
