
The Qualitative Foundations of Political Science Methodology

George Thomas

David Collier and Henry Brady, eds., Rethinking Social Inquiry: Diverse Tools, Shared Standards (Lanham: Rowman and Littlefield, 2004).

Colin Elman and Miriam Fendius Elman, eds., Bridges and Boundaries: Historians, Political Scientists, and the Study of International Relations (Cambridge: MIT Press, 2001).

Barbara Geddes, Paradigms and Sand Castles: Theory Building and Research Design in Comparative Politics (Ann Arbor: University of Michigan Press, 2003).

John Gerring, Social Science Methodology: A Criterial Framework (New York: Cambridge University Press, 2001).

Charles Ragin, Fuzzy-Set Social Science (Chicago: University of Chicago Press, 2000).

The last decade has seen a contentious dialogue between quantitative and qualitative scholars over the nature of political science methodology. Even so, there has often been a consensus that quantitative and qualitative research share a "unified logic of inference"; that the differences between these "traditions are only stylistic and are methodologically and substantively unimportant."1 All of the books under review here share these convictions. Yet the most remarkable feature of these works taken as a whole—and the focus of this review essay—is the more capacious view of the scientific enterprise on display. John Gerring's Social Science Methodology, David Collier and Henry Brady's Rethinking Social Inquiry, and Charles Ragin's Fuzzy-Set Social Science all focus on aspects of the scientific process beyond the testing of hypotheses—science being "a systematic, rigorous, evidence-based, generalizing, nonsubjective, and cumulative" way of discovering the truth about the world (Gerring, p. xv). If science is the systematic gathering of knowledge, testing hypotheses—the central concern of statistical inference—is an important part of this. But it is only one part. Before we can turn to testing hypotheses, we must be clear about concepts, theories, and cases. And here both Barbara Geddes's Paradigms and Sand Castles and the Elmans' Bridges and Boundaries complement the other works by attending closely to these issues even when the larger goal remains the testing of theory.

Each of these five books argues that social science methodology (1) requires systematic and continuous concept formation and refinement, (2) employs empirical evidence not only to confirm but also to develop and explore theories, and (3) must come to terms with causal complexity. Taken together, they provide support for a provocative possibility: if social science has a unified logic, it is found in approaches traditionally associated with qualitative methods rather than statistical inference.

Getting at the Arguments

Whether scholars are engaged in quantitative or qualitative research, or following rational choice or modeling approaches, we should share standards of evaluation and a common language. But there are important disagreements over what these shared standards are, as well as important differences in language and terminology. Geddes, while writing for comparativists of a qualitative bent, adheres to the foundational logic of statistical inference, as do a few authors in Bridges and Boundaries.2 Gerring, Collier and Brady, and Ragin reject the notion that this quantitative template can serve as the foundation for political science as a whole, as do most authors in Bridges and Boundaries. Naturally, this division leads to important differences over what constitutes valid research and how it should be carried out.

George Thomas is assistant professor of political science at Williams College ([email protected]). The author thanks the anonymous reviewers for Perspectives on Politics, Mark Blyth, Brian Taylor, and, especially, Craig Thomas for helpful comments.


Research design and the science of political science

Geddes’s Paradigms and Sand Castles: Theory Building andResearch Design in Comparative Politics is a high-level intro-duction to research design that offers a very lucid anddetailed explication of what might be dubbed the conven-tional wisdom (for example, King, Keohane, and Verba,1994). That is, the underlying framework is drawn fromstatistical inference. Geddes’s detailed and easy-to-readchapters focus on how the questions, cases, evidence, andapproach one chooses in carrying out one’s research affectthe answers one gets. She suggests that big questions mightbe broken down into smaller questions that can morereadily be tested. And testing theory against empirical evi-dence is primarily what Geddes has in mind when shespeaks of theory building. In doing so, we should positclear and falsifiable hypotheses deduced from our theory.We should then test these hypotheses against the universeof cases to which they apply. Most importantly, we musttest our hypotheses against cases that were not part of theinductive process from which the argument was initiallyproposed. Ideally, this should include as many observa-tions against each hypothesis as possible. In fact, Geddesargues that we unfortunately cling to the details of ourknowledge and the cases we know at the expense of build-ing theory. Compartmentalization based on substantiveknowledge (and particularly based on geographic regions)has no theoretical justification. “A carefully constructedexplanatory argument built from fundamentals usually hasmultiple implications, at least some of which are testable.The research effort is not complete until empirical testshave shown that implications drawn from the argumentare consistent with reality” (p. 38).

Here Geddes thinks that rational choice approaches are particularly powerful because they readily conform to the underlying logic of statistical inference.3 Such approaches are easily generalized and subject to clear tests (that is, easy to falsify).4 Thus, the fact that rational choice approaches abstract from "the specifics of particular cases" and deduce hypotheses from a precise model makes them a powerful tool in theory building (p. 206).

Coming from a different angle, the Elmans' Bridges and Boundaries also confirms much of the conventional wisdom about what makes political scientists scientific. The essays in this collection agree that the biggest difference between historians and political scientists is political science's self-conscious use of scientific methodology.5 The first section explicitly takes up these issues of philosophical inquiry. The second section turns to case studies by historians and political scientists, illuminating different approaches to the same cases. Jack Levy's chapter on "Explaining Events and Developing Theories" suggests that the difference between historians and political scientists turns on the difference between the "logic of discovery" and the "logic of confirmation." The former is important in constructing theory, but political science is preoccupied with empirically validating theory (pp. 79–81). Following this, the Elmans suggest that political scientists are nomothetic, that is, interested in discovering generalizable theories, whereas historians tend to be idiographic, that is, concerned with particular and discrete events (p. 13). Political scientists are not interested in a case, but in cases that enable them to test theories, discover the links between variables, and rule out competing theories of explanation.

But even with such larger goals in mind, most authors in Bridges and Boundaries emphasize the importance of beginning with good questions, which themselves begin with a particular case or puzzle (Lebow, p. 113). In this way, the qualitative study of cases is central to developing theory, which is not the result of a single test, but rather part of a larger research program that seeks to advance our theoretical understanding (Elman and Elman, pp. 12–20). To this end, Andrew Bennett and Alexander George in "Case Studies and Process Tracing in History and Political Science" and the historian John Lewis Gaddis in "In Defense of Particular Generalization" argue that contingent generalizations drawn from a rich understanding of cases, rather than broad theoretical generalizations, may be more consistent with the scientific aspirations of the discipline. Thus these qualitatively oriented political scientists and historians refuse to relinquish the mantle of science to scholarship that tests hypotheses based upon statistical inference alone (pp. 138, 320).

Tradeoffs and research design

Situating the testing of theory as only one aspect of the scientific enterprise, Gerring, Collier and Brady, and Ragin in their three books insist upon the tradeoffs inherent in any research design. In carrying out research we must balance what we are doing, recognizing that not all methodological goals are equal. Most importantly, before testing a hypothesis we want to be clear about concepts and theories. After all, concepts and theories are not prefabricated things to be randomly picked up by the political scientist eager to test this or that hypothesis, but the very building blocks of social science—they are the basis of good causal inference. Thus just where we cut into the process—whether we start with propositions, cases, or concepts—depends on the particular research in which we are engaged. "There is no obvious point of entry for a work of social science, no easy way to navigate between what we know (or think we know) and what we would like to know 'out there' in the empirical world. One might begin with a hunch, a question, a clearly formed theory, or an area of interest" (Gerring, p. 22).

John Gerring’s Social Science Methodology: A CriterialFramework begins from this perspective. Gerring offers a

Review Essay | The Qualitative Foundations of Political Science Methodology

856 Perspectives on Politics

Page 3: The Qualitative Foundations of Political Science Methodologypeople.bu.edu/jgerring/documents/Thomas_MethodologyReview.pdf · The Qualitative Foundations of Political Science ... that

“criterial framework” as a unifying methodological foun-dation for social science. This framework breaks socialscience into three tasks: concept formation, propositionformation, and research design. Concepts address whatwe are talking about, propositions are formulations aboutthe world, and research design is how we will demonstratea given proposition. These tasks are interdependent: alter-ations in one will necessarily result in alterations in anoth-er.6 Finding the proper balance among these tasks dependsupon the research at hand.

Each task involves different criteria. If our task is concept formation, the criteria of a good concept include the concept's coherence, operationalization, contextual range, parsimony, analytical utility, and so forth (p. 40). Gerring gives us a set of criteria for proposition formation and research design as well. For example, the criteria for a good proposition include its accuracy, precision, breadth, depth, and so on (p. 91). In forming concepts, positing propositions, or framing research design, we must weigh competing criteria. Making our proposition broader will necessarily mean that it is less accurate and less deep. This requires us to engage in tradeoffs among criteria, as well as between these different tasks. Just where we cut into the hermeneutical circle will depend on our research task. If we are working with well-formed concepts and theories, we might proceed to proposition formation and to testing our propositions with a particular research design. But for Gerring, "the hard work of social science begins when a scholar prioritizes tasks and criteria. It is this prioritization that defines an approach to a given subject. Consequently, the process of putting together concepts, propositions, and research design involves circularities and tradeoffs (among tasks and criteria); it is not a linear follow-the-rules procedure" (p. 23).7

David Collier and Henry Brady’s Rethinking SocialInquiry: Diverse Tools, Shared Standards shares Gerring’sinsistence on the tradeoffs inherent in social science inquiry.This is particularly true, Collier and Brady argue, in devel-oping theory. Indeed, the preoccupation in mainstreamquantitative methods with deducing a hypothesis and test-ing it against new cases may well inhibit theory buildingand development; there is an important tradeoff betweentheoretical innovation and the rigorous testing of a theory(p. 224). Making these necessary tradeoffs is the first stepof any research design. This move situates research designas part of an interactive process between theory and evi-dence, rather than treating testing as a single-shot game.Thus, for example, Collier and Brady are skeptical of theinsistence upon a determinate research design (pp. 236–38); given the many problems of causal inference, no obser-vational research design is likely to be truly determinate(p. 237). It is not simply that research questions will alwaysbe open to further investigation. More importantly, theinsistence upon determinate research design is likely tohave negative consequences on the quality of theory—

scholars are likely to redesign theory in accord with itstestability rather than looking for ways to get at “theorythat scholars actually care about” (p. 238). Testing theoryin accord with a determinate research design calls for alarger N in accord with the imperatives of statistical test-ing. This move, however, creates problems of causal infer-ence and neglects fruitful exchanges between theory andevidence. Given this tradeoff, Collier and Brady call forresearch designs that yield interpretable findings that “canplausibly be defended” (p. 238). This can be done alongmany lines, including increasing the N in some circum-stances. But it may also be the result of “a particularlyrevealing comparative design, a rich knowledge of casesand context,” or “an insightful theoretical model” (p. 238).Recognizing these tradeoffs and connecting them to inter-pretable research design recognizes “multiple sources ofinferential leverage” and may lead to better causal inference.

Charles Ragin pushes this line of thought in Fuzzy-Set Social Science. The first section critiques mainstream quantitative approaches in social science, which are contrasted with case-oriented research. As he did in The Comparative Method, Ragin takes aim in particular at the assumptions in statistical inference of unit homogeneity and additive causation. These, Ragin suggests, are dubious assumptions that often mask crucial differences between cases and therefore pose problems for causal inference. While he does not explicitly invoke the language of tradeoffs, Ragin emphasizes the difference between research designs aimed at testing theory and those aimed at building, developing, and refining theory. Ragin sees the latter as involving an iterative process between theory and case construction that we cannot get from testing theory. In the second half of the book, Ragin explains fuzzy-set methods in more detail. They represent his latest innovation in constructing categories to get at empirical evidence (data sets) that is too often taken for granted. Fuzzy sets begin with the cases themselves, illustrating how data are constructed by the researcher. This approach sets up a dialogue between ideas and evidence, where concepts are refined and theory developed based on interaction with the data. Ragin argues that such dialogue leads to more reliable causal inference because it does not rest on the dubious assumptions of mainstream variable-driven quantitative methods. In fact, Ragin's book is perhaps most valuable as an extended discussion of the logic of inference.

These books suggest that the distinction between confirmatory research and exploratory research is not as stark as it appears; that both are part of the overall scientific method (Gerring, p. 23). They also call on scholars to recognize the inherent tradeoffs in any social science research. As Gerring argues, one should justify one's approach based on the kind of question one is asking. This is more than a commonsense piece of advice, given that much of political science is preoccupied by the confirmatory approach, which neglects the crucial stages of concept formation and refinement, as well as the development and exploration of theory. Such exploratory research is an essential part of the scientific process, as it provides the basis for building concepts and theories.

Sharp disagreements remain even among these scholars who seek a unified language and a unified set of standards within political science. But there is important common ground: all of these books pay far more careful attention to concept formation, theory development, and causal complexity than have past works. This is true even when the larger goal remains testing theory from a statistical perspective (as in Geddes). In general, qualitative research attends to these concerns far more carefully than quantitative research, which often has treated them as prescientific. However, it is not simply that qualitative scholars have been more attuned to such concerns. Concept formation, theory building, and research design involve, at root, qualitative judgments. The scientific enterprise may best be seen as a framework that necessarily entails all of these tasks. How we situate our research within this framework depends upon qualitative judgments about where we are in the framework and what we are trying to do. These choices should be justified, but they cannot be justified in quantitative terms. As I will elaborate below, quantitative tools are being developed to further such efforts; yet, in an interesting twist, these tools are complementing, even imitating, the logic of qualitative analysis (Collier and Brady, pp. 12–13). In the end, good theory—not simply method—is what makes for good science.

Concept Formation and Refinement: The Foundation of Social Science

In Social Science Methodology, Gerring suggests that "concept formation concerns the most basic questions of social science: What are we talking about?" (p. 35). We cannot posit hypotheses, let alone have a consensus on whether hypotheses and theories are valid, if we do not know what we are talking about. Too often, new definitions are proffered with little regard for already existing ones or for how such concepts fit into the world more generally. The result is that scholars speak past one another, research does not cumulate, or ideas simply remain sloppy. Given this, Gerring notes, there is a temptation just to get on with it. This will not do: a "blithely empirical approach to social science" only adds to the trouble (p. 38).

Before we can elaborate our theoretical arguments, we need to conceptualize what we are talking about. This begins from our understanding of the world, and it is an essentially qualitative project. Indeed, the first step in concept formation is picking words to describe what we are talking about. Even if we go with accepted definitions, we are relying on past qualitative judgments. Assigning cases to our conceptual definition also requires qualitative judgments. As we examine cases, compare and contrast them, we refine their boundaries and gain important conceptual distinctions. This will give us both more homogeneous cases for proper comparison and a sharper conceptual understanding of what we are talking about (Ragin 2004, pp. 125–28). This qualitative work is foundational to quantitative or statistical tests of theory. It requires that we specify concrete criteria by which to measure concepts, so that our measurements will allow for the assignment of cases to particular categories based upon the operationalized concept (Ragin 2004, pp. 144–45). This is all central to carrying out rigorous testing of theory that other researchers can replicate (Geddes, pp. 146–47; Collier and Brady, p. 209). Even when scholars take off-the-shelf indicators to measure a concept, or apply a well-defined concept to new cases, such judgments are unavoidably qualitative. In fact, relying on off-the-shelf indicators and extending concepts to a greater number of cases so they may be properly tested raises concerns about conceptual stretching and measurement validity. Picking up concepts and applying them to new empirical situations may lead to invalid causal inference (Collier and Brady, pp. 202–3). Collier and Brady note that statistical tools are being developed to help scholars get at these issues from a quantitative perspective. But when to use such tools, given the tradeoffs inherent in any research, itself depends on a qualitative judgment (pp. 204–9).

Beyond causal inference, Collier and Brady, along with several of their coauthors (see especially the chapters by Gerardo Munck and Ragin), view the refinement of concepts as an end in itself; this effort is not simply a precursor to testing theory (although it is also that). Classifying our knowledge, creating typologies and categories, is as important a contribution to science as causal inference (p. 203). As Brady argues in a chapter in Rethinking Social Inquiry, the preoccupation with causation in testing theory rejects the notion that classification, categorization, and typologies have any explanatory power (pp. 53–67). This is unwarranted. Take Max Weber's division of authority into three essential types: traditional, legal-rational, and charismatic. It orders the world for us, beginning with what is out there, which forms the basis of our theorizing about it. Weber's approach is inductive and descriptive, but, as Gerring argues, surely scientific (Gerring, pp. 118–24).8

This is because we cannot speak of questions of fact "without getting caught up in the language to describe such questions" (p. 38). We cannot, for example, say that democracy requires X without being clear about democracy itself. And for Gerring, we cannot avoid abstract concepts like democracy and ideology. A social science that avoids such high-order concepts would lack the power to generalize and thus be reduced to "a series of disconnected facts and microtheories" (Gerring, p. 38). These higher-order concepts not only let us negotiate general propositions; they provide the context in which we situate lower-order concepts that are more readily observable in empirical terms. It would be odd, would it not, to study voters and political participation if we could not situate these lower-order concepts within democracy? In this sense, "concept formation in the social sciences may be understood as an attempt to mediate between the world of language . . . and the world of things" (Gerring, p. 37). We cannot speak of the stuff of the world without concepts, but our understanding of this "stuff out there" will help us refine and alter our concepts.

While political scientists speak of the need for agreed-upon scientific standards—the need for generalizability and a common language—we often fail in this goal by treating concept formation in a haphazard manner. Here Gerring's development of a min-max definition for concepts is useful in providing a foundation for a common language, giving us the ability to generalize across a wide range of issues. Gerring breaks concept formation into three steps: "sampling usages, typologizing attributes, and the construction of minimal and ideal definitions" (p. 71). The idea is to construct a conceptual definition that fits within the field of existing usage, avoiding highly idiosyncratic views, but also organizing the conceptual definition in such a way that allows for general application. To do this, Gerring urges constructing a min-max definition. It begins with the minimal features of a concept—its core—giving us a capacious definition by "casting the semantic net widely" (p. 78). From here, we construct an ideal-type definition that captures the concept in its purest form. This is, of course, a purely logical construct, far less likely to be found in the empirical world. These two poles—the minimum and maximum of a concept—allow us to broaden and narrow the term. The minimal definition (with a smaller intension) captures more "out there" (the extension), whereas a maximum definition (with a much larger intension) narrows the cases to which it will apply (the extension). The result is a definition that "outlines the parameters of a concept," giving us a "frame within which all contextual definitions should fall." This allows us to move from a general definition to a more contextual definition while maintaining a common ground. It also enables our conceptual definition to travel through time and space.
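Gerring's intension-extension tradeoff can be made concrete in a few lines of code. The sketch below is purely illustrative, not Gerring's own procedure: the attributes and cases are hypothetical, with a definition treated as a set of attributes and its extension as the set of cases satisfying all of them.

```python
# A toy sketch of the min-max logic: as a definition's intension (its
# attributes) grows from a minimal core toward an ideal type, its
# extension (the cases it covers) shrinks. Attributes and cases are
# hypothetical illustrations, not drawn from Gerring's book.

MINIMAL = {"contested_elections"}            # minimal definition: the core
IDEAL = MINIMAL | {"universal_suffrage",     # ideal type: the concept in
                   "civil_liberties",        # its purest, fullest form
                   "alternation_in_power"}

cases = {
    "A": {"contested_elections"},
    "B": {"contested_elections", "universal_suffrage", "civil_liberties"},
    "C": {"contested_elections", "universal_suffrage",
          "civil_liberties", "alternation_in_power"},
}

def extension(definition, case_set):
    """Return the cases whose attributes include every defining attribute."""
    return {name for name, attrs in case_set.items() if definition <= attrs}

print(sorted(extension(MINIMAL, cases)))  # ['A', 'B', 'C']: small intension, broad extension
print(sorted(extension(IDEAL, cases)))    # ['C']: large intension, narrow extension
```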

Theory Building: Development and Exploration

The relationship between theory and evidence is often seen as the key distinction between science and nonscience in the study of politics. And while these authors offer a variety of advice on selecting cases to bolster theory, they share an important point of agreement. Increasing the number of cases or observations is not nearly as important as carefully selecting cases, which often depends upon substantive knowledge of the cases. Even if the intent is to test a hypothesis against evidence, a small number of cases that maximize the researcher's leverage may be far superior to increasing the number of cases, which risks creating problems of conceptual stretching and measurement validity.

It is almost a mantra among social scientists that a true test of one's theory must be separate from any evidence used in generating the theory. Geddes's Paradigms and Sand Castles extols this conventional wisdom to great effect, urging that scholarship be carried out with more "sensitivity to the norms of science" (p. 144). To more readily share data, test arguments, and replicate tests, case-study evidence needs to be more carefully operationalized and measured. This does not just mean specifying the domain of cases to which an argument applies. We must carefully specify the criteria of measurement and the classification and categorization of cases. We must then stick to these publicly articulated criteria across cases. This, Geddes argues, can be arduous, but it is well rewarded: "struggling over the conceptual issues raised by trying to measure the same causal factors in the same way across cases often deepens the analyst's understanding of the argument as well as the cases" (p. 147). Thus, even while endeavoring to test cases, Geddes illustrates the important interplay between theory and evidence. Geddes warns against assuming an "accumulation of theory" that leads scholars to neglect the importance of building theory.9 She also ventures that the accumulation of theoretical knowledge will only occur if scholars are far more careful in using the "inadequate and fuzzy evidence" at hand to test theory. But what happens when confrontations between theory and evidence do not yield straightforward results?

Most of the authors here suggest that this is, in fact, the norm. This leads them to suggest a continual movement between theory and evidence that clarifies and refines theory. Theories from which we derive propositions are constructed based on our interaction with the empirical world. This interaction is central to the scientific enterprise—not a prescientific aspect of it. Furthermore, building theory is done against the backdrop of what we already know or think we know. Thus Timothy McKeown invokes the hermeneutic maxim: "no knowledge without foreknowledge," arguing that tests of theory are not "pointlike observations" but "iterated games" situated against existing theory.10 As McKeown notes:

If the extent of one’s knowledge about political science were thetables of contents of most research methods books, one wouldconclude that the fundamental intellectual problem facing thediscipline must be a huge backlog of attractive, highly developedtheories that stand in need of testing. That the opposite is morenearly the case in the study of international relations is broughthome to me every time I am forced to read yet another attemptto “test” realism against liberalism (McKeown, 24–25; see alsoRagin 2004, 127).


Substitute your field of choice and you are likely to be confronted with the same problem.

Learning from our cases in the process of building theory is most closely linked with qualitative tools and scholarship, but the distinction between theory and evidence also touches upon an important debate among quantitative scholars. It is a central part of Collier and Brady's statistical critique of mainstream quantitative methods, which contrasts Bayesians with frequentists. Mainstream regression analysis, which provides Geddes's underlying framework, begins from a frequentist approach that assumes we know nothing about the world. A model is elaborated, hypotheses specified and tested against the data (cases).11 The data are thus used to "construct the world." In this approach a "lack of context-specific knowledge means that the researcher cannot call on information from outside the sample being analyzed to supplement the information gleaned from a statistical analysis" (McKeown, p. 148). By contrast, in a Bayesian approach, prior knowledge about the world is assumed to be deeply important, and the data are important only to the extent that they modify our prior assumptions or theories. Thus cases (data) are weighed against prior understandings and not a blank slate (null hypothesis). Drawing on this logic, Collier and Brady illustrate how statistical theory incorporates such notions, imitating the logic of qualitative research in carefully selecting cases against existing theory (pp. 233–35). For example, when McKeown refers to qualitative researchers as folk Bayesians he is drawing attention to the fact that qualitative researchers usually situate their specific research within a larger research framework—that is, against conventional theoretical understandings—and select their cases with this in mind.12
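The Bayesian logic described here reduces to a short worked example. This is a minimal sketch of Bayes' rule with hypothetical numbers, not a procedure from Collier and Brady or McKeown: a strong prior drawn from existing theory is updated by evidence from one well-chosen case, rather than weighed against a blank slate.

```python
# Bayes' rule: posterior P(H|E) from a prior P(H) and the likelihoods of
# the evidence under H and under not-H. All numbers are hypothetical.

def bayes_update(prior, p_e_given_h, p_e_given_not_h):
    numerator = p_e_given_h * prior
    return numerator / (numerator + p_e_given_not_h * (1 - prior))

prior = 0.7  # belief in the hypothesis, informed by existing theory
# A 'least likely' case: evidence that is improbable unless H is true.
posterior = bayes_update(prior, p_e_given_h=0.8, p_e_given_not_h=0.1)
print(round(posterior, 3))  # 0.949: one carefully selected case moves belief a lot
```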

From a Bayesian perspective, the inductive refinement of theory is widely recognized as an essential part of research; it must be distinguished from data mining, where a hypothesis is deductively posited and data are selectively gathered to support it. In fact, Collier, Brady, and Seawright suggest that the indiscriminate injunction against inductive procedures, "[and] even worse, the misleading pretense that they are not utilized" in statistical theory, is a much larger problem in social science than the legitimate use of induction (see also Munck, pp. 119–20; Gerring, pp. 240–43). Such methods are widespread in more advanced statistical approaches (Collier, Brady, and Seawright, pp. 232–44). Specification searches, for example, rest on the logic of induction and are seen as a great innovation in statistical analysis. Moreover, rejecting these tools, particularly in qualitative research, may lead to boring political science (see also Gerring, pp. 240–43). The methodological norms that Geddes advocates, for example, may make it harder to develop interesting theory that scholars actually care about.13

Case studies, typologies, and descriptive work are invaluable in this effort (Gerring, p. 122). As we refine cases, we refine theories—not simply to fit the evidence, but to get a better understanding of cases and concepts, allowing us to construct better theory (Ragin 2004, p. 127). We begin with preexisting theories that will lead us to select cases based on substantive knowledge of the field rather than random selection. "[W]hat guides research is not logic but craftsmanship, and the craft in question is implicitly far more substantively rich than that of 'social scientist without portfolio'" (McKeown, p. 148; see also Geddes, p. 31). It is too bad that McKeown downplays logic here. For as Ragin and others argue, the best case studies most certainly follow the rules of logic—a point McKeown himself acknowledges when referring to qualitative researchers as folk Bayesians.

Consider Randall Schweller’s chapter in Bridges andBoundaries. Schweller uses World War II as a test case fora “systems theory of international relations” that focuseson the distinct features of a tripolar system (pp. 196–97).He selects World War II because “it contradicts many ofthe key behavioral predictions of realist balance of powertheory” (p. 182). Thus Schweller selects his case based onhis substantive knowledge of it and against existing theory.Recognizing that many states did not balance against threatsbetween World Wars I and II in accord with realist balance-of-power theory, Schweller puts forward new hypothesesthat suggest a modification of existing theory. As heexplains, “World War II is an important case for inter-national relations theory because it suggests ways to revise,reformulate, and amend realist balance of power theory‘for the purposes of (1) better explanations and more deter-minate predications, (2) refinement and clarification ofthe research program’s theoretical concepts, and (3) exten-sion of the research program to cover new issue areas”(p. 182).

This movement between theory and evidence even in the course of testing theory leads Ragin to suggest that a case (or cases) is best thought of as a working hypothesis to be refined in the course of the research. We begin against the backdrop of prior theory. In the course of the research, however, the boundary of the set of relevant cases "is shifted and clarified," and "usually this sifting of cases is carried on in conjunction with concept formation and elaboration" (Ragin 2000, p. 58). Schweller, for example, begins with World War II as a breakdown of the international system, but one that did not lead to balancing among great powers. His explanation turns on an emergent tripolar system, and he ends up classifying World War II as a case of tripolarity. This is all part of the refinement, elaboration, and development of theory. In short, while statistical (or variable-oriented) research turns to a predetermined set of cases to test a hypothesis, case-oriented research begins with a case (or cases), "treating them as singular whole entities purposefully selected, not as homogeneous observations drawn from a random pool of equally plausible selections" (Ragin 2004, p. 125).


Political science research too often "assumes a pre-existing population of relevant observations, embracing both positive and negative cases, and thus ignores a central practical concern of qualitative analysis—the constitution of cases" (Ragin 2004, p. 127). Cases are not prefabricated. They must be constructed and refined before theory can be tested: "in case oriented research, the bulk of the research effort is often directed toward constituting 'the cases' in the investigation and sharpening the concepts appropriate for the cases selected" (Ragin 2004, p. 127). In this sense, the statistical template is parasitic upon qualitative research that constitutes cases, or it relies on prefabricated data sets drawn from "out there" (see Collier, Brady, and Seawright, pp. 252–64), which themselves rest upon qualitative judgments at some level.

It is this random sampling of data to test theory that leads to standard injunctions about case selection proffered from a mainstream quantitative template: never select on the dependent variable; one cannot draw causal inferences from a single case study; and the like. Such advice may well apply to standard regression analysis. But against the backdrop of prior knowledge and theory, a single critical case can confirm or refute existing theory. Selection on the dependent variable may be useful in testing for necessary and sufficient causation, examining causal mechanisms within a particular case, or refining concepts and cases themselves (Collier and Brady, McKeown, Ragin). The question in all of these instances is: What knowledge are we trying to gain? Where are we in the research process? Seeing theory building as an iterated game (McKeown), as a dialogue between ideas and evidence (Ragin), as situated in an overall research program (the Elmans), or as about tradeoffs (Gerring and Collier and Brady), these authors offer a more capacious view of the scientific process. This understanding of the scientific process, moreover, will lead to better causal inference, particularly in the realm of causal complexity.

Getting a Handle on Causal Complexity

Qualitatively oriented political scientists have turned to complex causality as more representative of the social world than conventional statistical inference. These scholars are often engaged in macro inquiry, large comparisons, situating politics within time, or seeking to get at causal mechanisms within a case. Such qualitative studies have sought to maintain social science standards rather than simply abandoning the enterprise and turning to historical description. Recognizing that conventional statistical inference does not apply to some of the most important questions within political science, these authors illustrate how qualitative tools are invaluable in getting at causal complexity. Moreover, the most sophisticated quantitative scholars are following this logic.

Standard statistical views seek to discover probable causal relationships: if X occurs, then it is likely that Y will too. There may be other causes of Y, and X may even act with other causes, but X precedes Y more than most other causes. This view rests on two important assumptions: that these variables all act in an additive fashion (so that X acts the same with A as it does with B), and that the relationship is linear, so that how X acts with A is the same at T1 as at T2. It is this view of causation that leads to Geddes's advice about breaking theories down into smaller hypotheses, selecting cases, and using evidence to test theory. Most importantly, it is this view of causation that leads to the general insistence that increasing the number of cases (observations) tested against a hypothesis is the primary means of causal inference.

Attempts to isolate and measure a single variable—assuming uniformity across cases—may be justified in some circumstances, but they are unlikely to capture causal complexity.14 And as Ragin argues, causation is often the result of conjunctions and interactions between variables. That is, X causes Y in conjunction with A, B, and C. Or Z causes Y when A, B, and C are absent. Thus Ragin rejects the notion that a causal variable is likely to act the same across a wide range of cases. Rather, X is likely to act in conjunction with other variables or conditions. Attempts to assess X's impact across a wide range of cases in isolation (a standard regression model) are likely to miss the significance of X if it works in conjunction with other variables. Getting at complex causality begins with the qualitative construction of cases, defining both positive and negative cases and exploring the causal factors that produce these outcomes.
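Ragin's point can be illustrated with a toy calculation, assuming a handful of hypothetical binary cases constructed so that Y occurs exactly when X and A are both present. Examined in isolation, X looks unreliable; examined as a conjunction with A, it is perfectly consistent with the outcome.

```python
# Hypothetical (X, A, Y) cases in which Y = X and A. A variable-by-variable
# look at X misses the conjunctural pattern; the combination X*A finds it.

cases = [
    (1, 1, 1), (1, 0, 0), (0, 1, 0), (1, 1, 1), (0, 0, 0), (1, 0, 0),
]

# X alone appears in positive and negative cases alike.
x_only = [y for x, a, y in cases if x == 1]
print(sum(x_only) / len(x_only))  # 0.5: X 'works' only half the time

# The conjunction of X and A tracks the outcome perfectly.
conj = [y for x, a, y in cases if x == 1 and a == 1]
print(sum(conj) / len(conj))      # 1.0: X produces Y when A is present
```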

Qualitative methods are well developed for examining critical-juncture, path-dependent, and nonlinear causes, which are increasingly at the heart of comparative historical analysis, American political development, and international relations.15 Geddes, for example, recognizes the importance of causal complexity with critical-juncture and path-dependent arguments. But these forms of causation elude her quantitative template. She argues that "the big argument should be broken at its branching points, and hypotheses that purport to explain choices at different nodes should be tested on additional cases that fit appropriate initial conditions. Efforts should be made to see if hypothesized legacies also occur in these other cases" (p. 14; see also p. 23). Geddes's advice to break down big questions into smaller questions is driven by her view of causal inference: she wants to increase the number of observations against any particular hypothesis because she is working from a mainstream quantitative template. This seems a common-sense piece of advice.

But as McKeown and others argue, such a move represents the limits of a statistical world view. Moving away from big questions to those that can be tested within a statistical framework will not capture complex causality (McKeown, Ragin, Gerring, Collier and Brady).16 If the world is characterized by complex multi-causality, isolating individual variables and testing them across time will not capture how they interact; indeed, it is inconsistent with the ontology of complex causality.17

This is evident in what Robert Jervis has called system effects.18 How each piece of the puzzle is situated in a particular system matters greatly—far more than any individual piece. "The importance of interactions means that the common habit of seeing the impact of variables as additive can be misleading: the effect of A and B together is very different from the effect that A would have in isolation and added to the impact of B alone."19 In Politics in Time, Paul Pierson focuses on the temporal context that variables act within, arguing that it matters when something happens. Different outcomes may be the result of the conjunction of forces at a particular time, giving us critical junctures. How X, Y, and Z interact at T1 may be very different from how they interact at T2. A specific conjunction combined with other forces may lead to divergent outcomes, giving us branching causes. X combined with AB may lead to one pathway, while X combined with DE may lead to another. Or we may get path-dependent causes. The fact that X occurred at T1 makes it much more probable that Y and not Z will occur afterward. Wresting these variables from their temporal context means that we will miss these interactions. Moreover, a snapshot approach that focuses on micro-causes may fail to capture slow-building causal processes where X triggers A, B, and C, which results in Y. This process will only become evident over an extended period of time. In this sense, standard variable-centered analysis is the equivalent of a snapshot of the political world. Situating variables in their temporal context—getting a handle on the process and sequence of events—provides a moving picture rather than a snapshot.20
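A standard illustration of this kind of path dependence is the Polya urn, familiar from the increasing-returns literature Pierson draws on; the simulation below is my own sketch of that generic model, not code from any of these books. Each draw of a color adds another ball of that color, so small early accidents compound into divergent long-run outcomes: when something happens matters.

```python
# A Polya urn: start with one red and one blue ball; each draw returns the
# ball plus another of the same color. Increasing returns lock in early,
# essentially random events, so identical processes reach very different
# long-run shares.

import random

def polya_urn(steps, seed):
    rng = random.Random(seed)
    red, blue = 1, 1
    for _ in range(steps):
        if rng.random() < red / (red + blue):
            red += 1    # drawing red adds a red ball
        else:
            blue += 1
    return red / (red + blue)

# Same rules, different early histories, divergent outcomes.
print([round(polya_urn(10_000, seed), 2) for seed in range(5)])
```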

Process tracing, as detailed by George and Bennett in Bridges and Boundaries, is another powerful tool that helps illustrate complex causal mechanisms. By tracing the sequence of events, process tracing can explain how X causes Y or, far more likely, how Y is brought about by various combinations of Xs. Process tracing is rooted in case studies and midlevel theory—often engaging in typologies—that identify contingency and variation, balancing rich investigation with more expansive theorizing. Collier and Brady similarly suggest that thick analysis and causal process observations may be more fruitful in getting at causal mechanisms than thin analysis and data set observations (pp. 248–49, 252–64). As with other qualitative tools, process tracing can inductively get at causal mechanisms, compare causal pathways, and even permit causal inference on the basis of a few cases. Probabilistic views of causation, while making a nod to the value of process tracing under certain conditions, suggest that we do not need to get at the specifics of how X causes Y; it is enough to see that a general relationship holds. This leads KKV (King, Keohane, and Verba), for example, to suggest that causal effects are by definition prior to causal mechanisms.21 Such thinking is especially evident in the search for covering laws and general theories that discount tracing the links between each individual X and Y. Yet, even if such covering laws are true in probabilistic terms, they often operate at such a level of abstraction that they are not particularly insightful.

Consider the presidential election of 2000. Almost all political science theories predicted that a vice president running on peace and prosperity with a sound economy would win in a landslide.22 And, indeed, as a probabilistic law this would almost certainly hold true.23 It doesn't tell us much, though, does it? In the idiom of statistics, outliers and anomalous findings may well be the most interesting cases. Thus we are far more interested in the particular circumstances of the 2000 election: what variables were at work, in conjunction with what circumstances, to bring about this result? To get at these more interesting questions—why did the 2000 presidential election result in a perfect tie?—we must recognize particular circumstances and combinations of variables, not broad covering laws.

This is true even when a model is more accurate in predicting results.24 Even if we predict results based upon the scores of a wide variety of variables, we have not explained why this happened. Such data set observations may well predict that the scores of a set of variables—for example, incumbency, duration of the party in power, peace, economic growth, good news, inflation—are likely to result in, say, a small margin of victory for a sitting vice president.25 But this does not explain how these variables work to bring about a given result. This is, however, what causal process observations attempt to do. Take an illuminating example. In the middle years of the Clinton administration, Stephen Skowronek ventured that President Clinton, given the conjunction of forces that aligned in the Republican takeover of Congress in 1994 and his reelection, was a likely candidate for impeachment.26 Skowronek was theorizing based on similar past conjunctions—a typology of presidential regimes—that, to put it bluntly, looked preposterous in 1997. Of course, by 1998 such contingent theorizing looked extraordinarily impressive, far more so than the statistical models that generally predicted a smashing Gore victory in 2000.27

The statistical view of causation is being challenged from a quantitative angle as well. Collier and Brady, for example, note that a probabilistic view of causation is ill applied when we are dealing with deterministic views of causation (where X is either a necessary or sufficient cause). Yet deterministic views of causation are significant in political science research. If one is working with a deterministic view of causation, there is not a problem with selecting on the dependent variable or using a critical case to falsify a theory. If X is deemed necessary to bring about Y, the cases that provide the test for the hypotheses will necessarily include Y—giving us no variation on the dependent variable, but allowing for causal inference. Or if we find X but no Y in a crucial case, that is grounds for rejecting the hypothesis that X (as necessary and sufficient) causes Y, for we have X but not Y. As Geddes argues, "the analyst should understand what can and cannot be accomplished with cases selected" (p. 90).
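This deterministic logic is simple enough to state as an explicit check. The sketch below is a hypothetical illustration rather than any author's procedure: to test a claim that X is necessary for Y, select the positive cases (all with Y present, so no variation on the dependent variable) and verify that X appears in each; a single Y-without-X case falsifies the claim.

```python
# Necessity test on cases selected on the dependent variable: every case
# here has Y = 1, and the claim 'X is necessary for Y' survives only if
# no positive case lacks X. Cases are hypothetical.

positive_cases = [{"X": 1}, {"X": 1}, {"X": 1}]  # all cases where Y occurred

def necessity_holds(cases, condition="X"):
    """X is necessary for Y iff no Y-case lacks X."""
    return all(case[condition] == 1 for case in cases)

print(necessity_holds(positive_cases))               # True: claim survives
print(necessity_holds(positive_cases + [{"X": 0}]))  # False: one case refutes it
```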

Thus Collier and Brady insist that the contributions of traditional qualitative methods and qualitative critiques of statistical methods can be justified from the perspective of statistical theory. This terminology can be confusing and may belie the notion that we are moving closer to a shared language in political science. Collier and Brady, for example, argue that advanced statistical theory illustrates that statistical inference (Geddes) cannot provide an adequate foundation for political science. What Geddes and most of the authors here call statistics, Collier and Brady refer to as mainstream quantitative analysis (one branch of statistics based largely on regression analysis). Collier and Brady make a powerful case, noting that attempts to impose a mainstream quantitative template upon qualitative research (for example, Geddes and KKV) neglect crucial insights and developments from more quantitative approaches.

Collier, Brady, and Seawright note how statistical tools are being developed to better address these problems of causal inference and causal complexity—Bayesian statistical analysis being a leading example. Similarly, Ragin has used tools like Boolean algebra, which compares different combinations of variables to get at similar outcomes in multiple-conjunctural causation. Fuzzy-set methods are another tool that illuminates necessary and sufficient causation and combinations of variables, using probabilistic criteria. These methods often employ statistics and quantitative analysis, so they do not easily fit the quantitative-qualitative divide. Indeed, Collier, Brady, and Seawright insist that in assessing causation and "evaluating rival explanations, the most fundamental divide in methodology is neither between qualitative and quantitative approaches, nor between a small N and a large N. Rather, it is between experimental and observational data" (p. 230). In this, they offer a rather damning indictment of attempts to impose a mainstream quantitative template upon all political science research. They take aim, in particular, at KKV (and, by implication, Geddes). For while KKV purport to offer advice to researchers engaged in observational studies—political science is the business of observing the political world—their view of causation "takes as a point of departure an experimental model" (p. 232). Thus if Collier and Brady are correct, those engaged in small-N qualitative studies cannot solve the problem of causal inference by seeking to imitate large-N quantitative studies, as these problems are inherent in large-N studies as well (p. 233).
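To give a flavor of the fuzzy-set idea, the sketch below computes a sufficiency consistency score of the kind used in the fuzzy-set QCA literature that grew out of Ragin's work; I am not claiming this is the exact criterion in Fuzzy-Set Social Science, and the membership scores are hypothetical. Cases hold degrees of membership in a condition and an outcome, and the score gauges how nearly the condition is a fuzzy subset of the outcome, that is, how nearly it is sufficient.

```python
# Fuzzy-set sufficiency consistency: sum of min(x, y) over sum of x.
# A score near 1.0 means membership in X rarely exceeds membership in Y,
# i.e., X is (nearly) a subset of Y and thus (nearly) sufficient for it.
# Membership scores below are hypothetical.

x = [0.9, 0.7, 0.4, 0.8, 0.2]  # membership in the causal condition
y = [1.0, 0.8, 0.6, 0.7, 0.3]  # membership in the outcome

def sufficiency_consistency(xs, ys):
    return sum(min(xi, yi) for xi, yi in zip(xs, ys)) / sum(xs)

print(round(sufficiency_consistency(x, y), 3))  # 0.967: nearly sufficient
```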

This is what leads Collier and Brady to call for interpretable research design. Can the findings and inferences of a particular research design plausibly be defended? Such a defense must be justified in qualitative terms; as the failed quantitative template reveals, there is no methodological Rosetta stone that will solve all our problems. Yet it is fitting that much of the advice that these books proffer is of the nuts-and-bolts variety. They seek to illuminate the different tools at our disposal in seeking to understand politics. But which tools we use, when, and how we use them depend on the judgments of each scholar given the research he or she is engaged in. In the end, there is nothing to do but justify our research and defend the results. Good scholarship is based, if I may repair to the words of an earlier political scientist, upon "reflection and choice."28

Conclusion

This review essay has argued that these five books give us a better understanding of the scientific process, especially concept formation and refinement, theory building and exploration, and issues of causal complexity. In taking up these issues in great detail, these works rescue important aspects of the vocation of political science that are often treated as prescientific. In restoring these features to their proper place in the scientific enterprise, these works challenge the dominant view of science within the discipline. That is, political science's quest for a common vocabulary and set of standards cannot be found in statistical inference; if political science is to have a unified methodology, that methodology has qualitative foundations. This conclusion does not simply replicate the quantitative-qualitative divide. On the contrary, as I have attempted to show, many of the concerns associated with qualitative methods increasingly find expression in quantitative forms. Conversely, concept formation and theory building are just as important to quantitative researchers as to qualitative researchers. And while testing theory against evidence is one dimension of this process, it is not the whole of (political) science. Nor is it the truly scientific part.

By situating the empirical testing of theory as one part of an ongoing process, these works give us a more thorough, capacious, and practical view of science itself. What questions does our research seek to illuminate? The key is to recognize where we are in a given research program and begin with the tasks at hand. Quantitative tools may help us in these tasks, but forming concepts and building theories rely upon qualitative judgments. If we think of the scientific process as one that inherently involves tradeoffs, the choices we make need to be justified. Indeed, just where we cut into the hermeneutical circle is a qualitative judgment.

The exhortation to a more systematic and rigorous view of science has led political scientists to turn to quantification, rational choice theory, and ever more sophisticated mathematical modeling. Thus the dominant view of science within political science has been drawn from physics, chemistry, and, more and more, math.29 Yet this exhortation to be more scientific has now led to a rebellion against this narrow conception of science; these books illuminate how science itself has qualitative foundations. What was once effect has now become cause, perhaps revealing that political science itself moves in a complex and nonlinear way. Why should we expect the empirical world out there to be any different?

Notes1 King, Keohane, and Verba 1994, 4.2 Albeit in a “softened” form.3 Thus Geddes’ rational choice approach rests upon

the conventional quantitative template.4 This point is heavily debated as many criticize ratio-

nal choice theories as tautological. See Green andShapiro 1996.

5 Though see the excellent chapter by John LewisGaddis, who doubts that political scientists are infact more scientific. Elman and Elman 2003a,301–26.

6 Alternatively, these may be viewed as hierarchical;propositions are derivative of concepts, while re-search designs, going farther down the scale, arederivative of propositions. But even if this is philo-sophically so (as with questions of ontology andepistemology), it is not necessarily how we delveinto our research; nor does it capture the triangularand interrelated relationship among these tasks.

7 See also Sartori 1984.8 This also has the virtue of returning political theory

to the fold of political science. Indeed, it takes seri-ously the claim that Aristotle is the first politicalscientist.

9 While KKV and Geddes insist on a Popperian viewof “testing” theory, they miss the fact that KarlPopper’s scientific method began with an inductivecore of observation, from which theory and knowl-edge were built.

10 To Geddes and some of the authors in Bridges andBoundaries, theory building is really the fruit oftesting or confirming theory. Cases are important inconstructing theory and in providing the evidenceagainst which a theory may be tested. But these aretreated as prescientific or subordinate to the scien-tific goal of testing theory.

11 Geddes does note the importance of Bayesian ap-proaches, 114–17.

12 It might be more accurate to describe Bayesians asqualitative-quantifiers, as qualitative scholars wereutilizing such approaches long before Bayesiananalysis.

13 See especially Rogowski’s (1995) exchange withKKV in the APSR symposium, which is extended inRogowski’s chapter in Rethinking Social Inquiry. Seealso the exchange between Gerring 2003, Skow-ronek 2003, and Smith 2003a.

14 These assumptions were warranted in the study of American electoral behavior, where they were first used to great effect.

15 Mahoney 2003; Skowronek and Orren 2004; Jervis 1998 and 2000.

16 See also Mahoney and Rueschemeyer 2003 and Shapiro 2004.

17 Hall 2003. At a deeper level, scholars like KKV want to insist on a transcendent methodology without theories of knowing or assumptions about what the world is composed of. But just as concepts are related to propositions and particular research designs, methodology is derivative of epistemology, which is derivative of ontology. One cannot insist on a methodology without having epistemological and ontological assumptions.

18 Jervis 1998.
19 Jervis 2000, 95.
20 Pierson 2000a and 2004.
21 This becomes an argument between positivism and realism in the philosophy of science, which is beyond the scope of this essay.

22 See the various post-election ruminations and adjustments to models in PS: Political Science & Politics 34 (1): 1–44.

23 Though this would also miss the election of 1960.
24 See Ray Fair, http://fairmodel.econ.yale.edu/vote2004/index2.htm, which came closer to predicting the 2000 election. Yet his prediction illustrates the relevance of context. He predicted Gore winning the popular vote by a small margin and did not take up the Electoral College, presuming that a popular win nationwide would translate into an electoral victory. Yet, in an interesting way, there is no national popular vote. Yes, the total number of votes cast nationwide is counted and reported, but that is not how we actually vote as a nation, a point vividly brought home in the 2000 election. Rather than being conducted as a national plebiscite, voting is conducted on a state-by-state basis. These institutional rules not only influence how candidates campaign but have a profound impact on elections, particularly close ones. Thus one cannot assume in a close election that more votes nationwide will necessarily result in an Electoral College victory. Attending to this contextual issue may have led to a more accurate prediction.

25 This is what Ray Fair's model essentially predicted for the 2000 election. But Fair's methodology is more akin to Collier and Brady's discussion of statistical theory than to the standard quantitative template of KKV. Indeed, Fair's model seems a great example of Collier and Brady's insistence that advanced quantitative methods often use inductive procedures such as specification searches and the like (thus imitating traditional qualitative methods). Moreover, assigning weights to these different variables, particularly over time in refining the model based upon an interaction with the evidence, depends upon qualitative judgments. For Fair's discussion, see http://fairmodel.econ.yale.edu/RAYFAIR/PDF/2002DHTM.HTM.

26 Skowronek 1997.
27 The Fair model is, again, a general exception, although not necessarily very illuminating in explaining why this occurred.

28 See Hamilton, Madison, and Jay 1999, 1.
29 This is odd. It treats such established hard sciences as biology and geology as almost unscientific, as they do not follow the logic of physics and math. This is all the more peculiar in that mathematics, unlike biology, is not even concerned with the empirical world—the fundamental preoccupation of political science. The methodology on display in these sciences might be better suited to social scientists than are physics or chemistry.

References
Adcock, Robert, and David Collier. 2001. Measurement validity: A shared standard for qualitative and quantitative research. American Political Science Review 95 (3): 529–46.
Bennett, Andrew. 2003. A Lakatosian reading of Lakatos: What can we salvage from the hard core? In Progress in international relations theory, ed. Colin Elman and Miriam Fendius Elman, 455–94. Cambridge: MIT Press.
Bennett, Andrew, Aharon Barth, and Kenneth Rutherford. 2003. Do we preach what we practice? A survey of methods in political science journals and curricula. PS: Political Science and Politics 36 (3): 373–78.
Blyth, Mark. 2002a. Great transformations: Economic ideas in the twentieth century. New York: Cambridge University Press.
Blyth, Mark. 2002b. Institutions and ideas. In Theory and methods in political science, ed. David Marsh and Gary Stoker, 293–310. New York: Palgrave Macmillan.
Blyth, Mark. 2003. Structures do not come with an instruction sheet: Interests, ideas, and progress in political science. Perspectives on Politics 1 (4): 695–706.
Bridges, Amy. 2000. Path dependence, sequence, history, theory. Studies in American Political Development 14 (1): 109–12.
Caporaso, James A. 1995. Research design, falsification, and the qualitative-quantitative divide. American Political Science Review 89 (2): 457–60.
Ceaser, James W. 1990. Liberal democracy and political science. Baltimore: Johns Hopkins University Press.
Collier, David. 1995. Translating quantitative methods for qualitative researchers: The case of selection bias. American Political Science Review 89 (2): 461–66.
Eckstein, Harry. 1975. Case study and theory in political science. In Handbook of political science, vol. 7, ed. Fred I. Greenstein and Nelson W. Polsby, 79–138. Reading, MA: Addison-Wesley.
Elman, Colin, and Miriam Fendius Elman. 2003a. Lessons from Lakatos. In Progress in international relations theory, ed. Colin Elman and Miriam Fendius Elman, 21–68. Cambridge: MIT Press.
Elman, Colin, and Miriam Fendius Elman, eds. 2003b. Progress in international relations theory: Appraising the field. Cambridge: MIT Press.
Gerring, John. 2003. APD from a methodological point of view. Studies in American Political Development 17 (1): 82–102.
Green, Donald P., and Ian Shapiro. 1996. The pathologies of rational choice theory: A critique of applications in political science. New Haven: Yale University Press.
Hall, Peter. 2003. Aligning ontology and methodology in comparative research. In Comparative historical analysis in the social sciences, ed. James Mahoney and Dietrich Rueschemeyer, 373–404. New York: Cambridge University Press.
Hamilton, Alexander, James Madison, and John Jay. 1999. The Federalist papers. New York: Mentor Books.
Jervis, Robert. 1998. System effects: Complexity in political and social life. Princeton: Princeton University Press.
Jervis, Robert. 2000. Timing and interaction in politics: A comment on Pierson. Studies in American Political Development 14 (2): 93–100.
King, Gary. 1995. Replication, replication. PS: Political Science and Politics 28 (3): 443–99.
King, Gary, Robert O. Keohane, and Sidney Verba. 1994. Designing social inquiry: Scientific inference in qualitative research. Princeton: Princeton University Press.
King, Gary, Robert O. Keohane, and Sidney Verba. 1995. The importance of research design in political science. American Political Science Review 89 (2): 475–81.
Laitin, David D. 1995. Disciplining political science. American Political Science Review 89 (2): 454–56.
Laitin, David D. 2003. The Perestroikan challenge to social science. Politics and Society 31 (1): 163–85.
Lijphart, Arend. 1971. Comparative politics and the comparative method. American Political Science Review 65 (3): 682–93.
Mahoney, James. 1999. Nominal, ordinal, and narrative appraisal in macrocausal analysis. American Journal of Sociology 104 (4): 1154–96.
Mahoney, James. 2003. Strategies of causal assessment in comparative historical analysis. In Comparative historical analysis in the social sciences, ed. James Mahoney and Dietrich Rueschemeyer, 337–72. New York: Cambridge University Press.
Mahoney, James, and Dietrich Rueschemeyer, eds. 2003. Comparative historical analysis in the social sciences. New York: Cambridge University Press.
Mansfield, Harvey, Jr. 1990. Social science versus the Constitution. In Confronting the Constitution, ed. Allan Bloom, 411–36. Washington, DC: American Enterprise Institute Press.
Pierson, Paul. 2000a. Not just what, but when: Timing and sequence in political processes. Studies in American Political Development 14 (2): 72–92.
Pierson, Paul. 2000b. Increasing returns, path dependence and the study of politics. American Political Science Review 94 (2): 251–67.
Pierson, Paul. 2003. Big, slow-moving, and . . . invisible. In Comparative historical analysis in the social sciences, ed. James Mahoney and Dietrich Rueschemeyer, 177–207. New York: Cambridge University Press.
Pierson, Paul. 2004. Politics in time: History, institutions, and social analysis. Princeton: Princeton University Press.
Ragin, Charles C. 1987. The comparative method: Moving beyond qualitative and quantitative strategies. Berkeley: University of California Press.
Ragin, Charles C. 2004. Turning the tables: How case-oriented research challenges variable-oriented research. In Rethinking social inquiry: Diverse tools, shared standards, ed. David Collier and Henry Brady. Lanham: Rowman and Littlefield.
Rogowski, Ronald. 1995. The role of theory and anomaly in social-scientific inference. American Political Science Review 89 (2): 467–70.
Sartori, Giovanni. 1984. Guidelines for concept analysis. In Social science concepts: A systematic analysis, ed. Giovanni Sartori, 15–85. Beverly Hills, CA: Sage Publications.
Shapiro, Ian. 2004. Problems, methods, and theories in the study of politics, or: What's wrong with political science and what to do about it. In Problems and methods in the study of politics, ed. Ian Shapiro, Rogers M. Smith, and Tarek E. Masoud, 19–41. New York: Cambridge University Press.
Skocpol, Theda, and Margaret Somers. 1980. The uses of comparative history in macrosocial inquiry. Comparative Studies in Society and History 22 (2): 174–97.
Skowronek, Stephen. 1997. The politics presidents make: Leadership from John Adams to George Bush. Cambridge: Harvard University Press.
Skowronek, Stephen. 2003. What's wrong with APD? Studies in American Political Development 17 (1): 107–10.
Skowronek, Stephen, and Karen Orren. 2004. The search for American political development. New York: Cambridge University Press.
Smith, Rogers M. 1996. Science, non-science, and politics. In The historic turn in human sciences, ed. Terrance J. McDonald, 119–59. Ann Arbor: University of Michigan Press.
Smith, Rogers M. 2003a. Substance and methods in APD research. Studies in American Political Development 17 (1): 111–15.
Smith, Rogers M. 2003b. Progress and poverty in political science. PS: Political Science and Politics 36 (3): 395–96.
Tarrow, Sidney. 1995. Bridging the quantitative-qualitative divide in political science. American Political Science Review 89 (2): 471–74.
Thelen, Kathleen. 2000. Timing and temporality in the analysis of institutional evolution and change. Studies in American Political Development 14 (1): 101–8.
