

Research in Nursing & Health, 2000, 23, 246–255

© 2000 John Wiley & Sons, Inc.

Focus on Research Methods

Combining Qualitative and Quantitative Sampling, Data Collection, and Analysis Techniques in Mixed-Method Studies

Margarete Sandelowski*

University of North Carolina at Chapel Hill, School of Nursing, 7460 Carrington Hall, Chapel Hill, NC 27599

Received 12 July 1999; accepted 13 January 2000

Abstract: Researchers have increasingly turned to mixed-method techniques to expand the scope and improve the analytic power of their studies. Yet there is still relatively little direction on and much confusion about how to combine qualitative and quantitative techniques. These techniques are neither paradigm- nor method-linked; researchers’ orientations to inquiry and their methodological commitments will influence how they use them. Examples of sampling combinations include criterion sampling from instrument scores, random purposeful sampling, and stratified purposeful sampling. Examples of data collection combinations include the use of instruments for fuller qualitative description, for validation, as guides for purposeful sampling, and as elicitation devices in interviews. Examples of data analysis combinations include interpretively linking qualitative and quantitative data sets and the transformation processes of qualitizing and quantitizing. © 2000 John Wiley & Sons, Inc. Res Nurs & Health 23:246–255, 2000

Keywords: mixed-method studies; combination studies; qualitative and quantitative research; sampling; data collection; data analysis; triangulation

The idea of mixing qualitative and quantitative methods has stimulated much interest and debate (e.g., Greene & Caracelli, 1997a; Sandelowski, 1995; Swanson, 1992; Tashakkori & Teddlie, 1998). Researchers increasingly have used mixed-method techniques to expand the scope of, and deepen their insights from, their studies. As advocates of mixed-method research have argued, the complexity of human phenomena mandates more complex research designs to capture them. Despite this interest, there is still relatively little direction on and much confusion about how to accomplish mixed-method studies. In this paper, I discuss the where, what, why, and especially, the how of studies combining qualitative and quantitative techniques.

THE WHERE, WHY, AND WHAT OF COMBINATIONS

Combination or mixed-method studies are concretely operationalized at the technique level, or the shop floor, of research: that is, at the level of sampling, data collection, and data analysis. Mixed-method studies are not mixtures of paradigms of inquiry per se, but rather paradigms are reflected in what techniques researchers choose to combine, and how and why they desire to combine them.

*Professor.

At the Paradigm Level

As many scholars (e.g., Guba & Lincoln, 1994; Heron & Reason, 1997) have variously named and described them, paradigms of inquiry are worldviews that signal distinctive ontological (view of reality), epistemological (view of knowing and the relationship between knower and to-be-known), methodological (view of mode of inquiry), and axiological (view of what is valuable) positions. Indeed, paradigms of inquiry are best understood as viewing positions: ways, and places from which, to see. Because different paradigms, such as (neo- or post-) positivism, constructivism, critical theory, and participatory inquiry, entail contradictory viewing positions, combinations at the paradigm level are not true combinations, mergers, or reconciliations of worldviews, but rather the explicit framing of inquiry in two or more worldviews, each of which remains distinct from the other. That is, for example, it is not possible to combine, merge, or reconcile a view of reality as singular and objective (positivist) with views of it as multiple and individually or culturally constructed (constructivist), or as historically contingent (critical theory). A critical theorist will frame the issues around hormone replacement for midlife women differently from a neopositivist.1 That is, they will each see different things and, therefore, ask different questions that will, in turn, require the use of different methods and techniques to answer them. Such a paradigm combination may be used to elicit two or more perspectives on hormone therapy, or for the purpose of “initiation” (Greene, Caracelli, & Graham, 1989, p. 259), to surface paradoxes and contradictions that surround hormone therapy.
Accordingly, two or more paradigms of inquiry can be used to frame the same target phenomenon (in this case, hormone replacement). Yet, arguably, the positivist and the critical theorist may not really be studying the same phenomenon, because to see a phenomenon in a certain way is to change that phenomenon. As Pearce (1971, p. 2) observed, “a change of world view can change the world viewed.” Moreover, it is debatable whether one researcher can hold two different viewing positions, albeit at different times. That is, it is uncertain whether a researcher can conduct a project framed first, for example, in positivism and then later, in critical theory. Such worldviews, like all strongly held belief systems, are not easily exchanged.

As data collection and analysis techniques are not linked to paradigms (Berman, Ford-Gilboe, & Campbell, 1998; Sandelowski, 1995), both the researcher in a positivist viewing position and the researcher in a critical theory viewing position may use interviews and even the very same standardized measures to answer their questions, but they will employ these techniques and, more importantly, analytically treat their results differently. Indeed, arguably the key difference between “qualitative” and “quantitative” (when these words are used to signal different inquiry paradigms or research traditions) researchers is in their attitude toward and treatment of data. Accordingly, although techniques can be mixed, the resulting mix will reveal the researcher’s viewing position (or, in cases of mixed-up research, the researcher’s futile effort to mix the research equivalent of oil and water).

At the Method Level

Combinations at the method level can be used to expand the scope of a study as researchers seek to capture method-linked dimensions of a target phenomenon (Greene et al., 1989, p. 259). As Wolfer (1993) proposed, different aspects of reality lend themselves to different methods of inquiry. For example, the physiology of hormone replacement therapy for women lends itself to study via physiologic measures, while the debate around the benefits and liabilities of hormone replacement therapy lends itself to discourse, semiotic, and other cultural analyses. In the study I conducted with my colleagues of the transition to parenthood of infertile couples (Sandelowski, Holditch-Davis, & Harris, 1992), we used grounded theory methodology with naturalistic, ethological observation to capture different features of this phenomenon, including couples’ understandings of their struggle to become parents and their interaction with their babies. Grounded theory and ethological observation share a common naturalist imperative (that is, not to manipulate, or impose a priori conceptualizations on, the target phenomenon) that makes them compatible for use in one study. But grounded theory is particularly well-suited to theorizing human understandings, while ethological observation is especially well-suited to theorizing human behavior.

Although the various dimensions of phenomena may be method-linked in that different dimensions may be best captured by different methods, methods, like paradigms, are not specifically linked to techniques. Grounded theory may be generated using an array of qualitative and quantitative data collection techniques and sources. Moreover, methods are not uniformly linked to paradigms (Greene & Caracelli, 1997b). Grounded theory may be conducted in a neopositivist or constructivist paradigm (Annells, 1996). A neopositivist researcher conducting grounded theory believes in an external and objectively verifiable reality. In contrast, a constructivist conducting grounded theory believes in multiple, experientially based, and socially constructed realities. For the neopositivist, concepts emerge or are discovered, as if they were there to be found. The act of discovery is separate from that which is discovered. For the constructivist, concepts are made, fashioned, or invented from data. What constructivists find is what they made. For the constructivist, all human discovery is creation.

1 I use terms like “positivist” and “positivism” to quickly communicate complex and admittedly controversial ideas for the purposes of this paper, not to stereotype researchers or viewing positions.

At the Technique Level

The technique level of research is the site where combinations actually occur and is what is most often referred to in discussions of mixed-method research. Such combinations entail the use of sampling, data collection, and data analysis techniques commonly (although not necessarily) conceived as qualitative or quantitative. Because techniques are tied neither to paradigms nor to methods, combinations at the technique level permit innovative uses of a range of techniques for a variety of purposes. Three purposes include (a) triangulation, to achieve or ensure corroboration of data, or convergent validation; (b) complementarity, to clarify, explain, or otherwise more fully elaborate the results of analyses; and (c) development, to guide the use of additional sampling, and data collection and analysis techniques (Greene et al., 1989, p. 259). The rest of this paper is devoted to illustrating how such purposes can be achieved.

THE HOW OF COMBINATIONS

Mixed-method studies entail concrete operations at the technique level of research by which “qualitative” and “quantitative” techniques are used together and either remain distinct design components, or are explicitly integrated (Caracelli & Greene, 1997). As shown in Figure 1, either qualitative or quantitative approaches to sampling, data collection, and data analysis may prevail or have equal priority in a study, and both qualitative and quantitative approaches may be used sequentially, concurrently and iteratively, or in a sandwich pattern.

Combining Sampling Strategies

One of the most important features distinguishing what is commonly referred to as qualitative from quantitative inquiry is the kind of sampling used. While qualitative research typically involves purposeful sampling to enhance understanding of the information-rich case (Patton, 1990), quantitative research ideally involves probability sampling to permit statistical inferences to be made. Although purposeful sampling is oriented toward the development of idiographic knowledge (from generalizations from and about individual cases), probability sampling is oriented toward the development of nomothetic knowledge, from generalizations from samples to populations. Notwithstanding these key differences, purposeful and probability sampling techniques can be combined usefully.

Criterion sampling. For example, in design template 2 shown in Figure 1, in which the use of quantitative techniques precedes the use of qualitative techniques, research participants’ scores on the instruments used to collect data in the quantitative portion of the study can be used to initiate a criterion sampling strategy. Criterion sampling is a kind of purposeful sampling of cases on preconceived criteria, such as scores on an instrument. Cases may be chosen because they typify the average score; this kind of sampling may also be referred to as typical case sampling. Cases may be chosen because they exemplify extreme scores; this kind of sampling may also be called extreme or deviant case sampling. (The term deviant here refers to any departure from a specified norm.) Such cases are highly unusual. Or, cases may be chosen because they show a variable intensely, but not extremely; this kind of sampling may also be referred to as intensity sampling (Patton, 1990, pp. 182–183).
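To make the three criterion-sampling variants concrete, the following sketch (in Python; the article prescribes no software, and the participant IDs, scores, and distance cutoffs are all invented for illustration) partitions participants into typical, extreme, and intensity cases by their distance from the mean instrument score:

```python
import statistics

# Hypothetical instrument scores for 12 participants (invented data).
scores = {"P01": 50, "P02": 52, "P03": 49, "P04": 90, "P05": 12,
          "P06": 51, "P07": 70, "P08": 48, "P09": 72, "P10": 50,
          "P11": 95, "P12": 30}

def criterion_sample(scores, typical_band=5, extreme_band=25, intense_band=(15, 25)):
    """Partition participants into typical, extreme, and intensity cases
    by their absolute distance from the mean score. The band cutoffs are
    arbitrary illustrative choices, not values from the article."""
    mean = statistics.mean(scores.values())
    typical, extreme, intense = [], [], []
    for pid, score in scores.items():
        distance = abs(score - mean)
        if distance <= typical_band:
            typical.append(pid)    # typifies the average score
        elif distance >= extreme_band:
            extreme.append(pid)    # extreme or deviant case
        elif intense_band[0] <= distance < intense_band[1]:
            intense.append(pid)    # shows the variable intensely, not extremely
    return typical, extreme, intense
```

Cases falling between bands are simply not selected, which mirrors the purposeful (rather than exhaustive) character of the strategy.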

Researchers using scores on instruments as the criterion for purposeful sampling may wish to collect more data (e.g., via interviews or observations) from the chosen participants for the purpose of triangulation: that is, to discern whether a typical, extreme, or intense case of something on a standardized test is also a typical, extreme, or intense case using other data collection techniques. Or, researchers may use the criterion of scores (typical, extreme, or intense scores) for the purpose of complementarity: that is, to find out more about what makes a case typical, extreme, or intense. In the process of sampling for complementarity, researchers will inevitably also obtain information on convergent validity and, thereby, also achieve the purpose of triangulation. That is, in the process of obtaining fuller information on why persons scored as they did, they will also obtain information on whether persons look the same on interview or observation as they did on the quantitative measure of a target phenomenon. Researchers will sample participants in scoring categories until the point of informational redundancy; that is, until they have collected information from enough cases in each scoring category to allow them to draw conclusions about the validity of the result (if they are seeking convergent validity), or to elaborate on and clarify the result. Sampling on the basis of scores is an especially useful strategy in clinical trials of interventions to validate or clarify their effects on different participants.

FIGURE 1. Hybrid, combination, or mixed-method design templates.

Random purposeful sampling. Another example of the combined use of probability and purposeful sampling is random purposeful sampling, which may also be used in design template 2 or 3. This sampling strategy is employed when there is a very large pool of potentially information-rich cases and no obvious reason to choose one case over another. For example, in a clinical trial of an intervention to reduce pain with 500 people, 300 of them scored as having less pain on a standardized measure of pain, 150 scored as having as much pain as they had before the treatment, and 50 scored as having more pain. These numbers are too large for any purposeful sampling strategy oriented toward the intensive study of the particulars of each case. Accordingly, cases can initially be chosen from each of these three scoring groups (the criterion for sampling here is again the scores) by assigning all the cases in each group a number from a random number table and then drawing them in turn. Each case drawn must meet the minimum criterion in all purposeful sampling: namely, that it is an information-rich case. And, only so many cases are drawn in each scoring group as will permit researchers to make the kinds of inferences they wish to make: for example, concerning convergent validity, or fuller description or explanation of cases.
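The random draw described above can be sketched as follows (a minimal illustration in Python; the group sizes mirror the hypothetical pain trial, but the case IDs, the draw size of 5 per group, and the seeded generator standing in for a random number table are all assumptions):

```python
import random

def random_purposeful_draw(groups, n_per_group, seed=0):
    """Randomly order each scoring group and draw n candidate cases per
    group. Each drawn case must still be screened for information
    richness; randomness here only breaks ties among eligible cases."""
    rng = random.Random(seed)  # seeded generator stands in for a random number table
    draws = {}
    for name, cases in groups.items():
        shuffled = list(cases)
        rng.shuffle(shuffled)
        draws[name] = shuffled[:n_per_group]
    return draws

# Hypothetical scoring groups from the pain-trial example (IDs invented).
groups = {
    "less_pain": [f"L{i:03d}" for i in range(300)],
    "same_pain": [f"S{i:03d}" for i in range(150)],
    "more_pain": [f"M{i:03d}" for i in range(50)],
}
draws = random_purposeful_draw(groups, n_per_group=5)
```

The draw is only the first step: in the procedure the article describes, each selected case is then vetted (and replaced if necessary) until the drawn set consists of information-rich cases.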

Stratified purposeful sampling. Another kind of combination of sampling techniques is stratified purposeful sampling, where the researcher wants to ensure that certain cases varying on preselected parameters are included. This strategy can potentially be used in any of the design templates shown in Figure 1. Although this kind of sampling is, from a probability sampling standpoint, statistically nonrepresentative (Trost, 1986), it is, from a purposeful sampling standpoint, informationally representative. Each case represents a prespecified combination of variables, the distinctive confluence of which is the focus of study. The researcher wants to explain how these variables come together to make a case the case that it is.

For example, a researcher studying caregiving may wish to ensure that s/he has 1–2 cases that minimally to maximally vary on four parameters: type of caregiving dyad, role of each member of the dyad, mortality of illness, and transmissibility of illness. Caregiving dyads include parents caring for their children, adult or minor children caring for their parents, husbands caring for their wives, wives caring for their husbands, heterosexual partners caring for each other, male and female homosexual partners caring for each other, brothers caring for sisters, sisters caring for brothers, grandparents caring for grandchildren, grandchildren caring for grandparents, friends caring for friends, strangers caring for strangers, paid caregivers caring for patients, and so on. In each of these dyads, one or the other member can be the caregiver or the cared-for; accordingly, wives care for or are cared for by husbands. Finally, illnesses can be classified as mortal or nonmortal, and transmissible or nontransmissible.

Accordingly, a researcher may want to have the most variation s/he can achieve on each of these four parameters, or limit variation on one or more of these parameters. In a maximally varied form of purposeful stratified sampling, the researcher might want to have 1–2 cases of every combination of variations s/he can find. The researcher may want to have 1–2 cases of a husband caring for his wife, who suffers from a mortal and nontransmissible illness; 1–2 cases of a husband caring for his wife, who suffers from a nonmortal and nontransmissible illness; 1–2 cases of a husband caring for his wife, who suffers from a mortal and transmissible illness; and, 1–2 cases of a husband caring for his wife, who suffers from a nonmortal and transmissible illness. The researcher may also want to have cases of wives caring for husbands, which show the variations just itemized on illness mortality and transmissibility. In addition, the researcher may want to include these variations for as many of the other kinds of caregiving dyads previously listed as s/he can. Or, the researcher may want to restrict sampling only to one kind of caregiving dyad (e.g., parent–child caregivers) varying on the parameters of member role and type of illness.

In short, in stratified purposeful sampling, researchers want to fill each sampling cell with 1–2 cases that exemplify the kinds and degree of variation they surmise are relevant to understanding a target phenomenon, such as caregiving. This kind of sampling typically involves empirical, as opposed to theoretical, cases that represent combinations of demographic (e.g., age, sex, income, and education) and other low-inference variables. Low-inference variables are variables concerning which there is likely to be consensus about what they are. That is, there is likely to be no disagreement on whether a case represents a wife caring for her sick husband. In contrast, an empirical case of Mrs. Jones caring for her husband, Mr. Jones, who is dying from cancer, might also represent a theoretical case of “reluctant caregiving,” as opposed to “engaged caregiving.” Although researchers can use a stratified purposeful sampling strategy to fill sampling cells with such theoretical cases, they must first have made the case for such theoretical cases. Figure 2 illustrates a stratified purposeful sampling plan restricted to one kind of caregiving dyad (parent–child), but varying on member role and type of illness. A total sample size of 16 would be required to fill each sampling cell with one case. Figure 2 also illustrates why certain sampling cells may be difficult or impossible to fill; few parent–child dyads will be comprised of a minor child caring for her/his own minor ill child.
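A sampling grid of this kind is just the Cartesian product of the chosen parameters. The sketch below (Python) enumerates one way to obtain 16 cells for a parent–child plan; note that the four binary parameters used here are an assumption for illustration, since the exact cell definitions live in the article's Figure 2:

```python
from itertools import product

# Assumed parameters for a restricted parent-child plan (labels are
# illustrative; the article's Figure 2 defines the actual cells).
caregiver_role = ["parent", "child"]              # who gives care
child_age = ["adult", "minor"]                    # age of the child member
mortality = ["mortal", "nonmortal"]               # illness mortality
transmissibility = ["transmissible", "nontransmissible"]

# Each tuple is one sampling cell to be filled with 1-2 information-rich cases.
cells = list(product(caregiver_role, child_age, mortality, transmissibility))
```

With one case per cell, the total sample size equals the number of cells, matching the article's count of 16 for four two-valued parameters.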

Combining Data Collection Techniques

Another set of concrete operations at the technique level of research entails the combined use of data collection techniques that are commonly (but not necessarily) associated with either qualitative or quantitative research, such as open-ended and unstructured interviewing and structured questionnaires, respectively. (Some would argue that all data collection techniques in human subjects research, including instruments, are “qualitative” in that they involve verbal data, which are only later transformed into numbers.) Researchers’ viewing positions will influence how they use these techniques. For example, for many researchers in a positivist viewing position, data collection techniques vary in the degree to which they yield objective data. Observations of behavior are generally thought to be more objective than self-reports of behavior. When a target phenomenon can be observed, observation is often the criterion measure against which self-report is judged. Accordingly, researchers in a positivist viewing position will often seek “more objective” measures to evaluate the validity of “more subjective” measures. Moreover, whenever there is a discrepancy between what participants do and say they do, what observers see participants doing is generally considered a more accurate reflection of reality than self-report. The self-report is typically called into question.

In contrast, from a constructivist viewing position, there is no hierarchy of data collection techniques whereby one technique is judged to yield more objective (or more accurate or truer) data than another. If the results from two data collection techniques do not converge, these results are treated as interpretive opportunities: either to show that no true discrepancy exists or to suggest the phenomenon that accounts for the apparent discrepancy. For example, Silverman (1993) demonstrated how a discrepancy between parents’ accounts of their behavior and observations of their behavior in an examining room could be accounted for without resorting to judgments about these parents as unreliable informants. Indeed, he advised against the kind of “simple-minded ‘triangulation’” whereby one kind of data is used simply to corroborate or refute the results of another without attention to the “embedded, situated nature of accounts” (p. 200). Helitzer-Allen and Kendall (1992) described how “discrepancies” between data derived from surveys and laboratory tests were resolved by the analysis of ethnographic interviews, which provided them the opportunity to deepen their understanding of the use of antimalarial chemoprophylaxis during pregnancy. This study also shows the inherently multi-technique nature of ethnographic studies.

The Varied Uses of Instruments

Instruments can be used in combination studies to fulfill a variety of objectives. They can be used to provide fuller description of cases in areas suggested by interview or observation data, as, for example, when interview data suggest that participants are depressed and the researcher decides to administer a depression inventory. Each participant’s score may be compared to the normative score for that instrument or to the scores of other participants in the study. Instruments are used here to make case-bound or idiographic generalizations (that is, generalizations about each participant in the study), not to make nomothetic generalizations, or generalizations from the study sample to populations. This use of instruments exemplifies design template 1 shown in Figure 1, where the decision to use an instrument is made on analytic grounds developed from data collected in a qualitative study. This use also exemplifies the “development” purpose for mixed-method research, whereby the results of using one kind of data collection technique inform or guide the use of another kind. Researchers may also use instruments for description (as opposed to statistical inference) concurrently with interviews or observations for the purpose of complementarity, as exemplified in design template 1a, or as a development bridge between the qualitative exploration of a target phenomenon and the further explanation of this phenomenon, as shown in design template 6. The development bridge here, the instrument, directs researchers toward whom they will sample and what data they will collect in the second qualitative portion of a study.

FIGURE 2. Illustration of stratified purposeful sampling plan.

Another use of instruments that fulfills the purpose of development is to guide purposeful sampling. The results of instruments can direct researchers more precisely to the kinds of participants they may wish to recruit and the nature of information they will want to obtain from them. As described previously, instrument scores can be the basis for sampling in a follow-up qualitative study. Results from a survey can direct researchers toward the most fruitful variables and associations to examine further both qualitatively, as shown in design template 3 in Figure 1, or first qualitatively and then again, quantitatively, as shown in design template 7 in Figure 1. Ornstein and his colleagues (1993) initiated focus groups after learning from the results of a telephone survey that there were three groups of “nonresponders” to a reminder letter for cholesterol screening. The focus groups were organized to elicit more information concerning the lack of response. This information could then serve as the basis for revising the reminder protocol and then qualitatively or quantitatively evaluating its effectiveness in reducing the numbers of nonresponders. For example, the new reminder protocol could be tested in a clinical trial and the scores on the instruments used to appraise the protocol could then be used as the basis for criterion sampling to further explain those scores. This combined example illustrates how a quan → Qual study (survey → focus group) can lead to a Quan → qual (experiment → criterion sampling + interviews) study. Together, these studies can be represented as quan → Qual → Quan → qual.

Instruments can also be used as elicitation devices in interviews concerning both the target phenomenon and the instrument itself. For example, a researcher may use participants’ responses on a depression inventory to trigger more thoughts and feelings about depression in individual interview sessions. Participants often require some assistance to articulate inchoate thoughts or to speak the unspeakable. The use of an instrument as an elicitation device can serve this purpose, just as the use of projective techniques (e.g., Ornstein et al., 1993) can help participants language, and focus on, a target experience.

Researchers may also ask participants to comment specifically on their view of each item on an instrument in order to appraise its content and construct validity. In the infertility study mentioned previously (Sandelowski et al., 1992), we used a symptom inventory to appraise the type and intensity of symptoms women and men were experiencing as they awaited birth or adoption. One of the symptoms listed was “headache.” One waiting father spontaneously mentioned to me that he had experienced a headache from hitting his head on a pipe doing a home repair, but, on the symptom inventory, did not mark headache as one of his symptoms. As this man explained it, he surmised that we were not interested in symptoms that derived from home repairs or other events apparently unconnected to waiting for a child. What this man clarified for me was all of the judgments participants make when they encounter any item on an instrument of which researchers are completely unaware. Accordingly, researchers may want to explicitly use instruments as data elicitation devices to determine exactly what participants see and, therefore, respond to in each item. Because participants may also offer reasons for responding as they did, in addition to explanations for how they and persons like them are likely or ought to respond, researchers may obtain information not only on the content validity of an instrument, but also on its construct validity.

Instrument scores can also be used for qualitative profiling. Qualitative profiling, which is discussed further in the section on "qualitizing," entails theoretically grouping or typing participants according to their scores on two or more instruments. These typologies can be used to guide theoretical sampling in grounded theory studies designed to develop theory.

Combining Data Analysis Techniques

Qualitative and quantitative data sets can be linked, preserving the numbers and words in each data set. Or, these data can be transformed to create one data set, with qualitative data converted into quantitative data, or quantitative data converted into qualitative data (Caracelli & Greene, 1993).

Linking the results of qualitative and quantitative analysis techniques is accomplished by treating each data set with the techniques usually used with that kind of data; that is, qualitative techniques are used to analyze qualitative data and quantitative techniques are used to analyze quantitative data. For example, constant comparison, qualitative content, and narrative analysis techniques are used to analyze interview data, whereas one or more statistical techniques are used to analyze data from instruments. The results of the qualitative analysis of qualitative data and of the quantitative analysis of quantitative data are then combined at the interpretive level of research, but each data set remains analytically separate from the other.

In contrast to the process of linking data are treatments of data that transform one kind of data into another kind to create one data set. Tashakkori and Teddlie (1998, p. 126) referred to the conversion of qualitative data into quantitative data as "quantitizing," and to the conversion of quantitative data into qualitative data as "qualitizing."

Quantitizing. Quantitizing refers to a process by which qualitative data are treated with quantitative techniques to transform them into quantitative data. The researcher must first reduce verbal or visual data (e.g., from interviews, observations, artifacts, or documents) into items, constructs, or variables that are intended to mean only one thing and that can, therefore, be represented numerically. One of the most commonly used examples of this process (and of design template 4 in Figure 1) is the creation of items for an instrument from interview data. Although the researcher's intention is to "preserve qualitative meaning" in the development of instruments (Fleury, 1993), any one item can have only one meaning if it is to function well psychometrically. A valid item in an instrument cannot be meaningful. That is, it cannot be full of meaning; it cannot mean different things at the same or different times to the same or different people.

Another example of a quantitative transformation aimed at producing a one-meaning unit is the reduction of narrative data to a variable that can be correlated with other variables. In a study exemplifying mixed-method design template 4 in Figure 1, in which quantitative methods prevail but follow and depend on the results of qualitative methods, Borkan, Quirk, and Sullivan (1991) elicited narrative data from elders concerning how they viewed the hip fractures they had suffered. Using a form of narrative analysis, they determined that there were two major narrative emplotments of hip fracture: the mechanical and the organic. They conducted a series of reliability tests to ensure that each narrative was consistently classified as mechanical or organic. They then grouped elders according to whether they had told a largely mechanical or organic story about their hip fracture, and conducted a correlation study to determine whether and in what direction the emplotments-turned-into-variables (that is, mechanical narrative and organic narrative) predicted functional outcomes. They found that the organic story predicted poorer outcomes than the mechanical story.
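
The logic of this transformation can be sketched in a few lines of Python. The data below are fabricated for illustration; only the two narrative categories come from the study, and the correlation step stands in for whatever statistical procedure the authors actually used:

```python
# A minimal sketch of quantitizing, using hypothetical data: narratives
# already classified as "mechanical" or "organic" are converted into a
# 0/1 variable and correlated with an invented functional-outcome score.
from math import sqrt

# (narrative type, functional outcome score) -- fabricated for illustration
elders = [
    ("mechanical", 82), ("mechanical", 75), ("mechanical", 90),
    ("organic", 55), ("organic", 60), ("mechanical", 78),
    ("organic", 48), ("organic", 66),
]

# Quantitize: each narrative emplotment becomes a one-meaning variable
x = [1 if kind == "organic" else 0 for kind, _ in elders]
y = [score for _, score in elders]

def pearson(xs, ys):
    """Pearson correlation, computed from scratch for transparency."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(xs, ys))
    sx = sqrt(sum((a - mx) ** 2 for a in xs))
    sy = sqrt(sum((b - my) ** 2 for b in ys))
    return cov / (sx * sy)

# a negative r means organic stories pair with poorer outcomes
r = pearson(x, y)
```

With a binary predictor, this Pearson coefficient is the point-biserial correlation; the point is simply that once each narrative has been reduced to a single-meaning code, it can enter any standard statistical procedure.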

This study is an especially good illustration of the axiom that methods are not uniformly linked to paradigms. Borkan and his colleagues used a narrative methodology for positivist ends. They sought to create a predictor variable by reducing the interview data they had obtained to units with only one meaning and then appropriately used reliability measures to validate them. In contrast, a constructivist using narrative methodology would emphasize the meaningfulness of the data and would treat those data as inherently revisionist and, thus, not amenable to reliability testing.

Quantitative treatments of qualitative data can also be used to extract more information from qualitative data, and to confirm researchers' impressions from these data. For example, from interview data with infertile couples, we suspected an association between physicians' encouragement to have an amniocentesis performed and couples' decision to undergo the procedure (Sandelowski, Harris, & Holditch-Davis, 1991). Accordingly, in order to validate our impression, we created a data display to discern the congruence between decision and advice by counting the numbers of couples who were: (a) encouraged to have the procedure and had it (n = 7); (b) encouraged to have the procedure but did not have it (n = 3); (c) not encouraged or discouraged to have the procedure and did not have it (n = 7); and (d) not encouraged or discouraged to have the procedure but did have it (n = 3). The display visually confirmed our impression of an association between the decision to have the procedure and the physician's advice. We then analyzed these data using Fisher's exact probability test, which showed a nonsignificant statistical relationship (N = 20; p = .078) that might reach significance in another study with a larger sample size.
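
The counting display and exact test just described can be sketched in pure Python. Only the four cell counts come from the study; the test implementation is a generic hypergeometric version written for illustration, not the authors' software:

```python
# Cross-tabulate advice against decision, then run Fisher's exact test
# on the resulting 2x2 table. Cell counts are taken from the text.
from math import comb

# rows: encouraged / not encouraged; columns: had procedure / did not
table = [[7, 3], [3, 7]]

def fisher_exact_two_sided(t):
    """Two-sided Fisher's exact p for a 2x2 table (hypergeometric)."""
    (a, b), (c, d) = t
    n = a + b + c + d
    row1, col1 = a + b, a + c
    denom = comb(n, col1)

    def p_of(k):  # probability of a table whose (1,1) cell equals k
        return comb(row1, k) * comb(n - row1, col1 - k) / denom

    p_obs = p_of(a)
    lo = max(0, col1 - (n - row1))
    hi = min(row1, col1)
    # two-sided: sum over all tables no more probable than the observed one
    return sum(p_of(k) for k in range(lo, hi + 1) if p_of(k) <= p_obs + 1e-12)

p = fisher_exact_two_sided(table)
```

With these counts the conventional two-sided p is about .179, while the point probability of the observed table is about .078; reported p values can differ across packages depending on the convention used (one- vs. two-sided, point vs. tail probability). Either way the result is nonsignificant at the .05 level.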

Qualitizing. Qualitizing refers to a process by which quantitative data are transformed into qualitative data. As in quantitizing, qualitizing can be used to extract more information from quantitative data, or to confirm interpretations of them.

An example of this process is the use of scores on instruments to profile participants (to create verbal portraits or typologies of them) around target phenomena. Tashakkori and Teddlie (1998, pp. 130–133) described five kinds of narrative or qualitative profiling: modal, average, comparative, normative, and holistic. A modal profile is a verbal description of a group of participants around the most frequently occurring attributes. For example, if most of the participants in a group are in their 50s, the group can be described as middle-aged. If most of the participants in a group score in a certain range on a depression inventory, they can be described as mildly or severely depressed. Similarly, an average profile is a verbal description of a group of participants around the mean of an attribute. Although not usually identified in research reports as qualitative profiling, this kind of qualitizing is commonly done to describe samples and to interpret research results.
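
Modal and average profiling amount to simple summary-then-label operations, as this sketch shows. The scores are fabricated, and the age bands and depression cutoffs are illustrative assumptions, not taken from any actual instrument:

```python
# A minimal sketch of modal and average profiling (qualitizing):
# summarize scores, then translate the summary into a verbal label.
from statistics import mean, mode

ages = [52, 57, 51, 34, 58, 55, 49, 53]
depression_scores = [18, 22, 19, 25, 17, 21, 20, 23]

def age_band(age):
    return f"{(age // 10) * 10}s"

def depression_label(score):  # hypothetical inventory cutoffs
    if score < 10:
        return "not depressed"
    if score < 20:
        return "mildly depressed"
    return "moderately depressed"

# Modal profile: describe the group by its most frequent attribute
modal_band = mode([age_band(a) for a in ages])

# Average profile: describe the group around the mean of an attribute
average_label = depression_label(mean(depression_scores))
```

Here the modal profile would describe the group as being in their 50s, and the average profile as moderately depressed, exactly the kind of qualitizing routinely done in sample descriptions.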


A comparative profile is a verbal description based on the comparison of participants to each other on one or more sets of scores. In contrast, a normative profile is a verbal description based on the comparison of participants' scores to the normative scores for one or more instruments. Both of these kinds of qualitative profiling depend on the results of quantitative cluster analyses, which entail a set of statistical techniques to identify homogeneous groups of subjects, or distinctive subgroups in which subjects may be placed because they are more similar to each other than to subjects in other subgroups. For example, Rothert and her colleagues (1990) identified four groups of women on the basis of their responses to eight scenarios concerning hormone replacement therapy. That is, they created a typology of information use that showed how differently people can respond to the same sets of information, and that these differences could be clustered in ways that distinguished women from each other. After they quantitatively identified these groups, they interviewed three women from each of these groups for further clarification and to confirm their typology. Their work is an example of design template 2 in Figure 1.
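
The clustering step can be illustrated with a toy k-means over one-dimensional scores. Everything here is fabricated, and a real comparative or normative profiling study would use a dedicated statistical package and validated instruments; the sketch only shows the idea of letting the scores themselves propose homogeneous subgroups:

```python
# A toy k-means sketch of the cluster-analysis step: group participants
# whose scores are more similar to each other than to other subgroups.
def kmeans_1d(values, centroids, iterations=10):
    """Deterministic 1-D k-means with caller-supplied starting centroids."""
    for _ in range(iterations):
        clusters = {c: [] for c in centroids}
        for v in values:
            # assign each score to its nearest current centroid
            nearest = min(centroids, key=lambda c: abs(v - c))
            clusters[nearest].append(v)
        # move each centroid to the mean of its assigned scores
        centroids = [sum(vs) / len(vs) if vs else c
                     for c, vs in clusters.items()]
    return sorted(centroids)

# hypothetical summary scores on an information-use measure
scores = [12, 14, 11, 13, 41, 44, 39, 42]
centers = kmeans_1d(scores, centroids=[10, 50])
# two homogeneous subgroups emerge, ready to be named and profiled
```

The two resulting centers mark the subgroups that a researcher would then qualitize: interview members of each cluster, explicate what makes them alike, and choose names that capture those features.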

Qualitizing entails further studying these theoretical groups to explicate the features of members in each group that make them like each other, and of the groups themselves that make them different from each other, and to choose a name that will capture these features. This kind of investigation lends itself especially well to subsequent grounded theory study, as the clusters can provide the basis for theoretical sampling and for typology development. Researchers can then validate these groupings with further qualitative or quantitative study, as exemplified in design templates 5 or 7.

Finally, a holistic profile is a verbal description based on impressions rather than specific attributes or scores. Holistic profiles may also be composed of various combinations of modal, average, comparative, and normative profiles.

CONCLUSION

Mixed-method research is a dynamic option for expanding the scope and improving the analytic power of studies. When done well, mixed-method studies dramatize the artfulness and versatility of research design. Mixed-method research operationally includes an almost limitless array of combinations of sampling, data collection, and data analysis techniques, only a few of which could be described here. Indeed, closer examination of research reports will show that researchers have long been using combinations of qualitative and quantitative techniques, but the qualitative techniques have not been featured, have been inappropriately or inadequately used or explained, or have been explicitly minimized. Sutton (1997) lauded the "virtues of closet qualitative research," whereby researchers "conceal or downplay" (p. 97) the use of qualitative techniques in order to have their research reports published!

The combined use of qualitative and quantitative techniques will inevitably be informed by the researcher's viewing position, which shapes what techniques will be combined, and how and why they are combined. Accordingly, if researchers want to combine "stories and numbers . . . without compromise" (Ford-Gilboe, Campbell, & Berman, 1995), use "numbers and words" in a "shamelessly eclectic" manner (Rossman & Wilson, 1994), and ease the "uneasy alliance" (Buchanan, 1992) between qualitative and quantitative methods, they must have a clear view of their viewing positions and of what dynamic mixes those positions suggest or permit.

Researchers must also resist the "'mix and match syndrome'" (Leininger, 1994, p. 103) that can result in a "qualitative quagmire" (Barbour, 1998) and that mandates that all research be mixed-method research. As Chen (1997) suggested, mixed-method research is in danger of becoming the new ideology. Mixed-method research should never be used because of the misguided assumptions that more is better, that it is the fashionable thing to do, or, most importantly, that qualitative research is incomplete without quantitative research (Morse, 1996). Indeed, qualitative techniques have been used to "salvage" quantitative studies (Weinholtz, Kacer, & Rocklin, 1995). The "completeness" of any individual study, no matter what kind it is, must be judged without resorting to methodological fads or fetishes.

REFERENCES

Annells, M. (1996). Grounded theory method: Philosophical perspectives, paradigm of inquiry, and postmodernism. Qualitative Health Research, 6, 379–393.

Barbour, R.S. (1998). Mixing qualitative methods: Quality assurance or qualitative quagmire? Qualitative Health Research, 8, 352–361.

Berman, H., Ford-Gilboe, M., & Campbell, J.C. (1998). Combining stories and numbers: A methodologic approach for a critical nursing science. ANS: Advances in Nursing Science, 21(1), 1–15.

Borkan, J.M., Quirk, M., & Sullivan, M. (1991). Finding meaning after the fall: Injury narratives from elderly hip fracture patients. Social Science & Medicine, 33, 947–957.


Buchanan, D.R. (1992). An uneasy alliance: Combining qualitative and quantitative research methods. Health Education Quarterly, 19, 117–135.

Caracelli, V.J., & Greene, J.C. (1993). Data analysis strategies for mixed-method evaluation designs. Educational Evaluation and Policy Analysis, 15, 195–207.

Caracelli, V.J., & Greene, J.C. (1997). Crafting mixed-method evaluation designs. In J.C. Greene & V.J. Caracelli (Eds.), Advances in mixed-method evaluation: The challenges and benefits of integrating diverse paradigms (pp. 19–32). San Francisco: Jossey-Bass.

Chen, H. (1997). Applying mixed methods under the framework of theory-driven evaluations. In J.C. Greene & V.J. Caracelli (Eds.), Advances in mixed-method evaluation: The challenges and benefits of integrating diverse paradigms (pp. 61–72). San Francisco: Jossey-Bass.

Fleury, J. (1993). Preserving qualitative meaning in instrument development. Journal of Nursing Measurement, 1, 135–144.

Ford-Gilboe, M., Campbell, J., & Berman, H. (1995). Stories and numbers: Coexistence without compromise. ANS: Advances in Nursing Science, 18, 14–26.

Greene, J.C., & Caracelli, V.J. (Eds.). (1997a). Advances in mixed-method evaluation: The challenges and benefits of integrating diverse paradigms. San Francisco: Jossey-Bass.

Greene, J.C., & Caracelli, V.J. (1997b). Defining and describing the paradigm issue in mixed-method evaluation. In J.C. Greene & V.J. Caracelli (Eds.), Advances in mixed-method evaluation: The challenges and benefits of integrating diverse paradigms (pp. 5–17). San Francisco: Jossey-Bass.

Greene, J.C., Caracelli, V.J., & Graham, W.F. (1989). Toward a conceptual framework for mixed-method evaluation designs. Educational Evaluation and Policy Analysis, 11, 255–274.

Guba, E.G., & Lincoln, Y.S. (1994). Competing paradigms in qualitative research. In N.K. Denzin & Y.S. Lincoln (Eds.), Handbook of qualitative research (pp. 105–117). Thousand Oaks, CA: Sage.

Helitzer-Allen, D.L., & Kendall, C. (1992). Explaining differences between qualitative and quantitative data: A study of chemoprophylaxis during pregnancy. Health Education Quarterly, 19, 41–54.

Heron, J., & Reason, P. (1997). A participatory inquiry paradigm. Qualitative Inquiry, 3, 274–294.

Leininger, M. (1994). Evaluation criteria and critique in qualitative research studies. In J.M. Morse (Ed.), Critical issues in qualitative research methods (pp. 95–115). Thousand Oaks, CA: Sage.

Miles, M.B., & Huberman, A.M. (1994). Qualitative data analysis: An expanded sourcebook (2nd ed.). Thousand Oaks, CA: Sage.

Morgan, D.L. (1998). Practical strategies for combining qualitative and quantitative methods: Applications to health research. Qualitative Health Research, 8, 362–376.

Morse, J.M. (1991). Approaches to qualitative-quantitative methodological triangulation. Nursing Research, 40, 120–123.

Morse, J.M. (1996). Is qualitative research complete? Qualitative Health Research, 6, 3–5.

Ornstein, S.M., Musham, C., Reid, A., Jenkins, R.G., Zemp, L.D., & Garr, D.R. (1993). Barriers to adherence to preventive services reminder letters: The patient's perspective. Journal of Family Practice, 36, 195–200.

Patton, M.Q. (1990). Qualitative evaluation and research methods (2nd ed.). Newbury Park, CA: Sage.

Pearce, J.C. (1971). The crack in the cosmic egg: Challenging constructs of mind and reality. New York: Washington Square Press.

Rossman, G.B., & Wilson, B.L. (1994). Numbers and words revisited: Being "shamelessly eclectic." Quality & Quantity, 28, 315–327.

Rothert, M., Rovner, D., Holmes, M., Schmitt, N., Talarczyk, G., Kroll, J., & Gogate, J. (1990). Women's use of information regarding hormone replacement therapy. Research in Nursing & Health, 13, 355–366.

Sandelowski, M. (1995). Triangles and crystals: On the geometry of qualitative research. Research in Nursing & Health, 18, 569–574.

Sandelowski, M., Harris, B.G., & Holditch-Davis, D. (1991). Amniocentesis in the context of infertility. Health Care for Women International, 12, 167–178.

Sandelowski, M., Holditch-Davis, D., & Harris, B.G. (1992). Using qualitative and quantitative methods: The transition to parenthood of infertile couples. In J.F. Gilgun, K. Daly, & G. Handel (Eds.), Qualitative methods in family research (pp. 301–322). Newbury Park, CA: Sage.

Silverman, D. (1993). Interpreting qualitative data: Methods for analyzing talk, text and interaction. London: Sage.

Sutton, R.I. (1997). The virtues of closet qualitative research. Organization Science, 8, 97–106.

Swanson, S.C. (1992). A cross-disciplinary application of Greene, Caracelli and Graham's conceptual framework for mixed method evaluation. Unpublished doctoral dissertation, Georgia State University, Atlanta, GA.

Tashakkori, A., & Teddlie, C. (1998). Mixed methodology: Combining qualitative and quantitative approaches. Thousand Oaks, CA: Sage.

Trost, J.E. (1986). Statistically non-representative stratified sampling: A sampling technique for qualitative studies. Qualitative Sociology, 9, 54–57.

Weinholtz, D., Kacer, B., & Rocklin, T. (1995). Salvaging quantitative research with qualitative data. Qualitative Health Research, 5, 388–397.

Wolfer, J. (1993). Aspects of "reality" and ways of knowing in nursing: In search of an integrating paradigm. Image: Journal of Nursing Scholarship, 25, 141–146.