Journal of Operations Management 30 (2012) 467–480
doi: 10.1016/j.jom.2012.06.002

Technical note

Using partial least squares in operations management research: A practical guideline and summary of past research

David Xiaosong Peng (a), Fujun Lai (b)

(a) Department of Information & Operations Management, Mays Business School, Texas A&M University, 320 Wehner Building, 4217 TAMU, College Station, TX 77843-4217, United States
(b) Department of Management and International Business, College of Business, University of Southern Mississippi, 730 E. Beach Blvd, Long Beach, MS 39503, United States

Article history: Received 1 June 2011; received in revised form 13 June 2012; accepted 18 June 2012; available online 26 June 2012.

Keywords: Partial least squares (PLS); Structural equation modeling (SEM); Empirical research methods; Operations management

Abstract: The partial least squares (PLS) approach to structural equation modeling (SEM) has been widely adopted in business research fields such as information systems, consumer behavior, and marketing. The use of PLS in the field of operations management is also growing. However, questions still exist among some operations management researchers regarding whether and how PLS should be used. To address these questions, our study provides a practical guideline for using PLS and uses examples from the operations management literature to demonstrate how the specific points in this guideline can be applied. In addition, our study reviews and summarizes the use of PLS in the recent operations management literature according to our guideline. The main contribution of this study is to present a practical guideline for evaluating and using PLS that is tailored to the operations management field. © 2012 Elsevier B.V. All rights reserved.

Acknowledgment: The work of Professor Fujun Lai was supported by a Major Program Fund of the National Natural Science Foundation of China (Project No. 71090403/71090400) and the Institute of Supply Chain Integration and Service Innovation.

1. Introduction

Structural equation modeling (SEM) has been widely adopted in social and psychological research. Operations management (OM) researchers have also used SEM to a great extent (Shah and Goldstein, 2006). To date, OM researchers have mainly adopted covariance-based SEM (CBSEM) methods, as exemplified by software such as LISREL, AMOS, and EQS. A less widespread technique known as partial least squares (PLS) has started to receive attention from OM researchers, as evidenced by the steady growth of PLS use in the OM field.

As an SEM method, PLS has been subjected to much debate with respect to its pros and cons and under what circumstances it should be adopted, if at all. Advocates of PLS claim that it can estimate research models using small samples with no strict distribution assumptions and can model both reflective and formative constructs within the same research model. PLS also supposedly avoids the inadmissible solutions and factor indeterminacy of CBSEM (Chin, 1998b). Researchers who oppose using PLS cite reasons such as bias in parameter estimates, its inability to model measurement errors, and its piecemeal approach to estimating the overall research model.
Despite the controversies and debate surrounding PLS, interest in PLS among OM researchers seems to be growing. Although a number of articles and book chapters have summarized PLS algorithms, reviewed the use of PLS in a research field, or discussed specific aspects of PLS applications such as sample size requirements and specifying formative constructs, we are not aware of any guideline for evaluating and using PLS that is tailored to the OM audience. Empirical OM researchers face some unique challenges, such as relatively less developed empirical knowledge (Wacker, 1998), a lack of standardized measurement scales (Roth et al., 2007), and the difficulty of obtaining large samples because OM researchers typically examine phenomena at the firm or the supply chain level. These challenges may limit the applicability of CBSEM. Consequently, OM researchers should evaluate different analysis techniques, particularly PLS if SEM is preferred. To help OM researchers evaluate and use PLS, this study provides a practical guideline that outlines some of the important issues in using PLS. We make this guideline specific to the OM field by using illustrative examples from the OM literature.

We also summarize studies that use PLS to examine OM topics in the fields of operations management, strategic management, and organization theory from 2000 to 2011. We review these articles with respect to their rationales for using PLS, sample sizes, the use and assessment of formative constructs, bootstrapping procedures, and the presentation of results.


Our review provides a mixed picture of PLS use in the OM field, with some studies exhibiting deficiencies or a lack of familiarity with certain aspects of PLS and others demonstrating a reasonably good understanding of the PLS method.

To the best of our knowledge, this study is the first to provide a practical guideline for using PLS that includes illustrative examples from the OM literature. This guideline can serve as a useful checklist for OM researchers in their evaluations regarding whether PLS can meet their data analysis needs given their research objectives, research model characteristics, sample sizes, and sample distribution. In addition, our study performs a thorough review of the use of PLS in the OM literature. This review highlights the common problems of using PLS and thus can help OM researchers avoid similar mistakes in future studies.

    2. A guideline for evaluating and using PLS

    2.1. PLS overview

PLS, originally introduced by Wold in the 1960s (Wold, 1966), was recently revitalized by Chin in the information systems (IS) field (Chin, 1998a,b; Chin et al., 2003). In addition to OM, PLS has been used in management (e.g., Cording et al., 2008), marketing (e.g., Hennig-Thurau et al., 2006; White et al., 2003), strategic management (Hulland, 1999), and other business research fields. Representative PLS software tools include PLS-Graph and SmartPLS, among others. Appendix 1 provides a non-technical introduction to the PLS algorithm used by the most popular PLS software, PLS-Graph. In-depth coverage of this PLS algorithm can be found in Chin and Newsted (1999).

One major difference between CBSEM and PLS is that the former focuses on common factor variances whereas the latter considers both common and unique variances (i.e., overall variances). The difference between CBSEM and PLS is similar to that between common factor analysis and principal component analysis (Chin, 1995). CBSEM specifies the residual structure of latent variables, whereas in PLS the latent variables are weighted composite scores of the indicator variables and lead directly to explicit factor scores.

PLS is also less well grounded in statistical theory than CBSEM, to the extent that it is considered statistically inferior (Chin, 1995). PLS estimators do not have the precision of maximum likelihood (ML) estimation (as used in CBSEM software such as LISREL) in achieving optimal predictions. When the multivariate normality assumption is met, CBSEM estimates are efficient in large samples and support analytical estimates of asymptotic standard errors. In contrast, because the construct scores of the latent variables in PLS are created by aggregating indicator items that involve measurement errors, PLS estimates of construct scores are biased and are only consistent under the conditions of "consistency at large," which refer to a large number of items per construct, high communality, and large sample sizes (Wold, 1982, p. 25). Because PLS lacks a classical parametric inferential framework, parameters are estimated using resampling procedures such as the bootstrap and the jackknife.

We suggest that OM researchers use CBSEM if its assumptions are met. However, when the conditions for using CBSEM are not met, researchers should evaluate the pros and cons of CBSEM and PLS and should only use PLS if doing so proves more appropriate overall. We summarize our guideline for evaluating and using PLS in Table 1 and discuss its specific points in detail in the rest of Section 2.

    2.2. Issues to consider during the pre-analysis stage

Considerations of construct formulation and analysis techniques should begin in the research design stage. To choose between CBSEM and PLS, researchers should carefully consider the objectives of their study, the state of the existing knowledge about the research model to be tested, the characteristics of the research model (i.e., is the research model extremely complex?), and the conceptualization and formulation of the constructs (i.e., are constructs formative or reflective?).

Table 1. A guideline for evaluating and using PLS.

Issues to consider in the pre-analysis stage (2.2) — should PLS be used as the data analysis method?
1. Research objectives: exploratory study (2.2.1).
2. Sample size and model complexity: small sample sizes and highly complex research models (2.2.2).
3. Data property: data do not follow a multivariate normal distribution (2.2.3).
4. Does the research model include formative constructs? (2.2.4)
If PLS is to be used later in the data analysis stage:
5. If formative constructs are involved, consider using items that summarize the meaning of the formative constructs for subsequent construct validity analysis (2.2.4), and consider using reflective items that capture the essence of the formative constructs.
6. Consider increasing the number of items per construct for reflective constructs.

Issues to consider in the analysis stage (2.3)
1. Check the validity of formative constructs (2.3.1).
2. Structural model estimation (2.3.2): properly set up bootstrapping procedures that generate the significance level of parameter estimates.
3. Assess the research model (2.3.2): check the model's explanatory power and predictive validity (R2, f2, and Q2); perform a power analysis and a robustness check of the results.
4. Report results (2.3.3): report the software used to perform the PLS analysis; clearly state the rationales for using PLS (nature of the study, construct formulation, and data characteristics); report item weights of formative indicators and item loadings of reflective indicators; report the statistical power of the analysis; report the statistical significance and confidence intervals of structural paths.

2.2.1. Research objectives (confirmatory versus exploratory studies)

PLS aims to assess the extent to which one part of the research model predicts values in other parts of the research model. In this sense, PLS is prediction-oriented (Fornell and Bookstein, 1982; Vinzi et al., 2010). In contrast, CBSEM estimates the complete research model and produces fit statistics that explain how well the empirical data fit the theoretical model (i.e., minimizing the discrepancy between the covariances of the sample data and those specified by the theoretical model). As such, CBSEM is parameter-oriented because it seeks to create parameter estimates that are close to the population parameters. This difference suggests that CBSEM is more appropriate when there are well-established theories underlying the proposed research model. In such a circumstance, researchers can use CBSEM to obtain population parameter estimates that explain covariances under the assumption that the underlying model is correct. However, if the overall nomological network has not been well understood and researchers are trying to explore relationships among the theoretical constructs and to assess the predictive validity of the exogenous variables, then PLS can be considered.

An illustrative research model that can be tested using CBSEM is the theory of quality management underlying the Deming management method, as described in Anderson and Rungtusanatham (1994). The main tenets of Deming's management method are well accepted by both scholars and practitioners. Anderson and Rungtusanatham (1994) articulate the theoretical relationships among the constructs in the research model based on the relevant literature, an observation of industry practices, and the results of a Delphi study that assembled a panel of industry and academic experts in quality management. Their research model has since been subjected to empirical validation (Anderson et al., 1995). To evaluate whether their research model still holds from a theoretical standpoint, a study should be confirmatory in nature because the theory underlying the research model to be tested is well-established. Thus, a main objective of the data analysis should be to find out how well the data collected from the current business environment fit the research model. CBSEM would be appropriate for this end, assuming that the other requirements for CBSEM (e.g., sample sizes and sample distribution) are met.

An example of when PLS might be more appropriate for testing a research model can be found in Cheung et al. (2010). The objective of their study is to explore the extent to which relational learning is associated with the relational performance of both the buyer and the supplier in a supply chain dyad. These relationships had seldom been examined in the literature at the time, and there was no well-established theory that could directly serve as the theoretical foundation of their hypothesized relationships. As such, a main objective of the analysis should be to identify the predictive power of the exogenous variables (a list of proposed drivers of relational performance) on the endogenous variables (relational performance), making PLS a potentially appropriate analysis tool.

2.2.2. Sample sizes and model complexity

2.2.2.1. Sample sizes. Sample size is an important consideration in SEM because it can affect the reliability of parameter estimates, model fit, and the statistical power of SEM (Shah and Goldstein, 2006). The literature proposes different sample size requirements for CBSEM and PLS. Common sample size rules of thumb for CBSEM suggest examining the ratio of the sample size to the total number of parameters estimated, whereas sample size rules of thumb for PLS usually only suggest examining the ratio of the sample size to the most complex relationship in the research model.

Commonly used rules of thumb for determining sample size adequacy in CBSEM include establishing a minimum (e.g., 200), having a certain number of observations per measurement item, having a certain number of observations per parameter estimated (Bentler and Chou, 1987; Bollen, 1989), and conducting "[a] power analysis (MacCallum et al., 1992)" (Shah and Goldstein, 2006, p. 154). With respect to PLS, the literature frequently uses the "10 times" rule of thumb as the guide for estimating the minimum sample size requirement. This rule of thumb suggests that PLS only requires a sample size of 10 times the most complex relationship within the research model. The most complex relationship is the larger value between (1) the construct with the largest number of formative indicators, if there are formative constructs in the research model (i.e., the largest measurement equation (LME)), and (2) the dependent latent variable (LV) with the largest number of independent LVs influencing it (i.e., the largest structural equation (LSE)). Researchers have suggested that the 10 times rule of thumb for determining sample size adequacy in PLS analyses only applies when certain conditions, such as strong effect sizes and high reliability of measurement items, are met. Thus, the literature calls for researchers to calculate statistical power to determine sample size adequacy (Marcoulides and Saunders, 2006).

We use the theoretical framework underlying Deming's management theory (Anderson et al., 1995) as an illustrative example to explain the 10 times rule of thumb for evaluating sample size adequacy when using PLS. We are not suggesting that PLS is more appropriate for testing the above theoretical model. Because the research model includes only reflective constructs, the most complex relationship is the dependent LV with the largest number of independent LVs influencing it, which would be 2 in this research model. Thus, the minimum sample size requirement can be as low as 20 (10 x 2 = 20) when PLS is used to test the research model, assuming certain conditions are met (e.g., adequate effect sizes, a sufficiently large number of items per construct, and highly reliable constructs). However, if we follow the rules of thumb for CBSEM sample size requirements, which typically range from 5 (Tanaka, 1987) to 20 (Bentler and Chou, 1987) times the number of parameters estimated, the sample size requirement for testing the same model using CBSEM would be 370–1480 observations (the number of parameters estimated is 74 in the research model, such that 74 x 5 = 370 and 74 x 20 = 1480). We note that the above methods for determining sample size requirements are rules of thumb that researchers can use in the pre-analysis stage to make a rough estimate. Researchers still should perform a power analysis to formally determine whether the sample size is adequate for using PLS or CBSEM.

A point related to the sample size issue is the questionnaire design. Because increasing the number of indicators per construct is one way to reduce the bias in the parameter estimates for reflective constructs in PLS, researchers can consider including a large number of items for reflective constructs in the survey questionnaire if they anticipate that PLS may be used in the analysis stage. It should be noted that researchers often face a tradeoff between response rate and questionnaire length, and that increasing the number of items per construct can adversely affect a survey's response rate. Nevertheless, we suggest that researchers take the number of items per construct into consideration during the research design stage.

2.2.2.2. Model complexity. The overall complexity of the research model has a direct impact on sample size adequacy in CBSEM, but not necessarily in PLS. Considerations such as multi-level analyses, multiple endogeneity, mediation analyses, moderation analyses, and higher-order factors can increase the total number of parameter estimates, possibly leading to model identification and convergence issues in CBSEM. For instance, in a multi-level analysis where group size is small and intra-cluster correlation is low, the between-group part of the model may yield an inadmissible solution in CBSEM (Hox and Maas, 2001). A moderation effect in SEM is typically tested via a new construct that uses indicators computed by cross-multiplying the standardized items of each construct involved in the moderation effect (Chin et al., 2003), as illustrated in the sketch at the end of this subsection. This cross-multiplying can potentially generate a large number of indicators, thus increasing the model complexity. Tests for mediation effects can also potentially increase the sample size requirement (Kenny et al., 1998).

Unlike CBSEM, PLS uses an iterative algorithm to separately solve the blocks of the measurement model and subsequently estimate the structural path coefficients. This iterative method successively estimates factor loadings and structural paths subset by subset.

As such, the estimation procedure employed by PLS allows researchers to estimate highly complex models as long as the sample size is adequate to estimate the most complex block (relationship) in the model. The literature suggests that PLS is appropriate for testing the magnitude of moderation effects (Helm et al., 2010) and for performing between-group comparisons (Qureshi and Compeau, 2009). PLS is more likely to detect between-group differences than CBSEM when data are normally distributed, sample size is small, and exogenous variables are correlated. Thus, we suggest that researchers consider PLS when the research model is extremely complex and may lead to estimation problems in CBSEM.
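To make the cross-product indicator approach of Chin et al. (2003) mentioned above concrete, the following minimal Python sketch standardizes the items of two constructs and cross-multiplies them to form the indicators of an interaction construct. The data frame and column names are hypothetical, and the sketch only prepares the indicators; the interaction construct itself would still be estimated in the PLS software.

import pandas as pd

def interaction_indicators(df, items_a, items_b):
    # Cross-product indicators for a moderation test (Chin et al., 2003):
    # standardize each item, then multiply every item of A with every item of B.
    out = {}
    for a in items_a:
        za = (df[a] - df[a].mean()) / df[a].std()
        for b in items_b:
            zb = (df[b] - df[b].mean()) / df[b].std()
            out[a + "_x_" + b] = za * zb
    return pd.DataFrame(out)

# Hypothetical usage: 4 x 3 items produce 12 indicators for the interaction
# construct, which shows how quickly moderation tests add model complexity.
# survey = pd.read_csv("survey.csv")
# inter = interaction_indicators(survey, ["Int1", "Int2", "Int3", "Int4"],
#                                ["Tst1", "Tst2", "Tst3"])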

2.2.3. Data properties

CBSEM generally requires a multivariate normal distribution of the sample data. Non-normal data may lead to underestimated standard errors and inflated goodness-of-fit statistics in CBSEM (MacCallum et al., 1992), although these effects are lessened with larger sample sizes (Lei and Lomax, 2005). In social science research, data often do not follow a multivariate normal distribution, thus limiting the applicability of CBSEM in some circumstances. Compared with CBSEM, PLS generally places less strict assumptions on data distribution. PLS also does not require a multivariate normal data distribution. Because PLS is regression-based, it generally only requires the data distribution assumptions of ordinary least squares (OLS) regression. PLS involves "no assumptions about the population or scale of measurement" (Fornell and Bookstein, 1982, p. 443) and consequently works with nominal, ordinal, and interval scaled variables.

Therefore, if violations of data distribution assumptions could potentially undermine CBSEM estimation, researchers should consider using PLS. A close examination of the results of both CBSEM and PLS provides a useful robustness check of the analysis.

2.2.4. Specifying formative constructs

Although the presence of formative constructs does not preclude the use of CBSEM, CBSEM generally lacks the ability to estimate research models with formative constructs. Applying CBSEM to research models with formative constructs often results in unidentified models (Jarvis et al., 2003). This is because using formative indicators in CBSEM implies zero covariance among indicators, and the model can only be solved when it includes a substantial number of additional parameters (MacCallum and Browne, 1993). Because the algorithms performed in a PLS analysis generally consist of a series of ordinary least squares analyses (Chin, 1998b), identification is not a problem for recursive models (i.e., models without feedback loops). This feature gives PLS an advantage in estimating research models with formative constructs. PLS can estimate research models with both reflective and formative constructs without increasing model complexity (Chin, 1998a; Vinzi et al., 2010). Therefore, Diamantopoulos and Winklhofer (2001) suggest using PLS when formative indicators are present in the research model. Because the presence of formative constructs in the research model typically leads researchers to consider PLS, we include specifying and evaluating formative constructs as a part of our guideline for using PLS.

The fundamental difference between reflective and formative constructs is that the latent variable determines the indicators for reflective constructs whereas the indicators determine the latent variable for formative constructs (see Fig. 1). Researchers can refer to Chin (1998b), Diamantopoulos and Winklhofer (2001), and Petter et al. (2007) for in-depth coverage of reflective versus formative constructs.

Fig. 1. Reflective and formative constructs (boxes represent measurement items).

If the research model includes formative constructs, researchers should carefully consider the conceptual domain of each formative construct and make sure that the measurement items capture each aspect and the entire scope of the conceptual domain. Unlike reflective constructs, formative constructs need "a census of indicators, not a sample" (Bollen and Lennox, 1991, p. 307). Failure to consider all facets of the construct will lead to "an exclusion of relevant indicators [and] thus exclude part of the construct itself; [therefore], breadth of definition is extremely important to causal indicators [i.e., formative indicators]" (Nunnally and Bernstein, 1994, p. 484). Because content validity is particularly important for formative constructs, Petter et al. (2007) suggest making content validity tests a mandatory practice for assessing formative constructs. As part of the effort to establish the content validity of formative constructs, we recommend that researchers conduct a thorough literature review related to the construct's conceptual domain. When the literature is not available or does not support the construct validity, qualitative research methods such as expert interviews, panel discussions, and Q-sorting should be used to ensure content validity (Andreev et al., 2009).

Another potential problem is misspecifying a formative construct as a reflective construct. A review of SEM in OM research suggests that 97% of all studies model latent constructs as reflective (Roberts et al., 2010). The authors argue that the small proportion (3%) of the studies that model formative constructs under-represents the true theoretical nature of OM constructs. Petter et al. (2007) report that 29% of the studies published in MIS Quarterly and Information Systems Research, two leading IS journals, have misspecification problems. When a formative construct is specified as a reflective construct, it may lead to either Type I or Type II errors. As a result, the structural model tends to be inflated or deflated (Jarvis et al., 2003). Jarvis et al. (2003) provide a four-point guideline for determining whether a construct should be reflective or formative: (1) direction of causality, (2) interchangeability of the indicators, (3) covariance among the indicators, and (4) nomological network of the indicators.

We use operational performance as an illustrative example of a formative construct because it is a multi-dimensional concept that typically includes cost, quality, delivery, and flexibility. In the OM literature, operational performance is modeled as a reflective construct in some studies (e.g., Cao and Zhang, 2011; Inman et al., 2011). However, it is more appropriate to model operational performance as a formative construct if one follows the guidelines set by Jarvis et al. (2003) and Diamantopoulos and Winklhofer (2001). First, the direction of causality should be from the indicators to the construct because a firm's operational performance is defined collectively by its cost, quality, delivery, and flexibility performance rather than the opposite (Jarvis et al., 2003). Conceptually, one cannot expect that an underlying latent construct of operational performance causes cost, quality, delivery, and flexibility performance to all change in the same direction and with the same magnitude. Second, the measurement items of a particular operational performance dimension are not interchangeable with items measuring other performance dimensions. For instance, items measuring manufacturing flexibility cannot be replaced by items measuring cost, quality, or delivery, and vice versa. Third, a change in one performance indicator is not necessarily associated with changes in other indicators. For instance, conceptually, an item measuring flexibility does not have to correlate with an item measuring manufacturing costs. Fourth, with respect to the nomological network, one cannot expect that different operational performance items will be impacted by the same set of antecedents or lead to the same set of consequences.

Empirical evidence suggests that different antecedents may impact various operational performance dimensions to different extents (Swink et al., 2007). Similarly, the effect of various operational performance dimensions on outcome variables such as business performance can vary considerably (White, 1996).

Because a formative construct by itself is under-identified, researchers should consider including two or more reflective indicators in each formative construct. These reflective indicators are not usually a part of the research model to be tested, but rather are used as an external criterion to assess the formative construct validity (Diamantopoulos and Winklhofer, 2001). The additional reflective indicators and the set of formative items together allow researchers to estimate a multiple indicators and multiple causes (MIMIC) model (Bollen and Davis, 2009; Diamantopoulos and Winklhofer, 2001) to evaluate the external validity of formative constructs. More details about estimating a MIMIC model are provided in Section 2.3.1.

2.3. Issues to consider in the analysis stage

2.3.1. Measurement validity assessment

CBSEM has a set of well-established procedures for evaluating reflective constructs. Researchers can examine item loadings and cross-loadings and assess various measures of construct reliability and validity. Typical measures of construct reliability include Cronbach's alpha and composite reliability. Convergent validity can be assessed by checking whether the average variance extracted (AVE) of the construct is greater than 0.50 (at the construct level) and whether the item loadings are greater than 0.70 and statistically significant (at the item level). Discriminant validity is usually examined by comparing the square root of the AVE with the correlations between the focal construct and all other constructs. In PLS, researchers can use similar procedures to evaluate the reliability and validity of reflective constructs. Chin (1998b) recommends that researchers examine Cronbach's alpha, composite reliability, and AVE to assess reflective construct properties. Because OM researchers who have used CBSEM are generally familiar with techniques for assessing measurement models that involve only reflective constructs, our discussion below focuses on techniques for assessing formative constructs.
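The reliability and validity statistics discussed above follow directly from standardized loadings. The sketch below is a minimal Python illustration of composite reliability, AVE, and the Fornell-Larcker comparison of the square root of the AVE with inter-construct correlations; the numeric values are illustrative only.

import numpy as np

def composite_reliability(loadings):
    # CR = (sum of loadings)^2 / ((sum of loadings)^2 + sum of error variances)
    lam = np.asarray(loadings, dtype=float)
    error = 1.0 - lam ** 2
    return lam.sum() ** 2 / (lam.sum() ** 2 + error.sum())

def average_variance_extracted(loadings):
    # AVE = mean of the squared standardized loadings
    lam = np.asarray(loadings, dtype=float)
    return float(np.mean(lam ** 2))

loadings = [0.87, 0.79, 0.79]               # illustrative standardized loadings
cr = composite_reliability(loadings)        # should exceed 0.70
ave = average_variance_extracted(loadings)  # should exceed 0.50
other_correlations = [0.41, 0.35]           # illustrative inter-construct correlations
fornell_larcker_ok = np.sqrt(ave) > max(other_correlations)
print(round(cr, 3), round(ave, 3), fornell_larcker_ok)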

Although widely accepted standard procedures for evaluating formative construct properties have yet to emerge, researchers generally agree that the criteria used to evaluate reflective constructs should not apply to formative constructs (Diamantopoulos and Winklhofer, 2001). As Bollen (1989, p. 222) notes, "Unfortunately, traditional validity assessments and classical test theory do not cover cause [formative] indicators." Likewise, Hair et al. (2006, p. 788) suggest that because formative indicators do not have to be highly correlated, internal consistency is not a useful validation criterion for formative indicators.

We summarize various procedures for evaluating formative constructs in Table 2. First, researchers should check the multicollinearity of formative indicators (items). High multicollinearity suggests that some items may be redundant. To detect multicollinearity, researchers can examine the correlation matrix, the condition index, and the variance inflation factor (VIF).

Table 2. Validity tests of formative constructs.

Item-level tests:
- Item contribution. Description: the contribution of each item to the formative construct. Test: check the sign, magnitude, significance, range, and average of the formative item weights (Klein and Rai, 2009). Recommended criterion: formative item weights should be large and significant; when N orthogonal formative items are specified, the ceiling on their average weight is sqrt(1/N), and the average weights should not be too far below this ceiling. Note: the weight, rather than the loading, of the formative items should be examined (Chin, 1998b).
- Multicollinearity. Description: high multicollinearity between items suggests that some indicators may be redundant. Test: check the variance inflation factor (VIF). Recommended criterion: a VIF below 3.3 indicates the absence of multicollinearity (Diamantopoulos and Siguaw, 2006). Note: researchers should be careful about deleting items because doing so can change the conceptual domain of the construct.

Construct-level tests:
- Nomological validity. Description: the relationship between the formative construct and other theoretically related constructs in the research model should be strong. Test: check the structural path coefficients related to the formative construct.
- External validity. Description: the formative index should explain the variance of alternative reflective items of the focal construct to a large extent (Diamantopoulos and Winklhofer, 2001). Test: check the reflective item factor loadings; estimate a multiple indicators and multiple causes (MIMIC) model (Bollen and Davis, 2009). Recommended criterion: the reflective indicators should have significant and large factor loadings, and the MIMIC model should have a good model fit. Note: researchers need to develop reflective items for the formative construct, mainly for checking construct validity; the MIMIC model should be fitted using CBSEM (Diamantopoulos and Winklhofer, 2001).
- Discriminant validity (a). Test: compare item-to-own-construct correlations with item-to-other-construct correlations (Klein and Rai, 2009). Recommended criterion: formative items should correlate with their own construct's composite score to a greater extent than with the composite scores of other constructs.

(a) This method was recently proposed in the literature (Klein and Rai, 2009) and is not as well-established as the other validity tests listed in the table. Klein and Rai (2009) do not provide detailed guidance on how to apply this test.

Examining the VIF is a frequently used means of detecting multicollinearity. General statistics theory suggests that multicollinearity is a concern if the VIF is higher than 10; however, with formative measures, multicollinearity poses more of a problem (Petter et al., 2007, p. 641). Diamantopoulos and Siguaw (2006) suggest a more conservative VIF criterion of 3.3. Most PLS software packages do not provide VIF outputs. Calculating the VIF of formative items involves an OLS regression with the formative construct score as the dependent variable and all of its formative items as the independent variables. Gefen and Straub (2005) demonstrate how to obtain construct scores, and Mathieson et al. (2001) provide a useful example of reporting multicollinearity.
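Because most PLS packages do not report VIFs, the diagnostic described above can be reproduced outside the PLS software. The sketch below uses the standard formulation, regressing each formative item on the remaining items of its construct and converting the resulting R2 into a VIF (VIF = 1/(1 - R2)); this yields the same VIFs as the auxiliary regression on the construct score described in the text. The data frame and column names are hypothetical.

import numpy as np
import pandas as pd

def formative_vifs(df, items):
    # VIF of each formative item: regress it on the other items of the same
    # construct via OLS and compute 1 / (1 - R^2).
    vifs = {}
    for item in items:
        y = df[item].to_numpy(dtype=float)
        X = df[[c for c in items if c != item]].to_numpy(dtype=float)
        X = np.column_stack([np.ones(len(X)), X])       # add an intercept
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)    # OLS estimates
        residuals = y - X @ beta
        r2 = 1.0 - residuals.var() / y.var()
        vifs[item] = 1.0 / (1.0 - r2)
    return pd.Series(vifs)

# Hypothetical usage; values above 3.3 would signal problematic multicollinearity
# under the conservative criterion of Diamantopoulos and Siguaw (2006).
# print(formative_vifs(survey, ["Opf1", "Opf2", "Opf3", "Opf4"]))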

Petter et al. (2007) suggest that if some of the formative items exhibit high multicollinearity, researchers can (1) model the construct with both formative and reflective items, in which the highly correlated items are specified as reflective, (2) remove the highly correlated items, (3) collapse the highly correlated items into a composite index (e.g., Bossow-Thies and Albers, 2010), or (4) convert the construct into a multidimensional construct, in which the highly correlated items are specified as reflective indicators of a latent variable that serves as a formative indicator of the original construct. Regarding the second method, researchers should be very careful in deleting formative items and ensure that the conceptual domain of the formative construct will not change if they delete items with high multicollinearity. We suggest that OM researchers theoretically and semantically assess whether the items exhibiting high multicollinearity are redundant, and then follow the guidelines provided by Petter et al. (2007) to deal with multicollinearity among formative items.

Second, researchers should evaluate each formative item's contribution or importance to the formative index (i.e., the formative construct score). A formative index is a composite created by aggregating the formative items of a construct using their respective item weights. This assessment involves examining each formative item's weight, sign, and magnitude (Gotz et al., 2010). For formative items, researchers should examine item weights rather than item loadings. The item weight should be statistically significant, the sign of the item weight should be consistent with the underlying theory, and the magnitude of the item weight should be no less than 0.10 (Andreev et al., 2009).

Third, researchers should check the external validity of formative constructs. To establish external validity, researchers should typically assess a MIMIC model (Diamantopoulos and Winklhofer, 2001). To conduct MIMIC tests, researchers should use at least two reflective items that capture the essence of the formative index, as shown in Fig. 2a (see the example in Diamantopoulos and Winklhofer, 2001). Alternatively, they can create a reflective construct that serves as a shadow construct of the formative construct (i.e., the reflective construct should capture the essence of the formative construct). The MIMIC model can then be estimated using the formative and the shadow construct (Fig. 2b; see the example in Cenfetelli and Bassellier, 2009). Note that the MIMIC model should be estimated using CBSEM for each formative construct and its related reflective items or shadow constructs. This is because researchers should examine the overall model fit statistics to determine the validity of the formative construct, and such statistics are only available in CBSEM. However, the complete research model may still need to be estimated using PLS if the model is under-identified in CBSEM.

Fig. 2. MIMIC tests: (a) a formative construct with at least two reflective items; (b) a formative construct with a reflective shadow construct.

Nomological validity is manifested in the magnitude and significance of the relationships between the formative construct and other constructs in the research model, which are expected to be strong and significant based on theory and previous research. Several authors suggest testing the nomological validity of a formative construct by correlating its formative items with variables with which the formative construct should theoretically be correlated (e.g., Bagozzi, 1994; Diamantopoulos and Winklhofer, 2001). Ruekert and Churchill (1984) and McKnight et al. (2002) provide examples of nomological validity analysis.

Finally, researchers can examine the discriminant validity of a formative construct. Klein and Rai (2009) propose that for a formative construct, the intra-construct item correlations should be greater than the inter-construct item correlations. Furthermore, formative items should have stronger correlations with their own composite construct score than with those of other constructs. We note that these methods for establishing the discriminant validity of formative constructs are not yet well-established in the literature, and therefore should be adopted at researchers' discretion.

2.3.2. Structural model estimation and assessment

Because PLS does not assume a multivariate normal distribution, traditional parametric-based techniques for significance tests are inappropriate. PLS uses a bootstrapping procedure to estimate standard errors and the significance of parameter estimates (Chin, 1998b). The default setting in the most popular PLS software, PLS-Graph 3.0, is to resample 100 times.

The default setting for bootstrap resampling in another popular PLS software, SmartPLS, is to resample 200 times. The number of bootstrap samples recommended in the literature has increased; for instance, Chin (1998b) recommends resampling 500 times. Given the computing power available today, as many bootstrap samples as possible (>500) should be generated. Although increasing the number of bootstrap samples does not increase the amount of information in the original data, it reduces the effect of random sampling errors that may arise from the bootstrap procedure. Another issue pertaining to bootstrapping is the sample size of each bootstrapped resample. The sample size of bootstrap resampling is usually set to equal the sample size of the original data from which the bootstrap samples are drawn (Chung and Lee, 2001). Some researchers argue that in certain circumstances the bootstrapping sample size can be smaller than the sample size of the original data, especially when the original sample is large (Andreev et al., 2009, p. 8).

Researchers should consider performing bootstrapping using different resampling schemes to verify the results, as in Ahuja et al. (2003) and Rosenzweig (2009). For instance, in Ahuja et al. (2003), the authors used a default bootstrapping resampling setting of 100 times in PLS-Graph and verified the results using settings of 250 and 500 times. After performing bootstrapping procedures, several techniques are available for assessing the structural model in PLS.
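The resampling logic described above can be illustrated outside any particular PLS package. In the minimal Python sketch below, a simple OLS slope stands in for a PLS path coefficient; a full analysis would re-run the complete PLS algorithm on every bootstrap sample, which is what the PLS software does internally. The variable names are hypothetical.

import numpy as np

rng = np.random.default_rng(0)

def slope(x, y):
    # Placeholder estimator: an OLS slope standing in for a PLS path coefficient.
    x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
    return np.cov(x, y, ddof=1)[0, 1] / np.var(x, ddof=1)

def bootstrap_path(x, y, n_boot=1000):
    # Resample cases with replacement, keeping the resample size equal to the
    # original sample size, then report the estimate, a bootstrap t-value, and
    # a 95% percentile confidence interval.
    x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
    estimate = slope(x, y)
    boot = np.empty(n_boot)
    for b in range(n_boot):
        idx = rng.integers(0, len(x), size=len(x))
        boot[b] = slope(x[idx], y[idx])
    t_value = estimate / boot.std(ddof=1)
    ci_low, ci_high = np.percentile(boot, [2.5, 97.5])
    return estimate, t_value, (ci_low, ci_high)

# Hypothetical usage with two composite scores:
# est, t_val, ci = bootstrap_path(scores["integration"], scores["performance"])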

First, researchers should check the sign, magnitude, and significance of each path coefficient, all of which should be consistent with theory. To evaluate the predictive power of the research model, researchers should examine the explained variance (R2) of the endogenous constructs. Using R2 to assess the structural model is consistent with the objective of PLS to maximize the variance explained in the endogenous variables. The literature suggests that R2 values of 0.67, 0.33, and 0.19 are substantial, moderate, and weak, respectively (Chin, 1998b).

Second, researchers can evaluate the effect size of the predictor constructs using Cohen's f2 (Cohen, 1988). The effect size is computed as the increase in R2 relative to the proportion of variance that remains unexplained in the endogenous latent variable. According to Cohen (1988), f2 values of 0.35, 0.15, and 0.02 are considered large, medium, and small, respectively.
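Cohen's f2 follows directly from the R2 of the endogenous construct estimated with and without the predictor of interest. A one-line helper, shown with hypothetical R2 values:

def cohens_f2(r2_included, r2_excluded):
    # f2 = (R2_included - R2_excluded) / (1 - R2_included);
    # roughly 0.02 = small, 0.15 = medium, 0.35 = large (Cohen, 1988).
    return (r2_included - r2_excluded) / (1.0 - r2_included)

# Hypothetical example: removing a predictor lowers R2 from 0.30 to 0.25.
print(round(cohens_f2(0.30, 0.25), 3))   # 0.071, a small-to-medium effect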

Third, researchers can assess predictive relevance. Chin (1998b) argues that the prediction of observables "or potential observables is of much greater relevance than the estimator of what are often artificial construct-parameters" (p. 320). Stone-Geisser's Q2 (Geisser, 1975; Stone, 1974) is often used to assess predictive relevance and can be calculated using the blindfolding procedure, which is available in most PLS software packages. If Q2 > 0, then the model is viewed as having predictive relevance.

Fourth, post hoc power analyses should be conducted to check whether the power of the research study is acceptable (>0.80). The effect size, reliability, the number of indicators, or other factors may affect the statistical power of a hypothesis test. Simply applying the 10 times rule of thumb may lead researchers to underestimate the sample size requirement in certain situations, such as small effect sizes and low reliability of measurement items. In other words, applying the 10 times rule of thumb without performing a formal power analysis may lead to hypothesis tests with low power (Goodhue et al., 2006; Marcoulides and Saunders, 2006). Based on the results of a simulation study, Goodhue et al. (2006) argue that the 10 times rule of thumb for the PLS sample size requirement should only be used when effect sizes are large and constructs are highly reliable. Another Monte Carlo simulation by Marcoulides and Saunders (2006) also shows that the sample size requirement to achieve a 0.80 statistical power increases substantially as factor loadings and item inter-correlations decrease. Considering that OM studies tend to have relatively small effect sizes (Verma and Goodale, 1995), a power analysis is particularly needed.
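One common way to carry out such a post hoc power analysis treats each structural equation as a regression with k predictors and uses the noncentral F distribution with Cohen's noncentrality parameter lambda = f2(u + v + 1), where u = k and v = n - k - 1. The Python sketch below implements this approach; it is one reasonable option rather than the only way to compute power for PLS models.

from scipy.stats import f as f_dist, ncf

def posthoc_power(f2, n, k, alpha=0.05):
    # Power of the F test for a structural equation with k predictors,
    # sample size n, and effect size f2 (Cohen, 1988).
    u = k                          # numerator degrees of freedom
    v = n - k - 1                  # denominator degrees of freedom
    lam = f2 * (u + v + 1)         # noncentrality parameter
    f_crit = f_dist.ppf(1.0 - alpha, u, v)
    return ncf.sf(f_crit, u, v, lam)

# Echoing the example in Section 3: n = 266, two predictors, f2 = 0.034
# yields power just under the 0.80 benchmark.
print(round(posthoc_power(0.034, 266, 2), 3))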

Fifth, although PLS does not provide overall fit statistics, researchers have recently begun to compute a Goodness of Fit (GoF) index when using PLS (Tenenhaus et al., 2005), which considers the quality of the complete measurement model in terms of average communality (i.e., AVE) and the quality of the complete structural model in terms of average R2. The average communality is computed as a weighted average of all of the communalities, using as weights the number of manifest variables in each construct with at least two manifest variables.

Finally, we recommend that researchers conduct alternative analyses to check the robustness of the results. Previous empirical research has compared the parameter estimates of an alternative analysis to evaluate whether the results are similar to those generated by the PLS analysis. For instance, Klein and Rai (2009) compare ordinary least squares (OLS) path analysis results with their PLS results, and Barroso et al. (2010) and Vilares et al. (2010) compare maximum likelihood CBSEM results with PLS results. If PLS is chosen mainly because the data distribution assumption is not met, it is helpful for researchers to run CBSEM and compare the results of CBSEM with those of PLS. Even with the violation of data distribution assumptions, the maximum likelihood estimation procedure employed by CBSEM can be quite robust and may still produce reasonably good estimates of the population parameters (Chin, 1995).

2.3.3. Reporting and interpreting results

First, researchers should explain in detail their reasons for using PLS. Rather than present the potential advantages of PLS in general terms, researchers should explain how PLS can help them overcome specific challenges they face that may render CBSEM inappropriate, such as inadequate sample sizes or non-normal data. Researchers should be careful not to make generalized statements regarding the ability of PLS to estimate research models using small samples that may violate the multivariate normality assumption.

Second, researchers should report the PLS software used. Explicitly reporting the PLS software used enables researchers to replicate previous research, which is important for providing support to worthwhile theories (Tsang and Kwan, 1999).

Third, researchers should adequately report the results needed to assess the predictive power of the research model. Because PLS emphasizes predictive ability, the explained variance (R2) for all endogenous constructs in the research model should be reported (Hulland, 1999). For formative constructs, researchers should report item weights, which represent each formative item's contribution to the formative index. "The interpretation of LVs [latent variables] with formative indicators in any PLS analysis should be based on the weights" (Chin, 1998b, p. 307). We also recommend that researchers report not only the statistical significance but also the confidence interval of structural paths (Streukens et al., 2010). Hypothesis tests using confidence intervals are advantageous because they provide more information about the parameter estimate (Henseler et al., 2009). Shaffer (1995, p. 575) notes, "If the hypothesis is not rejected, the power of the procedure can be gauged by the width of the interval." The literature suggests that researchers can use bias-corrected confidence intervals as an appropriate means for testing the significance of the path coefficients estimated by PLS (Gudergan et al., 2008).

Finally, we suggest that researchers report the statistical power of their studies. Although PLS is believed to have the ability to estimate research models with a smaller sample, researchers still should show that the statistical power of the hypothesis tests is adequate, which is typically a concern for studies with small sample sizes.

3. An illustrative example of using PLS

In this section, we provide an illustrative example of using PLS to estimate a research model that includes both reflective and formative constructs.

Fig. 3. The illustrative research model. Trust with suppliers (Tst1-Tst3) and cross-functional integration (Int1-Int4) are modeled as antecedents of operational performance (Opf1-Opf4, formative), which in turn predicts customer satisfaction (Sat1-Sat3) and market share (Mrkt).

The research model is presented in Fig. 3, in which operational performance is modeled as a formative construct, cross-functional integration and trust with suppliers are the antecedents, and customer satisfaction and market share are the outcomes. We use data from the third round of the High Performance Manufacturing (HPM) study (Schroeder and Flynn, 2001) to test the research model. The sample size is 266. The measurement items are presented in Tables 3 and 4.

We use SmartPLS 2.0.M3 to estimate our research model. Because the criteria for assessing reflective and formative constructs are different, we assess the two types of constructs separately. The item loadings, composite reliability (CR), and average variance extracted (AVE) of the reflective constructs are shown in Table 3. All item loadings are greater than 0.70 and significant at the 0.001 level, indicating convergent validity at the indicator level. All AVE values are greater than 0.50, suggesting convergent validity at the construct level. All CR values are greater than 0.70, indicating acceptable reliability. The square root of each AVE (shown on the diagonal in Table 5) is greater than the related inter-construct correlations (shown off the diagonal in Table 5) in the construct correlation matrix, indicating adequate discriminant validity for all of the reflective constructs.

Regarding the formative construct, we examine the formative item weights, the multicollinearity between items, and the discriminant and nomological validity of the formative construct. For each formative item, we examine its weight (rather than its item loading), sign, and magnitude. Each item weight is greater than 0.10 (Andreev et al., 2009), and the sign of each item weight is consistent with the underlying theory (see Table 4). With the exception of unit cost of manufacturing (Opf1), all items are significant at the 0.01 level. In addition, all VIF values are less than 3.3 (Diamantopoulos and Siguaw, 2006), indicating that multicollinearity is not severe. Although Opf1 is not significant at the 0.01 level, this item should be included in the measurement model because conceptually it is an indispensable aspect of operational performance (Petter et al., 2007). To examine the discriminant validity of the formative construct operational performance, we compute the average of the intra-construct item correlations for this construct and the average of the inter-construct item correlations between this construct and the other constructs. We find that the average of the intra-construct item correlations is greater than the average of the inter-construct item correlations.

Table 3. Measurement properties of reflective constructs (indicator, item loading, t-statistic; CR and AVE reported per construct).

Trust with suppliers (CR = 0.8592, AVE = 0.6709):
- We are comfortable sharing problems with our suppliers (Tst1): loading 0.8678, t = 28.3379
- In dealing with our suppliers, we are willing to change assumptions in order to find more effective solutions (Tst2): loading 0.7851, t = 16.4536
- We emphasize openness of communications in collaborating with our suppliers (Tst3): loading 0.7911, t = 16.5213

Cross-functional integration (CR = 0.9180, AVE = 0.7367):
- The functions in our plant work well together (Int1): loading 0.8829, t = 45.7463
- The functions in our plant cooperate to solve conflicts between them, when they arise (Int2): loading 0.8450, t = 31.0738
- Our plant's functions coordinate their activities (Int3): loading 0.8550, t = 38.1051
- Our plant's functions work interactively with each other (Int4): loading 0.8458, t = 29.9594

Customer satisfaction (CR = 0.8998, AVE = 0.7511):
- Our customers are pleased with the products and services we provide for them (Sat1): loading 0.9273, t = 78.6283
- Our customers seem happy with our responsiveness to their problems (Sat2): loading 0.7522, t = 14.1533
- Our customers have been well satisfied with the quality of our products, over the past three years (Sat3): loading 0.9072, t = 44.6741

Market share (single-item construct):
- How large is the plant's market share, relative to the next largest competitor? For example, a response of 200% indicates your market share is twice that of the next largest competitor (Mrkt): loading 1.0000


Table 4
Measurement properties of formative constructs.

Construct | Indicator | Item weight | T-stat. | VIF
Operational performance | Unit cost of manufacturing (Opf1) | 0.1494 | 1.2695 | 1.078
 | Conformance to product specifications (Opf2) | 0.3651 | 3.7948 | 1.152
 | On-time delivery performance (Opf3) | 0.5886 | 5.6621 | 1.208
 | Flexibility to change product mix (Opf4) | 0.3269 | 2.8544 | 1.118

We are unable to assess the external validity of the formative construct by performing a MIMIC analysis because the research design of the HPM project does not include additional reflective items or shadow reflective constructs that capture overall operational performance. However, we are able to assess the nomological validity of the operational performance construct by examining the structural paths of its antecedents and outcomes. As Table 6 shows, our results indicate positive and highly significant relationships between operational performance and its two antecedents and two outcomes, indicating the nomological validity of the operational performance measures.

The results of the structural model estimation are shown in Tables 6 and 7. We run the structural model using the bootstrap procedure with 200, 500, and 1000 resamples, and the magnitude and significance of the structural paths are consistent across these settings.
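The resampling logic behind such a check can be outlined as follows. This is an illustrative sketch rather than the SmartPLS procedure; estimate_paths is a hypothetical stand-in for whatever routine returns the path coefficients for a data matrix.

```python
import numpy as np

def bootstrap_paths(data, estimate_paths, n_boot=500, seed=0):
    """Resample rows with replacement and re-estimate the model each time.

    data           : (n x p) NumPy array of observations
    estimate_paths : placeholder function returning a 1-D array of path
                     coefficients for a data set (e.g., a PLS run)
    Returns the original estimates, bootstrap t-statistics, and 95% percentile CIs.
    """
    rng = np.random.default_rng(seed)
    n = data.shape[0]
    boot = np.array([estimate_paths(data[rng.integers(0, n, size=n)])
                     for _ in range(n_boot)])
    original = estimate_paths(data)
    se = boot.std(axis=0, ddof=1)            # bootstrap standard errors
    t_stats = original / se
    ci = np.percentile(boot, [2.5, 97.5], axis=0)
    return original, t_stats, ci
```

Running this with n_boot set to 200, 500, and 1000 and comparing the resulting t-statistics is one way to verify, as above, that the conclusions do not depend on the number of resamples.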

    As the t-statistics and 95% condence intervals indicate, all pathcoefcients are signicant at the 0.01 level. The R2 of endogenousconstructs amarket sharappear to beformance ctrust and innous constrR2excluded)/(1and 0.064, rsizes (Coheare 0.0416, share and stive relevan

    Regardinputed the GThe GOF is

    GOF =

    c
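Both quantities are straightforward to compute once the R² and communality values are available; the short sketch below (ours) simply mirrors the two formulas used above.

```python
import math

def effect_size_f2(r2_included, r2_excluded):
    """f^2 = (R^2_included - R^2_excluded) / (1 - R^2_included)."""
    return (r2_included - r2_excluded) / (1 - r2_included)

def goodness_of_fit(avg_communality, avg_r2):
    """GoF = sqrt(average communality x average R^2) (Tenenhaus et al., 2005)."""
    return math.sqrt(avg_communality * avg_r2)

print(goodness_of_fit(0.5793, 0.0916))  # ~0.2303, as computed above
```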

Our sample size of 266 is well above the minimum sample size requirement of 40 as determined by the 10 times rule of thumb. The most complex block in our model is the formative construct operational performance, which has 4 formative indicators. Although the sample size is deemed adequate using the 10 times rule of thumb, a statistical power analysis is needed to formally determine if the sample size is adequate. We run a power analysis for each structural path and for the largest structural equation (LSE), which is the dependent latent variable (LV) with the largest number of independent LVs influencing it. As shown in Table 6, the power of each path is much greater than 0.80. In our research model, the LSE is the latent construct operational performance with two predictors (i.e., trust and integration), in which the smallest effect size (f²) is 0.034 (see Table 7). For this effect size, our sample size of 266 can achieve a power of 0.768 at the 0.05 significance level, which is only slightly smaller than 0.80.
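A power value of this kind can be approximated with a noncentral F distribution, following Cohen's (1988) convention of a noncentrality parameter equal to f² × n. The sketch below (ours, using SciPy) is not the tool used in the original analysis, but it yields a figure close to the 0.768 reported above.

```python
from scipy import stats

def power_for_f2(f2, n_predictors, n, alpha=0.05):
    """Post hoc power of the F test for a structural equation with
    n_predictors exogenous LVs (Cohen, 1988):
    noncentrality = f^2 * n, df = (n_predictors, n - n_predictors - 1)."""
    df1, df2 = n_predictors, n - n_predictors - 1
    f_crit = stats.f.ppf(1 - alpha, df1, df2)     # critical value of the central F
    return stats.ncf.sf(f_crit, df1, df2, f2 * n)  # survival fn of the noncentral F

print(power_for_f2(0.034, 2, 266))  # roughly 0.77, in line with the 0.768 reported
```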

Finally, we check the robustness of the PLS results. Because our research model includes both reflective and formative constructs, we are unable to run CBSEM and compare the PLS results with CBSEM results. Instead, we calculate the average of the items within each construct and subject these average values to OLS regression. The OLS regression results are largely consistent with the PLS results (see Table 6).
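The item-averaging step of this robustness check can be sketched as follows. The construct and item labels correspond to Tables 3 and 4, but the helper names and the data frame are ours and purely illustrative.

```python
import numpy as np
import pandas as pd

def construct_scores(df: pd.DataFrame, items: dict) -> pd.DataFrame:
    """Average the manifest items of each construct, e.g.
    items = {"trust": ["Tst1", "Tst2", "Tst3"], "integration": ["Int1", "Int2", "Int3", "Int4"]}."""
    return pd.DataFrame({name: df[cols].mean(axis=1) for name, cols in items.items()})

def ols_slopes(y, X):
    """Plain OLS with an intercept; returns the slope coefficients only."""
    X = np.column_stack([np.ones(len(X)), np.asarray(X, dtype=float)])
    beta, *_ = np.linalg.lstsq(X, np.asarray(y, dtype=float), rcond=None)
    return beta[1:]
```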

Table 5
Construct correlations.

Construct | X1 | X2 | X3 | X4 | X5
Trust with suppliers (X1) | 0.8191 | | | |
Cross-functional integration (X2) | 0.2846 | 0.8583 | | |
Operational performance (X3) | 0.2659 | 0.3077 | – | |
Market share (X4) | 0.0435 | −0.1039 | 0.2237 | – |
Customer satisfaction (X5) | 0.2793 | 0.2583 | 0.3088 | 0.0746 | 0.8667

Note. The square root of average variance extracted (AVE) is shown on the diagonal of the correlation matrix and inter-construct correlations are shown off the diagonal.

Table 6
Structural estimates.

Path | PLS coefficient | PLS t-stat. | 95% confidence interval | OLS coefficient | OLS t-stat. | Power
Trust with suppliers → operational performance | 0.2043 | 3.3447 | (0.2007, 0.2079) | 0.194 | 2.727 | 0.9238
Cross-functional integration → operational performance | 0.2611 | 4.2546 | (0.2574, 0.2648) | 0.252 | 3.546 | 0.9926
Operational performance → market share | 0.2235 | 3.6089 | (0.2196, 0.2273) | 0.224 | 3.148 | 0.9613
Operational performance → customer satisfaction | 0.3199 | 5.6398 | (0.3165, 0.3233) | 0.309 | 4.451 | 0.9998


4. A summary of PLS use in the OM literature

This section reviews PLS use in the recent OM literature. This review allows us to identify which aspects of PLS researchers should pay attention to and also serves as the starting point for creating our guideline for evaluating and using PLS. Because PLS is an empirical research method, we consider OM journals that are recognized as publishing relevant and rigorous empirical research. The Journal of Operations Management (JOM), Management Science (MS), Decision Sciences Journal (DSJ), Production and Operations Management (POMS), the International Journal of Operations and Production Management (IJOPM), the International Journal of Production Economics (IJPE), the International Journal of Production Research (IJPR), and IEEE Transactions on Engineering Management (IEEE) have been cited as those whose missions involve publishing empirical research examining OM topics (Barman et al., 2001; Goh et al., 1997; Malhotra and Grover, 1998; Soteriou et al., 1998; Vokurka, 1996). Our review also covers several major journals in strategy, management, and organization science that sometimes publish research related to operations management, including Strategic Management Journal (SMJ), Academy of Management Journal (AMJ), and Organization Science. Because the use of PLS among business research communities is a relatively recent phenomenon and we want to focus on issues commonly observed in recent OM research, we review articles published from 2001 to 2011. Because MS, DSJ, and IEEE Transactions are multi-disciplinary journals with a large OM component, we only review the PLS articles in these three journals that examined OM-related topics.


Table 7
R², communality, and redundancy.

Construct | R² | Communality (AVE) | Redundancy | Q² | f²
Trust with suppliers | | 0.6709 | | | 0.034
Cross-functional integration | | 0.7367 | | | 0.064
Operational performance | 0.1293 | 0.4115 | 0.0403 | 0.0416 |
Market share | 0.0501 | N/A | 0.0501 | 0.0316 |
Customer satisfaction | 0.0953 | 0.7511 | 0.0678 | 0.0563 |
Average | 0.0916 | 0.5793 (a) | 0.0527 | 0.0432 |

(a) The average communality is computed as a weighted average of all of the communalities, using as weights the number of manifest variables in each construct with at least two manifest indicators.

We perform a key-word search of the titles, key words, abstracts, and full texts of the articles in the targeted journals using the following key words: partial least squares, partial-least-squares, PLS, formative, PLS Graph, PLS-Graph, and SmartPLS. We focus our search on papers that use PLS as an SEM approach to test empirical research models. Next, each author individually examines the search results to ensure that the articles using PLS are correctly identified and those not using PLS are not included in our review. In total, we found 42 OM-related articles that use the PLS method within the scope of our journal selection and time frame. Our literature review indicates that no articles using the PLS method to examine OM topics were published in POM, AMJ, and Organization Science from 2001 to 2011. Thus, our summary excludes these three journals. The distribution of these articles by journal and year is presented in Table 8. It appears that the number of OM articles using PLS has increased in recent years, particularly since 2007.

We summarize the papers we review in Table 9. Among the 42 articles, 30 explicitly provide a rationale for using PLS. However, the remaining 12 articles do not explain why PLS was chosen. Not unexpectedly, small sample size is the most frequently cited reason for using PLS (n = 14), followed by the exploratory or predictive nature of the study (n = 11), the use of formative constructs (n = 8), non-normal data (n = 6), and high model complexity (n = 4). Although a small sample size is cited most frequently as the reason for using PLS, only two of the 42 articles perform a power analysis. The median sample size is 126, with a range from 35 to 3926. Only 13 articles (31%) have a sample size greater than 200.

Table 8
Distribution of empirical OM articles that use PLS.

Year | DSJ | IEEE | IJOPM | IJPE | IJPR | JOM | MS | SMJ | Total
2000 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1
2001 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1
2003 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1
2004 | 1 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 3
2005 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1
2006 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1
2007 | 2 | 1 | 0 | 0 | 0 | 2 | 1 | 0 | 6
2008 | 0 | 0 | 2 | 0 | 0 | 0 | 1 | 0 | 3
2009 | 1 | 0 | 0 | 2 | 0 | 2 | 0 | 0 | 5
2010 | 3 | 2 | 2 | 2 | 3 | 1 | 0 | 1 | 14
2011 | 0 | 0 | 0 | 2 | 3 | 0 | 0 | 1 | 6
Total | 8 | 5 | 5 | 6 | 6 | 6 | 4 | 2 | 42

Note. The list of PLS articles we reviewed is available upon request.

The presence of formative constructs is a commonly cited reason for using PLS. Interestingly, although 19 articles use formative constructs, only eight articles state that the use of formative constructs is the reason for using PLS. Among the 19 articles that use formative constructs, three do not perform any analysis on the measurement properties of the formative constructs, and five use techniques for evaluating reflective constructs (e.g., Cronbach's alpha, composite reliability, and AVE) to assess formative constructs, which is inappropriate. Overall, many of the articles we review do not adequately assess the properties of the formative constructs. Seven articles examine formative construct item weights; four evaluate the multicollinearity of the formative measurement, mostly using the variance inflation factor (VIF); three examine discriminant validity. None of the articles we review evaluates the external validity of the formative construct because no study includes additional reflective items or constructs to capture the formative constructs.

We find that 26 out of the 42 articles report which PLS software is used. PLS-Graph is the most popular PLS software, adopted by 19 of the articles. SmartPLS, however, is gaining popularity, considering that all six OM articles that use SmartPLS were published after 2009. Only one article adopts Visual PLS.

Our review identifies 22 articles that report the details of their bootstrapping procedures. We observe that the number of bootstrap samples generated ranges from 100 to 1500, with the most common number of resamples being 500 (n = 11). Two of the articles perform bootstrapping procedures with different rounds of resampling to check the robustness of the results (Ahuja et al., 2003; Rosenzweig, 2009). This is a good practice for checking the robustness of the significance of path coefficients.

With respect to reporting results, each of the articles we review reports the sign, magnitude, and statistical significance of path coefficients. In general, all of the reviewed articles exhibit a good understanding that the objective of PLS is not to estimate overall model fit, but rather to maximize the variance explained of the endogenous variables. Thirty-six of the 42 articles report R² of the endogenous variables. However, other techniques for evaluating predictive validity are underused. Only six articles report the effect size (f²) and four report predictive relevance (Q²). Among the articles we review, Müller and Gaudig (2010) provide a good example of reporting the predictive validity of the research model because they report R², f², and Q².

We note that some of the problems, particularly those related to bootstrapping procedures, evaluating formative constructs, and reporting results, could have been avoided if stricter quality control mechanisms related to the use of PLS had been enforced during the review process. We recommend that editors and reviewers request contributing authors to follow rigorous standards when using PLS to help improve the rigor of PLS use.




Table 9
Summary of the OM articles that use PLS (n = 42). (a)

Rationales for using PLS (30 articles specify a rationale for using PLS) (b):
Exploratory or predictive nature of the study: 11 | Small sample size: 14 | Model complexity: 4 | Formative constructs used: 8 | Non-normal data: 6 | No rationale for using PLS specified: 12

Sample size (c):
Summary: Mean = 246 | Median = 126 | Min = 35 | Max = 3926
Distribution: n < 50: 1 | 50 < n ≤ 100: 13 | 100 < n ≤ 150: 11 | 150 < n ≤ 200: 4 | 200 < n ≤ 300: 8 | 300 < n ≤ 500: 4 | n > 500: 1

Formative constructs (19 articles use formative constructs) — assessment of formative constructs:
Contribution of items to the construct: 7 | Multicollinearity between items: 4 | Nomological validity (d): N/A | External validity: 0 | Discriminant validity: 3 | Formative constructs not assessed: 3 | Formative constructs assessed as reflective constructs: 5

Bootstrapping (22 articles report details of bootstrapping procedures) — number of bootstrapping samples:
n = 100: 1 | n = 200: 4 | 200 < n < 500: 2 | n = 500: 11 | n > 500: 5

PLS software used (26 articles report the PLS software used):
PLS Graph: 19 | SmartPLS: 6 | Visual PLS: 1

Reporting of results:
Statistical power analysis performed: 2 | Structural path confidence interval reported: 0 | R² reported: 36 | f² reported: 6 | Q² reported: 4 | Formative item weights reported: 14

(a) Each number in the table indicates the number of articles that are classified in a given category.
(b) Some articles provide more than one rationale for using PLS.
(c) In cases where sample sizes can be counted differently depending on the level of observation, we use the smaller sample size to be conservative.
(d) We did not summarize nomological validity because each article reports some statistically significant structural paths, which to some extent demonstrates the nomological validity of formative constructs.


5. Discussion and conclusion

Our study aims to provide a practical guideline that helps OM researchers evaluate and use PLS. Our study also reviews PLS use in the recent empirical OM literature, which points to the need for a practical guideline for using PLS tailored to the OM audience.

The use of PLS has been growing in the OM literature and will likely gain more popularity. Given the specific challenges empirical OM researchers face, such as the difficulties of obtaining large samples and a lack of well-established scales, PLS can be a potentially useful approach to SEM. Because many OM researchers are unfamiliar with PLS, an OM-specific guideline that focuses on practical applications rather than the technical details of PLS will be particularly helpful.

The main contribution of our study is to provide a practical guideline for using PLS with detailed illustrative examples from the OM literature. This guideline is expected to help improve the methodological rigor of PLS use in the OM field. A second contribution is that our study presents a review and summary of PLS use in the OM and related fields. Our review helps OM researchers learn from past PLS use and subsequently improve future PLS use.

Although PLS has been used in a variety of research fields, the extent to which it has been used is far less than that of CBSEM in most research fields. Goodhue et al. (2006) assert that it is only in the IS field where PLS has become the dominant approach to SEM. The somewhat limited use of PLS relative to CBSEM in many research fields seems to reflect researchers' general concerns about the weaknesses of the PLS method. Indeed, statistically, CBSEM is superior to PLS in the sense that parameter estimates are unbiased (Chin, 1995). Thus, if CBSEM assumptions are met, researchers should strongly consider using CBSEM.

However, we suggest that concerns about PLS should not preclude it as a potential analysis technique because no empirical methodology is perfect. If the assumptions of the PLS method are met and it is used appropriately, it can be a useful data analysis technique. Our position is that OM researchers should consider PLS when CBSEM is unobtainable due to the violation of some key CBSEM assumptions (e.g., sample sizes and sample distribution) or model identification problems. PLS is not a competing method to CBSEM. "Depending upon the researcher's objective and epistemic view of data to theory, properties of the data at hand, or level of theoretical knowledge and measurement development, the PLS approach may be more appropriate in some circumstances" (Chin, 1998b, p. 295). In fact, CBSEM and PLS are considered complementary rather than competitive methods, and both "have a rigorous rationale of their own" (Barroso et al., 2010, p. 432).

Fig. 4. PLS algorithms. The flow chart depicts an initial outside approximation (initial LV scores are estimated as the aggregate of the indicator (item) scores with equal weights), followed by repeated iterations of an inside approximation (using the current LV scores to estimate structural path weights, and using those structural path weights to estimate new LV scores) and an outside approximation (using the current LV scores to estimate new item weights, and using those item weights to estimate new LV scores), until the model converges. The structural path weight between two connected latent variables is estimated as the correlation between the latent variable (LV) scores of the two LVs. Item weights are estimated by simple regression for reflective items (item scores of each reflective item are regressed on LV scores) and by multiple regression for formative items (LV scores are regressed on the set of formative items).


Although we argue that OM researchers should not preclude the possibility of using PLS, we oppose accepting PLS as the preferred approach to SEM without a careful assessment of its applicability. OM researchers should be cautious in assessing their model assumptions and data requirements, especially the sample size requirement, because it is often cited as the main reason for using PLS. Because PLS "is not a silver bullet to be used with samples of any size" (Marcoulides and Saunders, 2006, p. VIII), researchers should consider a variety of factors and perform power analyses to determine whether the sample size is adequate to support the statistical inference.

As empirical OM researchers start to recognize the potential of PLS, we expect that more OM researchers will seriously consider PLS as a potential SEM method. We hope our study can serve as a useful guideline to help empirical OM researchers evaluate and use PLS.

Appendix A. PLS algorithms (PLS-Graph)

The basic idea behind the PLS algorithm is relatively straightforward. First, the PLS algorithm uses an iterative process to estimate the item weights that link the items to their respective latent variable. Second, once the final item weights are obtained, the latent variable (LV) scores of each LV are calculated as a weighted average of its items. Here the item weights estimated earlier are used as the weights for aggregating item scores into the LV scores. Finally, the LV scores just estimated are used in a set of regression equations to estimate the structural path weights (i.e., relationships between LVs) (Fornell and Bookstein, 1982).

Central to the PLS algorithm is the estimation of item weights, which uses an iterative process that almost always converges to a stable set of item weight estimates. The procedure for obtaining item weights is shown in Fig. 4. Each iteration involves a two-step estimation. The two steps are called inside approximation and outside approximation, respectively. Outside approximation generates LV scores as a weighted average of item scores based on the item weight estimates. Inside approximation generates LV scores as a weighted average of the LV scores of the neighboring LVs based on the structural path weights. In each iteration, the inside approximation first uses LV score estimates from the previous round of outside approximation to calculate structural path weights. The structural path weight between two LVs is equal to the correlation between the two LV scores if the two LVs are structurally connected, and zero otherwise. Next, PLS uses these structural path weights to compute a new set of LV scores. In the outside approximation, PLS uses the just generated LV scores to estimate a new set of item weights. Finally, these item weights are used to generate another set of LV scores that will be used in the next iteration. The method for determining item weights is similar to simple regression for reflective constructs (i.e., the item scores of each reflective item are regressed on the LV scores) and similar to multiple regression for formative constructs (i.e., LV scores are regressed on all of the formative items of the LV). PLS repeats the iteration until the percentage changes of each outside approximation of item weights relative to the previous round are less than 0.001. Once the final item weights are obtained, PLS calculates the LV scores of each LV as the weighted average of its items. PLS then uses the LV scores just generated to estimate the structural path weights using ordinary least squares (OLS) regression. The LV scores of each dependent LV in the research model are regressed on the LV scores of the respective independent LVs.

The structural path weight between two connected latent variables is estimated as the correlation between the latent variable (LV) scores of the two LVs. Item weights are estimated by simple regression for reflective items (item scores of each reflective item are regressed on LV scores) and by multiple regression for formative items (LV scores are regressed on the set of formative items).
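To make the iteration concrete, the following minimal Python sketch (ours, not the PLS-Graph or SmartPLS implementation) follows the steps described above: equal initial weights, an inside approximation that weights each LV by its correlations with structurally connected LVs, and an outside approximation that re-estimates item weights by simple regression (reflective, Mode A) or multiple regression (formative, Mode B), iterating until the relative change in weights is small. Path coefficients would then be obtained by OLS regressions among the resulting LV score columns.

```python
import numpy as np

def pls_path(X_blocks, adjacency, modes, tol=1e-3, max_iter=300):
    """Minimal sketch of the iterative PLS estimation described above.

    X_blocks : list of (n x k_j) arrays of item scores, one block per LV
    adjacency: symmetric 0/1 matrix (zero diagonal); 1 if two LVs are connected
    modes    : 'A' (reflective: simple regressions) or 'B' (formative:
               multiple regression) for each block
    Returns the final outer weights and the standardized LV scores.
    """
    def standardize(a):
        return (a - a.mean(axis=0)) / a.std(axis=0)

    X = [standardize(np.asarray(x, dtype=float)) for x in X_blocks]
    # Initial outside approximation: equal weights for every item
    w = [np.ones(x.shape[1]) for x in X]
    scores = np.column_stack([standardize(x @ wj) for x, wj in zip(X, w)])

    for _ in range(max_iter):
        # Inside approximation: weight each LV by its correlations with
        # structurally connected neighbours (zero if not connected)
        corr = np.corrcoef(scores, rowvar=False) * adjacency
        inner = standardize(scores @ corr)

        # Outside approximation: re-estimate item weights from the inner proxies
        w_new = []
        for j, x in enumerate(X):
            if modes[j] == 'A':                                   # reflective
                w_new.append(x.T @ inner[:, j] / len(x))
            else:                                                 # formative
                w_new.append(np.linalg.lstsq(x, inner[:, j], rcond=None)[0])

        change = max(np.max(np.abs(wn - wo) / (np.abs(wo) + 1e-12))
                     for wn, wo in zip(w_new, w))
        w = w_new
        scores = np.column_stack([standardize(x @ wj) for x, wj in zip(X, w)])
        if change < tol:
            break
    return w, scores
```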


References

Ahuja, M.K., Galletta, D.F., Carley, K.M., 2003. Individual centrality and performance in virtual R&D groups: an empirical study. Management Science 49 (1), 21–38.
Anderson, J.C., Rungtusanatham, M., 1994. A theory of quality management underlying the Deming management method. Academy of Management Review 19 (3), 472–509.
Anderson, J.C., Rungtusanatham, M., Schroeder, R.G., Devaraj, S., 1995. A path analytic model of a theory of quality management underlying the Deming management method: preliminary empirical findings. Decision Sciences 26 (5), 637–658.
Andreev, P., Heart, T., Maoz, H., Pliskin, N., 2009. Validating formative partial least squares (PLS) models: methodological review and empirical illustration. ICIS 2009 Proceedings.
Bagozzi, R.P., 1994. Structural equation models in marketing research: basic principles. In: Bagozzi, R.P. (Ed.), Basic Principles of Marketing Research. Blackwell, Oxford, pp. 317–385.
Barman, S., Hanna, M.D., LaForge, R.L., 2001. Perceived relevance and quality of POM journals: a decade later. Journal of Operations Management 19 (3), 367–385.
Barroso, C., Carrión, G.C., Roldán, J.L., 2010. Applying maximum likelihood and PLS on different sample sizes: studies on Servqual model and employee behavior model. In: Vinzi, V.E., Chin, W.W., Henseler, J., Wang, H. (Eds.), Handbook of Partial Least Squares: Concepts, Methods and Applications. Springer-Verlag, Berlin, Germany, pp. 427–447.
Bentler, P.M., Chou, C.P., 1987. Practical issues in structural modeling. Sociological Methods and Research 16 (1), 78–117.
Bollen, K.A., 1989. Structural Equations with Latent Variables. Wiley-Interscience, New York, NY.
Bollen, K.A., Lennox, R., 1991. Conventional wisdom on measurement: a structural equation perspective. Psychological Bulletin 110 (2), 305–314.
Bollen, K.A., Davis, W.R., 2009. Causal indicator models: identification, estimation, and testing. Structural Equation Modeling: A Multidisciplinary Journal 16 (3), 498–522.
Boßow-Thies, S., Albers, S., 2010. Application of PLS in marketing: content strategies on the internet. In: Vinzi, V.E., Chin, W.W., Henseler, J., Wang, H. (Eds.), Handbook of Partial Least Squares: Concepts, Methods and Applications. Springer-Verlag, Berlin, Germany, pp. 589–604.
Cao, M., Zhang, Q., 2011. Supply chain collaboration: impact on collaborative advantage and firm performance. Journal of Operations Management 29 (3), 163–180.
Cenfetelli, R.T., Bassellier, G., 2009. Interpretation of formative measurement in information systems research. MIS Quarterly 33 (4), 689–707.
Cheung, M.-S., Myers, M.B., Mentzer, J.T., 2010. Does relationship learning lead to relationship value? A cross-national supply chain investigation. Journal of Operations Management 28 (6), 472–487.
Chin, W.W., 1995. Partial least squares is to LISREL as principal components analysis is to common factor analysis. Technology Studies 2, 315–319.
Chin, W.W., 1998a. Issues and opinion on structural equation modeling. MIS Quarterly 22 (1), 7–16.
Chin, W.W., 1998b. The partial least squares approach to structural equation modeling. In: Marcoulides, G.A. (Ed.), Modern Methods for Business Research. Lawrence Erlbaum Associates, Mahwah, NJ, pp. 295–336.
Chin, W.W., Marcolin, B.L., Newsted, P.R., 2003. A partial least squares latent variable modeling approach for measuring interaction effects: results from a Monte Carlo simulation study and an electronic-mail emotion/adoption study. Information Systems Research 14 (2), 189–217.
Chin, W.W., Newsted, P.R., 1999. Structural equation modeling analysis with small samples using partial least squares. In: Hoyle, R. (Ed.), Statistical Strategies for Small Sample Research. Sage, Thousand Oaks, CA.
Chung, K.-H., Lee, S.M.S., 2001. Optimal bootstrap sample size in construction of percentile confidence bounds. Scandinavian Journal of Statistics 28 (1), 225–239.
Cohen, J., 1988. Statistical Power Analysis for the Behavioral Sciences, 2nd ed. Lawrence Erlbaum, Hillside, NJ.
Cording, M., Christmann, P., King, D.R., 2008. Reducing causal ambiguity in acquisition integration: intermediate goals as mediators of integration decisions and acquisition performance. Academy of Management Journal 51 (4), 744–767.
Diamantopoulos, A., Siguaw, J.A., 2006. Formative versus reflective indicators in organizational measure development: a comparison and empirical illustration. British Journal of Management 17 (4), 263–282.
Diamantopoulos, A., Winklhofer, H.M., 2001. Index construction with formative indicators: an alternative to scale development. Journal of Marketing Research 38 (2), 269–277.
Fornell, C., Bookstein, F.L., 1982. Two structural equation models: LISREL and PLS applied to consumer exit-voice theory. Journal of Marketing Research 19 (4), 440–452.
Gefen, D., Straub, D., 2005. A practical guide to factorial validity using PLS-Graph: tutorial and annotated example. Communications of AIS 16, 91–109.
Geisser, S., 1975. The predictive sample reuse method with applications. Journal of the American Statistical Association 70 (350), 320–328.
Goh, C., Holsapple, C.W., Johnson, L.E., Tanner, J.R., 1997. Evaluating and classifying POM journals. Journal of Operations Management 15 (2), 123–138.
Goodhue, D., Lewis, W., Thompson, R., 2006. PLS, small sample size and statistical power in MIS research. In: The 39th Annual Hawaii International Conference on System Sciences, Kauai, Hawaii.
Götz, O., Liehr-Gobbers, K., Krafft, M., 2010. Evaluation of structural equation models using the partial least squares (PLS) approach. In: Vinzi, V.E., Chin, W.W., Henseler, J., Wang, H. (Eds.), Handbook of Partial Least Squares: Concepts, Methods and Applications. Springer-Verlag, Berlin, Germany, pp. 691–711.
Gudergan, S.P., Ringle, C.M., Wende, S., Will, A., 2008. Confirmatory tetrad analysis in PLS path modeling. Journal of Business Research 61 (12), 1238–1249.
Hair, J.F., Black, B., Babin, B., Anderson, R.E., Tatham, R.L., 2006. Multivariate Data Analysis. Prentice Hall, Upper Saddle River, NJ.
Helm, S., Eggert, A., Garnefeld, I., 2010. Modeling the impact of corporate reputation on customer relationship and loyalty using partial least squares. In: Vinzi, V.E., Chin, W.W., Henseler, J., Wang, H. (Eds.), Handbook of Partial Least Squares: Concepts, Methods and Applications. Springer-Verlag, Berlin, Germany, pp. 515–534.
Hennig-Thurau, T., Groth, M., Paul, M., Gremler, D.D., 2006. Are all smiles created equal? How emotional contagion and emotional labor affect service relationships. Journal of Marketing 70 (3), 58–73.
Henseler, J., Ringle, C.M., Sinkovics, R.R., 2009. The use of partial least squares path modeling in international marketing. In: Sinkovics, R.R., Ghauri, P.N. (Eds.), Advances in International Marketing. Emerald, Bingley, pp. 277–320.
Hox, J.J., Maas, C.J.M., 2001. The accuracy of multilevel structural equation modeling with pseudobalanced groups and small samples. Structural Equation Modeling: A Multidisciplinary Journal 8 (2), 157–174.
Hulland, J., 1999. Use of partial least squares (PLS) in strategic management research: a review of four recent studies. Strategic Management Journal 20 (2), 195–204.
Inman, R.A., Sale, R.S., Green Jr., K.W., Whitten, D., 2011. Agile manufacturing: relation to JIT, operational performance and firm performance. Journal of Operations Management 29 (4), 343–355.
Jarvis, C.B., Mackenzie, S.B., Podsakoff, P.M., Mick, D.G., Bearden, W.O., 2003. A critical review of construct indicators and measurement model misspecification in marketing and consumer research. Journal of Consumer Research 30 (2), 199–218.
Kenny, D.A., Kashy, D.A., Bolger, N., 1998. Data analysis in social psychology. In: Gilbert, D., Fiske, S., Lindzey, G. (Eds.), The Handbook of Social Psychology, vol. 1, 4th ed. McGraw-Hill, Boston, MA, pp. 233–265.
Klein, R., Rai, A., 2009. Interfirm strategic information flows in logistics supply chain relationships. MIS Quarterly 33 (4), 735–762.
Lei, M., Lomax, R.G., 2005. The effect of varying degrees of nonnormality in structural equation modeling. Structural Equation Modeling 12 (1), 1–27.
MacCallum, R.C., Roznowski, M., Necowitz, L.B., 1992. Model modifications in covariance structure analysis: the problem of capitalization on chance. Psychological Bulletin 111 (3), 490–504.
MacCallum, R.C., Browne, M.W., 1993. The use of causal indicators in covariance structure models: some practical issues. Psychological Bulletin 114 (3), 533–541.
Malhotra, M.K., Grover, V., 1998. An assessment of survey research in POM: from constructs to theory. Journal of Operations Management 16 (4), 407–425.
Marcoulides, G.A., Saunders, C., 2006. PLS: a silver bullet? MIS Quarterly 30 (2), III–IX.
Mathieson, K., Peacock, E., Chin, W., 2001. Extending the technology acceptance model: the influence of perceived user resources. Database 32 (3), 86–112.
McKnight, D.H., Choudhury, V., Kacmar, C., 2002. Developing and validating trust measures for e-commerce: an integrative typology. Information Systems Research 13 (3), 334–359.
Müller, M., Gaudig, S., 2010. An empirical investigation of antecedents to information exchange in supply chains. International Journal of Production Research 49 (6), 1531–1555.
Nunnally, J.C., Bernstein, I.H., 1994. Psychometric Theory, 3rd ed. McGraw-Hill, New York.
Petter, S., Straub, D., Rai, A., 2007. Specifying formative constructs in information systems research. MIS Quarterly 31 (4), 623–656.
Qureshi, I., Compeau, D., 2009. Assessing between-group differences in information systems research: a comparison of covariance- and component-based SEM. MIS Quarterly 33 (1), 197–214.
Roberts, N., Thatcher, J.B., Grover, V., 2010. Advancing operations management theory using exploratory structural equation modelling techniques. International Journal of Production Research 48 (15), 4329–4353.
Rosenzweig, E.D., 2009. A contingent view of e-collaboration and performance in manufacturing. Journal of Operations Management 27 (6), 462–478.
Roth, A.V., Schroeder, R., Huang, X., Kristal, M.M., 2007. Handbook of Metrics for Operations Management: Multi-item Measurement Scales and Objective Items. Sage Publications, Newbury Park, CA.
Ruekert, R.W., Churchill Jr., G.A., 1984. Reliability and validity of alternative measures of channel member satisfaction. Journal of Marketing Research 21 (2), 226–233.
Schroeder, R.G., Flynn, B.B., 2001. High Performance Manufacturing: Global Perspectives. Wiley, New York.
Shaffer, J.P., 1995. Multiple hypothesis testing. Annual Review of Psychology 46 (1), 561–584.
Shah, R., Goldstein, S.M., 2006. Use of structural equation modeling in operations management research: looking back and forward. Journal of Operations Management 24 (2), 148–169.
Soteriou, A.C., Hadjinicola, G.C., Patsia, K., 1998. Assessing production and operations management related journals: the European perspective. Journal of Operations Management 17 (2), 225–238.
Stone, M., 1974. Cross-validatory choice and assessment of statistical predictions. Journal of the Royal Statistical Society, Series B 36 (2), 111–133.
Streukens, S., Wetzels, M., Daryanto, A., Ruyter, K.D., 2010. Analyzing factorial data using PLS: application in an online complaining context. In: Vinzi, V.E., Chin, W.W., Henseler, J., Wang, H. (Eds.), Handbook of Partial Least Squares: Concepts, Methods and Applications. Springer-Verlag, Berlin, Germany, pp. 567–587.
Swink, M., Narasimhan, R., Wang, C., 2007. Managing beyond the factory walls: effects of four types of strategic integration on manufacturing plant performance. Journal of Operations Management 25 (1), 148–164.
Tenenhaus, M., Vinzi, V.E., Chatelin, Y.-M., Lauro, C., 2005. PLS path modeling. Computational Statistics & Data Analysis 48 (1), 159–205.
Vokurka, R.J., 1996. The relative importance of journals used in operations management research: a citation analysis. Journal of Operations Management 14 (4), 345–355.
Wacker, J.G., 1998. A definition of theory: research guidelines for different theory-building research methods in operations management. Journal of Operations Management 16 (4), 361–385.
Wold, H., 1966. Estimation of principal components and related models by iterative least squares. In: Krishnaiah, P.R. (Ed.), Multivariate Analysis. Academic Press, New York.
Wold, H., 1982. Soft modeling: the basic design and some extensions. In: Jöreskog, K.G., Wold, H. (Eds.), Systems Under Indirect Observation: Causality, Structure, Prediction. North-Holland, Amsterdam.