
Social interactions of information systems development teams: a performance perspective
Steve Sawyer,* Patricia J. Guinan† & Jay Cooprider‡

*School of Information Studies, Syracuse University, Syracuse, NY 13244-4100, USA, email: [email protected], †Babson College, Babson Park, MA 02157, USA, email: [email protected], and ‡Bentley College, 175 Forest Street, Waltham, MA 02154, USA, email: [email protected]

Abstract. We report results from a longitudinal study of information systems development (ISD) teams. We use data drawn from 60 ISD teams at 22 sites of 15 Fortune 500 organizations to explore variations in performance relative to these teams' social interactions. To do this, we characterize ISD as a form of new product development and focus on team-level social interactions with external stakeholders. Drawing on cluster analysis, we identify five patterns of team-level social interactions and the relationships of these patterns to a suite of objective and subjective measures of ISD performance. Analysis leads us to report three findings. First, data indicate that no one of the five identified patterns maximizes all performance measures. Second, data make clear that the most common approach to ISD is the least effective relative to our suite of performance measures. Third, data from this study show that early indications of ISD project success do not predict actual outcomes. These findings suggest two issues for research and practice. First, these findings indicate that varying patterns of social interactions lead to differences in ISD team performance. Second, the findings illustrate that singular measures of ISD performance are an oversimplification and that multiple measures of ISD performance are unlikely to agree.

Keywords: information systems development, teams, performance, boundary-spanning, cluster analysis, social informatics

doi: 10.1111/j.1365-2575.2008.00311.x
Info Systems J (2010) 20, 81–107
© 2008 The Authors. Journal compilation © 2008 Blackwell Publishing Ltd

INTRODUCTION

Through this paper we extend current theorizing regarding information systems development (ISD) by exploring the performance implications of various patterns of social interactions among those who develop information systems (IS). We note that social interactions have long been seen as a dominant aspect of ISD (e.g. Weinberg, 1971; Brooks, 1974; Walz et al., 1993; Russo & Stolterman, 2000; Kautz & Nielsen, 2004), and variations in these social interactions are able to explain performance differences among ISD teams (e.g. Curtis et al., 1988; Hirschheim et al., 1991; Agerfalk & Erikkson, 2006).


Some have even argued that the social aspects of ISD often overshadow the substantial technical complexities (e.g. Newman & Robey, 1992; Keil, 1995; Robey & Newman, 1996). The details of the relationships among social interactions and ISD performance are still not well understood empirically or theoretically. Our intent is to close this gap, and here we report findings in support of this research question: how do specific patterns of social interactions during ISD affect performance?

We draw on data from 60 ISD teams at 22 sites of 15 Fortune 500 companies. These data were gathered by using a set of five surveys drawn from each ISD team at three points in time: following requirements determination, at implementation and 3–6 months after implementation. To help us characterize and explore relationships among social interactions and ISD performance, we use cluster analysis (as have other scholars in the field of IS; see Sabherwal & Robey, 1993; 1995). As noted below in more detail, cluster analysis is a set of techniques that can be used to explore underlying patterns of relationships based on pre-defined attributes. In our case, these clustering attributes are some of the social activities and performance measures of ISD teams.

The paper continues with the following section containing our conceptualization of ISD, focusing on the roles of external-to-the-team social interactions. This is followed by sections in which we describe our research model, research design and data collection; report a summary of findings; and present a discussion of the implications of this work for research and practice.

CONCEPTUALIZING ISD

Developing an IS requires broad knowledge of the intended domain, exacting knowledge of data structures and processing logic, and disciplined knowledge of how best to develop software (Iivari et al., 2004). This is typically carried out via teams. For purposes of the research reported here, an ISD team is two or more software developers who build a defined product to be delivered within a certain time frame. By 'software developer' we mean here a range of roles, such as programmer, analyst, system tester, database administrator, domain expert seconded to the ISD team and the team's technical leads and project managers.

An ISD team relies on the collective skills of its members because the inherent complexity and scope of the effort needed to develop software normally exceed the ability of any one person. These tasks range from the semi-structured efforts used to gather requirements, through the detailed and exacting steps of logical design and programming, to the disciplined steps of testing and integrating the software modules (Walz et al., 1993).

A social perspective on ISD

Our perspective on ISD is social (e.g. Hirschheim et al., 1991; Newman & Robey, 1992; Robey & Newman, 1996; Russo & Stolterman, 2000; Kautz & Nielsen, 2004).


A social perspective focuses attention on how IS developers work together to produce software. Social aspects of ISD include informal communication among members and intra-group activities, such as discussing how activities will be performed, finding – and taking – the time to talk with other team members, sharing of ideas and information, and resolving the conflicts that arise in the course of working together (Kirsch, 1996). The social aspects of ISD also include coordinating with non-team members, managing the flow of information across the semi-permeable boundaries that define who is – and is not – on the ISD team, and making allies with and managing resources shared with other teams (Guinan et al., 1998).

Other perspectives on ISD include production, individual, political and contextual. The production perspective is the most common, and its focus highlights the roles of particular methods, techniques and tools (e.g. Thayer, 2000). The individual perspective focuses on the unique contributions of individuals to the team (e.g. Weinberg, 1971; Sheil, 1981; Curtis et al., 1995). The political perspective engages power relations among stakeholders (e.g. Kling & Iacono, 1984; Markus & Bjorn-Andersen, 1987). A contextualist perspective engages issues such as organizational competitiveness, the industrial milieu in which the company operates, the degree of managerial skill, the level of resources and other extra-organizational factors (e.g. Gillette & McCollom, 1990; Nambisan & Wilemon, 2000). Focusing on ISD from a social perspective may mean de-emphasizing (but not devaluing) those factors highlighted in contextual, production, political and individual approaches.

The social activities of ISD

For three reasons, we draw on the small group research that is focused on new product development (NPD) to help conceptualize the social activities of ISD. First, as in NPD, an ISD team's intent is to produce a product for use by others. In the case of ISD, the product is the software-based IS delivered to users (Nambisan, 2003). Second, both NPD and ISD teams are characterized by boundary-spanning behaviours (Hansen, 1999; Nambisan, 2003). By boundary-spanning behaviours we mean actions that members of the team make to connect to external stakeholders: people who are outside of the team, such as sponsors, customers and specialists (Ancona & Caldwell, 1990; Nambisan & Wilemon, 2000). Third, both NPD and ISD teams engage in similar tasks, and this similarity in tasks makes the team's structure and activities comparable (Hansen, 1999; MacCormack et al., 2001; Sole & Edmondson, 2002; Nambisan, 2003).

Social activities of a team's members are either internally or externally focused. The ISD team literature typically highlights the importance of internal social interactions (Walz et al., 1993). However, in NPD, the external-to-the-team (or boundary-spanning) activities of the team's members are often highlighted as these connect the team's members to others in the organization (and other larger social structures) in which the team exists (Ancona & Caldwell, 1990; Ancona & Caldwell, 1992; Cross et al., 2000; Joshi, 2006).

We focus on the boundary-spanning aspects of the ISD team's social activities for three reasons. First, we highlight that team members must make a choice to spend their time and energy on either internal or external issues (see also Curtis et al., 1988; Guinan et al., 1998).


It is a zero-sum choice because a team's resources are finite and both aspects of the social interactions must be attended to (e.g. Ancona & Caldwell, 1998; Joshi, 2006). Second, the current wisdom regarding ISD is that external-to-the-team activities, such as keeping users involved and managing stakeholder expectations, will lead to higher levels of user satisfaction and better IS (Ginzberg, 1981; Lakhanpal, 1993; Cavaye, 1995). Third, evidence from the NPD literatures indicates that teams who better manage their external dependencies perform better than those who only manage their internal dynamics (Ancona & Caldwell, 1992; Sole & Edmondson, 2002).

One means of encouraging communication across boundaries is to develop special boundary-spanning roles (Thompson, 1967; Aldrich & Herker, 1977; Gladstein, 1984; Bartel, 2001). Ancona & Caldwell (1990) and Ancona & Caldwell (1992), in their work on NPD teams, identified five boundary-spanning activities: ambassador, scout, guard, sentry and coordinator. These roles are rarely exclusive to a person or fixed to a person over time. Rather, they represent activities that members of the team take on as needed, and these roles may be explicitly or tacitly assigned.

Ambassadorial activities are those aimed at representing the team to its external constituents. Individuals performing ambassadorial activities serve as key nodes in the organization's formal hierarchy and obtain feedback about team progress while negotiating for additional time and resources from others in the organization. Ambassadors build support for their team by providing favourable reports to outsiders; report the team's progress to those higher in the organization; and intercede in the face of external opposition.

Scouting is the activity of crossing the team's borders to bring back information about what is going on elsewhere in the organization. Scouting can include scanning external markets, searching for new technologies, identifying relevant activities outside of the team and uncovering pockets of potential competition.

Guard and sentry activities serve to protect the team by allowing team members to work with minimal distraction. Guards keep information and resources inside the group. Guards monitor requests for information or resources by outsiders and help determine what the group will release in response to those demands. Sentries serve to 'police' the team's boundary by controlling the information and resources that outsiders want to send the team. Guards and sentries essentially act as filters, deciding what will be the flow of input to and from the team's members.

Finally, coordinators focus on communicating across, rather than up and down, the organization. Coordinator activities include discussing design problems with others, obtaining feedback about team progress from external sources and planning events such as negotiating for additional time or resources from other comparable work groups.

ISD performance

There have been a variety of measures proposed and/or used to evaluate the performance of ISD teams, each of which has its own strengths and weaknesses (DeLone & McLean, 1992; Lakhanpal, 1993; DeLone & McLean, 2003; Espinosa et al., 2006).


For our purposes, ISD performance is multifaceted and no one measure captures the totality of this effort (Keen, 1980; Melone, 1990). Thus, we use a suite of performance measures, discussed below.

One common performance measure is stakeholder assessment. As noted above, stakeholders are individuals who are not team members but who influence the development activities and/or are affected by the resulting IS. Stakeholders typically assess team performance based on their knowledge of the organizational needs, experience with previous and ongoing ISD projects, and their expectation of quality work (Seidler, 1974; Henderson & Lee, 1992; Guinan et al., 1998).

As team members are often the most knowledgeable about the specific activities of the development, our second performance measure is their self-evaluation of the team. Performance assessments often vary across constituent groups as they have different interests and experiences (Tsui, 1984). It may be that team members are more interested in creating a more task-oriented environment, while stakeholders are more interested in the specific outputs generated by the team. In addition, team members have a constant stream of information about team interaction and can use that to evaluate performance. Building on the IS tradition, we include user satisfaction in our suite of ISD performance measures (Bailey & Pearson, 1981; Ives et al., 1983; Doll & Torkzadeh, 1988). This provides a customer or user evaluation of the ISD team's effort.

As a point of comparison with the previous measures, we include two more traditional evaluations of ISD performance: labour productivity and user satisfaction. Labour productivity arises from viewing ISD as a production effort, and we measure the development team's efficiency by dividing an assessment of the total system functionality (measured in function points; see IFPUG, 2002) by the total labour costs for the system development effort. User satisfaction is measured several months following the take-up of the newly implemented IS.
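As a concrete illustration of this productivity measure, the minimal sketch below divides delivered function points by total labour cost. The figures are hypothetical; the paper does not report per-team values.

```python
# A minimal sketch of the labour productivity measure described above:
# delivered functionality (in function points) divided by the total labour
# cost of the development effort. The example figures are hypothetical.

def labour_productivity(function_points: float, labour_cost: float) -> float:
    """Function points delivered per unit of labour cost."""
    if labour_cost <= 0:
        raise ValueError("labour cost must be positive")
    return function_points / labour_cost

# e.g. a team delivering 420 function points for 150,000 dollars of labour:
print(labour_productivity(420, 150_000))  # 0.0028 function points per dollar
```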

THE RESEARCH MODEL

In Figure 1, we present our conceptualization of the ISD team's boundary-spanning activities and measures of performance against the data collection time line. Per both Weick's (1995) and Sutton & Staw's (1995) guidance, since we are not hypothesizing specific relationships, there are no arrows in the figure. Rather, we focus here on identifying some overall patterns of activities and the performance impacts of these patterns. Appendix A contains operational definitions of the variables used in this study, the items used to develop these variables and each resulting scale's reliability estimate (using the alpha measure; see Cronbach, 1951). In this section, we describe the generic ISD model, map to it the boundary-spanning activities of ISD teams and detail our characterization of ISD performance.
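For readers unfamiliar with the alpha measure, the sketch below computes Cronbach's (1951) alpha for a multi-item scale. The response matrix is simulated for illustration, not drawn from the study's data.

```python
# A minimal sketch of Cronbach's alpha (Cronbach, 1951), the scale
# reliability estimate reported in Appendix A. `items` is a hypothetical
# (respondents x items) matrix of answers to one multi-item scale.
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    k = items.shape[1]                         # number of items in the scale
    item_vars = items.var(axis=0, ddof=1)      # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)  # variance of the summed scale
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

rng = np.random.default_rng(0)
responses = rng.integers(1, 8, size=(30, 4)).astype(float)  # 7-point items
print(round(cronbach_alpha(responses), 2))
```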

A generic ISD production model

We use a generic model of ISD characterized by three broad types of activities: requirements determination, IS development and post-implementation use (defined in more detail below).


These activities have traditionally been conceived as being linear (requirements precedes development, and use follows). However, these activities might be engaged iteratively or even concurrently depending on the particular ISD method used. The generic ISD production model is premised on the beliefs (1) that these three activities can be identified, independent of any one ISD method; and (2) that these activities can be identified (by the participants).

Given our focus on the social aspects of ISD, a generic model focusing on high-level depictions of activities is an adequate representation of ISD. Moreover, our data collection strategy allowed for the ISD team leaders to define the movement of their team members from one activity to the next. Thus, it is quite likely that some development work began before requirements were completed and that requirements definition continued during the development phase. Depicting the myriad approaches and paths that working teams pursue to develop IS into three broad activities is a justifiable, but possibly limiting, research design decision.

Some readers might see a generic ISD production model as implicitly advocating for a 'waterfall' approach. Moreover, the ordering of these three stages serves as part of the study's conceptual framing. Our view is that, in allowing teams to specify when they moved from one activity to another, we allowed for methodological variations (see also Lucas et al., 1990). Thus, while it is possible that the research design biased data collection towards traditional or waterfall approaches, the intent was to remove a focus on any particular methodological approach, as we discuss later in this paper. We further note that these three stages are temporal, and any one team's movement through these stages is tied to their specific calendar (e.g. Langley, 1999). However, this generalized model, while temporal, is not tied to a specific calendar. In contrast to ISD works such as Walz et al.'s (1993), our approach provides a means for comparing many teams' external social activities and performance.

In the requirements determination stage, the IS's specifications are determined and a document is produced that contains the agreed-upon specifications. This stage is characterized by extensive information sharing among developers and stakeholders.

[Figure 1. Research model and data collection time line. The figure arrays the five boundary-spanning activities (ambassador, scout, coordinator, sentry and guard), measured during requirements determination and again during development, along a time line running from requirements determination through development to use. Performance measures are attached to three points on this time line: stakeholder-rated effectiveness at requirements completion; stakeholder-rated effectiveness, team-rated effectiveness and labour productivity at implementation; and user satisfaction 3–6 months post-implementation.]


This information sharing reduces ambiguity about the needs of the resulting IS, helps to assure the quality of information sources used by the ISD team, and develops shared meanings (collective mind) among both stakeholders and the ISD team's members (Crowston & Kammerer, 1998).

The development stage includes design, coding, testing, installation of the system and training of its users. In this phase, interactions among the ISD team's members focus on translating needs into the design, discussing how to turn the design into executable code and ensuring that the different modules of code operate together, without error, while meeting the user's requirements. Additional interactions with stakeholders are often needed to clarify unresolved (or emergent) issues, to prepare and conduct the installation and training effort, and to help users understand what they will be receiving. This stage ends when implementation (the installation of the IS and training of relevant personnel) is completed.

As there is no specific time point where post-implementation use transitions to a steady-state (or habituated) use level, our selection of 3–6 months was pragmatic. That is, the sponsors of the research, and later the various teams at the sites who volunteered to participate, agreed with our assessment that a 3- to 6-month post-implementation period would allow users to form their perceptions of IS use and value.

Boundary-spanning activities during requirements determination

We conceptualize requirements determination as characterized by the need for extensive boundary-spanning, rapid changes in the information bases of both stakeholders (such as end users) and developers, and careful attention to the movements of information across the ISD team's boundary. Evidence suggests that ISD teams will exhibit higher than typical levels of ambassadorial activities as these help to spread the good word about the team (Kling & Iacono, 1984; Markus & Bjorn-Andersen, 1987). Other evidence indicates that scouting activities are particularly important in the early stages of NPD (Ancona & Caldwell, 1990). We maintain that this is also likely for ISD teams during requirements determination. It is an appropriate time to scan the environment for alternative views of the proposed system, to determine the most appropriate computer-assisted tools to use in the project and to examine the external influences on project success.

Contemporary research on ISD teams further suggests that high levels of guard and sentry activity may hinder the team during requirements determination (Guinan et al., 1998). Finally, in the ISD context, coordinator activities reflect some of the critical tasks required of project managers or team leaders who must negotiate for shared resources with other ISD teams (Brooks, 1974). For instance, project managers often compare performances of one team to another and seek help from other project leaders on different but comparable projects. What may not occur is the sharing of information among team members of different projects. This is due largely to the time constraints of the typical software project (Boehm, 1987; 1991).

Performance at requirements completion

Variations among the team's internal and boundary-spanning activities during requirements determination should affect the performance of the ISD team.


A consistent theme in the ISD literature is that requirements determination is a very difficult and important part of development (Boehm, 1981; Holtzblatt & Beyer, 1995). It seems reasonable to conclude that a poor job at requirements definition will have significant and negative impact on both the effectiveness and the efficiency of the resulting system (Canning, 1977; Boehm, 1981; 1987). Currently, errors in initial system requirements are believed to be largely responsible for the cost and schedule overruns that are still frequent in software development. Boehm (1981), for example, states that the cost of correcting erroneous requirements in an operational system is at least a hundred times greater than correcting them during requirements determination. For nearly 25 years, we have known that user dissatisfaction with systems can often be traced directly to poor requirements determination (Andrews, 1983).

Boundary-spanning activities during systems development

Following requirements completion, ISD teams typically focus on developing the software. However, the project's requirements change as the evolving IS is prototyped, explained or presented to various stakeholders and user groups. This means that team members remain involved in maintaining their external-to-the-team relationships with stakeholders and users.

As the time to implement draws near, ISD team members typically increase their boundary-spanning activities. This increase is driven by the need to prepare the users to receive and use the new IS being delivered. However, the form and manner of this increased boundary-spanning activity may be driven in part by the team's performance during requirements determination. Three scenarios reflect this temporal relationship between implementation and requirements.

In the first scenario, stakeholders' reactions to the project at requirements completion are favourable. As a result, during implementation, it is likely that the stakeholders will make fewer demands of the team. This means that the ISD team members perform less boundary-spanning during the development phase. Under the pressure of implementation deadlines, members of these teams may prioritize internal needs over external needs.

A second scenario begins with stakeholders' reactions to the project at requirements completion having been unfavourable. The stakeholders are likely to be suspicious of the team members' actions and performance as the ISD effort moves into development. Therefore, the stakeholders will expect more interaction with the ISD team during implementation, demanding more boundary-spanning efforts (e.g. requiring more project review meetings and project status reports). The ISD team members are forced to respond to stakeholder expectations and demands by exhibiting more ambassador and scout activities while simultaneously trying to protect their own interests by exhibiting more sentry and guard activities.

A third scenario, and the most plausible if much of the professional and some of the academic literature is to be believed, is that requirements continue to evolve or even emerge during development (e.g. Truex et al., 1999). This implies that users and developers continue to interact to pass on changes and make sense of what these changes mean for design. Further, ISD team members must also speak with other stakeholders, such as funders and sponsors.


In this scope-creep scenario, levels of ambassadorial, coordinator and scout activities will remain high (and may even increase), although guard and sentry activities may drop.

Implementation, use and post-implementation performance

Post-implementation assessments of performance are considered more robust than those measured at implementation, and as we noted, multiple means of assessing performance are thought to provide a better picture of ISD performance than are singular measures (Melone, 1990; Guinan et al., 1997). The ISD team's performance is assessed by both the team members' self-evaluation of their effectiveness and stakeholder ratings. Stakeholder ratings of effectiveness at implementation draw on the same people and on the same instrument used at requirements determination. At this time, the labour costs were collected, and along with the function point counts, the ISD team's labour productivity was determined. In addition, a team of researchers, trained in function point counting, estimated the delivered IS's function points soon after implementation. Function points were chosen as a measurement because they reflect system functionality (not size). Several of the research sponsors were also collecting function point data (allowing us an additional means to validate the counts). Three to six months after the implementation, users were asked to rate their satisfaction with the new system.

RESEARCH DESIGN AND DATA COLLECTION

We use a repeated cross-sectional, field-based survey design for this research (Selltiz et al., 1959). A repeated cross-section approach means that, at each data collection time, we gathered data from different people who held similar roles. By field-based we mean that we surveyed practising professionals working on active ISD projects. Given the turnover of personnel on these teams, it was not possible to ask the same people to complete the surveys at each data collection time. Thus, this is not a panel design. Further, as the team is the level of theory, measurement and analysis, a role-oriented collection plan is reasonable (Klein & Kozlowski, 2000).

Data collection

Data were gathered using mail-based surveys, phone-based surveys and visits by the research team. Six survey instruments were used to gather data from the ISD project team members, their stakeholders and users (see Table 1). All six surveys were both pre-tested and pilot-tested following Dillman's (1978) guidance. The two surveys used to collect data from developers were mail-based, as was the user survey. The ISD project leader and the two stakeholder surveys were phone-based. Questions on all surveys were targeted at the team level.


Responses from the developers and users were treated as confidential and anonymous. The ISD project leader and stakeholder data could not be collected anonymously because we had to know their names to ensure that data were collected, but their responses were confidential. To protect identities, we coded organizations, teams and individuals with random numbers. All participants knew that data would be reported only as anonymized team-level aggregates.

Data collection was carried out from 1991 to 1994 (for more on this, please see Appendix A). Data collection happened three times across the ISD project for each team (as shown in Figure 1): requirements, implementation and post-implementation. As noted, data collection was not tied to our calendar but based on the particular chronology of each participating ISD team. To achieve this, team leaders and researchers interacted on a frequent basis, and data collection was initiated when the team's leader indicated to the research team that they had reached the end of a stage.

The first survey, focused on ISD team members, was administered at the end of each team's requirements determination stage. For each of the 60 ISD teams in this sample, three to five people completed this survey. The second ISD team member survey was administered after each team completed system implementation. For each of the teams in this sample, two to four people completed the survey. Both of these paper-based, self-administered surveys used identical scales to measure the boundary-spanning constructs. Prior to this, each team's leader was asked to provide representation of senior and junior members and a variety of technical expertise and project leadership roles as a form of key informant sampling (Seidler, 1974; Lee et al., 1991; Henderson & Lee, 1992).

For each project, stakeholder data were gathered at the conclusion of both requirements and implementation. One to three stakeholders provided data for 52 teams at the end of requirements. At implementation, one to three stakeholders provided data for 47 teams. The fifth survey instrument was administered to the end-users of the IS after 3–6 months of active use following the system's implementation. Typically, those who completed this survey had no direct contact with the developers except at this time.

Following implementation, the specific IS's function points were counted for each project.

Table 1. Data collection

Survey target   Time                 Form       Variables    N (individual)  Individual/team  Number of teams
Developers      Requirements         Mail       Process      252             4.2              60
Stakeholders    Requirements         Phone      Performance  92              1.48             52
Developers      Implementation       Mail       Process      210             3.5              60
Stakeholders    Implementation       Phone      Performance  90              1.5              47
IS              Post-implementation  See note   Performance  NA              NA               40
End users       Post-implementation  Mail       Performance  122             2.71             45

Note: Function points for each IS were counted by members of the research team. A phone survey with the IS project leader was used to collect project and labour cost data. NA, not applicable.


We used function points to allow comparison among ISD projects because they provide common measures of the functions that an IS can perform. Function points were counted using two-person teams trained to use a common standard to ensure comparability of results. These two-person teams were composed of graduate students trained to use the International Function Point User Group 3.2 standard. Graduate student members of the research group independently counted function points and then reconciled differences for a final count. Further, at this time, the ISD team leader was asked via a phone survey for data about ISD project costs and labour rates. Only 40 teams provided us with project cost and labour data.

Sample characteristics

The sample used in the analysis reported in this paper came from 60 ISD teams, each located at one of 22 sites of 15 organizations in the United States and Canada. While the sample is not randomized, we believe it is representative of practice; it was certainly not a convenience sample. The 15 participating organizations were part of an initial pool of 30 (all in the Fortune 500 at the time). These 30 organizations were originally identified by the research team in concert with the research sponsors based on three criteria. First, the organization's ISD group had to be large (more than 300 people). Second, they had to be seen (using one of three industry publications of the time) as a thought-leader in ISD. Thought leadership means here that the organization was considered one of the top IS places to work, was noted in the professional press for innovation or had been rated as a top IS department by their peers. Third, the organization had to provide access to at least four teams over the duration of the project. For a variety of reasons, 15 of the original 30 organizations identified chose not to participate. The primary reason for declining participation was the extensive time and effort commitment required.

The 15 organizations, 22 sites and 60 teams in this sample volunteered to participate in the study. They were briefed on the extensive participation requirements before volunteering. These requirements included completing several lengthy surveys; participating in phone interviews; maintaining contact (by phone or visit) with a research team member every 3–4 weeks over the course of the ISD effort; and providing access to project documents, end-users and senior managers. We estimated (correctly) that the total resource effort for each team would be more than 200 person-hours across the ISD project.

Contributing organizations represented financial services (7), manufacturing (3) and high-technology industries (5). Three organizations provided access to more than one site. To control for project scope, the ISD projects had to be 12–18 months in planned duration. The actual length to completion of the projects in this sample ranged from 18 to 31 months. The mean time to completion was 22 months. Projects had to be traditional IS efforts with strategic relevance to the company. The traditional focus meant that we chose to exclude projects focused on developing IS exclusively for commercial sale and projects where new technologies or methods were being tried for the first time.

Candidate ISD teams were first identified by each site's senior IS managers after being briefed by the research team on the commitment required.


The ISD teams were then approached to secure their voluntary participation. More than 120 ISD teams volunteered, but attrition (primarily through project cancellation) reduced the number to the 60 on which we report.

ANALYSIS AND FINDINGS

Data collection instruments were specifically designed to be collected at the individual level and then aggregated (by averaging responses of the team's members) to the team level for analysis (Jones & James, 1979; James, 1982; Klein et al., 1994; Klein & Kozlowski, 2000). In Table 2, we present the means and standard deviations for the variables measured. In Table 3, we present the correlations among the measured variables. In order to justify aggregating data from the individual to the group level, each variable's within-group and between-group variance was assessed by using one-way analysis of variance.
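A sketch of this aggregation-and-justification step might look like the following; the DataFrame layout (a 'team' column plus one column per construct, here 'ambassador') and the simulated values are assumptions for illustration only.

```python
# Sketch of the aggregation step described above: individual responses are
# averaged into team-level scores, and a one-way ANOVA compares between-team
# with within-team variance to justify the aggregation.
import numpy as np
import pandas as pd
from scipy import stats

rng = np.random.default_rng(0)
df = pd.DataFrame({
    "team": np.repeat(np.arange(60), 4),           # ~4 respondents per team
    "ambassador": rng.normal(4.2, 1.1, size=240),  # individual-level scores
})

team_means = df.groupby("team")["ambassador"].mean()  # team-level variable

groups = [g["ambassador"].to_numpy() for _, g in df.groupby("team")]
f_stat, p_value = stats.f_oneway(*groups)             # between vs. within teams
print(f"F = {f_stat:.2f}, p = {p_value:.3f}")
```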

Correlations reported in Table 3 also illustrate the complex relationship among the ISD performance variables. For instance, stakeholder ratings of effectiveness at requirements have a significant, positive relationship to both team and stakeholder ratings of effectiveness at implementation. However, there is no relationship between requirements effectiveness and user satisfaction. There is also a significant negative relationship between stakeholder ratings of effectiveness at requirements and the labour productivity of the project. We return to this below.

Interpreting the social interactions and performance patterns of ISD teams

To explore potential relationships among boundary-spanning activities and multiple measures of ISD performance, we used cluster analysis.

Table 2. Variable means and standard deviations (SD)

Variable                         Time                 N   Mean  SD
Ambassador                       Requirements         60  4.22  1.12
Scout                            Requirements         60  4.02  0.82
Coordinator                      Requirements         60  4.03  1.21
Sentry                           Requirements         60  2.98  1.08
Guard                            Requirements         60  2.08  0.95
Ambassador                       Implementation       60  4.01  1.29
Scout                            Implementation       60  4.11  1.29
Coordinator                      Implementation       60  4.36  1.33
Sentry                           Implementation       60  3.53  1.48
Guard                            Implementation       60  2.37  1.27
Stakeholder-rated effectiveness  Requirements         52  5.23  0.84
Team-rated effectiveness         Implementation       60  5.08  0.94
Stakeholder-rated effectiveness  Implementation       47  5.49  0.75
Labour productivity              Post-implementation  40  3.42  1.50
User satisfaction                Post-implementation  45  5.39  1.02


Table 3. Correlations

                          1      2      3      4      5      6      7      8      9      10     11     12     13     14     15     16
Process measures
At requirements
 1 Ambassador            1.00
 2 Scout                 0.63*  1.00
 3 Coordinator           0.76*  0.56*  1.00
 4 Guard                 0.30*  0.26*  0.01   1.00
 5 Sentry                0.78*  0.43*  0.64*  0.34*  1.00
At implementation
 6 Ambassador            0.33*  0.21   0.27* -0.07   0.18   1.00
 7 Scout                 0.32*  0.29*  0.34*  0.06   0.22   0.41*  1.00
 8 Coordinator           0.30*  0.29*  0.40* -0.09  -0.03   0.70*  0.60*  1.00
 9 Guard                 0.01  -0.05  -0.05   0.08   0.05   0.44* -0.03   0.17   1.00
10 Sentry                0.12   0.11   0.12  -0.01   0.11   0.63*  0.43*  0.54*  0.49*  1.00
11 Cohesion             -0.20  -0.28  -0.11  -0.13  -0.11   0.07   0.06   0.07   0.15   0.02   1.00
Performance measures
At requirements
12 SR-effectiveness      0.19  -0.05   0.19  -0.25   0.12   0.13   0.25   0.07  -0.07  -0.11   0.31*  1.00
At implementation
13 TR-effectiveness      0.23   0.11   0.20  -0.14   0.09   0.08   0.18   0.06  -0.15  -0.16  -0.01   0.47*  1.00
14 SR-effectiveness      0.10   0.08   0.05  -0.10   0.10   0.06   0.10   0.03  -0.17  -0.16   0.16   0.38*  0.18   1.00
Post-implementation
15 Labour productivity  -0.30* -0.28* -0.33*  0.01  -0.31* -0.15  -0.31* -0.38*  0.04  -0.08   0.22  -0.34* -0.15  -0.38*  1.00
16 User satisfaction    -0.08   0.01   0.03  -0.03  -0.06  -0.17   0.03  -0.18   0.18  -0.14   0.36*  0.04   0.29   0.14   0.05   1.00

*p < 0.05. SR, stakeholder-reported; TR, team-reported.

Other possible approaches (e.g. split-sample analysis using either multiple regression or structural equation models) were constrained by the relatively small sample size and the exploratory nature of this work. Cluster analysis is well suited to exploratory work focused on identifying potential patterns of relationships among variables (Morrison, 1967; Pedhauzer & Schmelkin, 1991; Hair et al., 1998). Cluster analysis is actually a suite of analytic techniques that are used to first calculate patterns of relationships among identified attributes and then to group them based on the magnitude of similarity among identified attributes. Cluster analysis is focused on creating groupings and will always return a cluster (even if the groupings are tenuous). This means that the formation and interpretation of the clusters must be seen as exploratory. Clustering is often used in nascent attempts to identify patterns. It is commonly used in genomics, software engineering and marketing to detect and represent patterns in large groups of data. In IS, it has been used to explore implementation practices (e.g. Sabherwal & Robey, 1993).

In our analysis, we used the two sets of the five boundary-spanning constructs (from requirements and implementation) to explore patterns of variation relative to the five measures of ISD performance. As the basis for determining clusters, we used the squared Euclidean distance between the projects as a measure of distance (Milligan & Cooper, 1986a; 1986b; Hair et al., 1998). The squared Euclidean distance between two projects is calculated by summing the squared differences between the two projects over all 10 variables.
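Written out (in our notation, not the paper's), with $x_{iv}$ denoting team $i$'s team-level score on clustering variable $v$, the measure is:

$$ d^{2}(i,j) = \sum_{v=1}^{10} \left( x_{iv} - x_{jv} \right)^{2} $$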

We used the 'complete linkage' or 'furthest neighbour' technique to determine how to combine clusters at each step of the analysis (Morrison, 1967; Milligan & Cooper, 1986a; Pedhauzer & Schmelkin, 1991; Hair et al., 1998). In this approach, the distance between two clusters is calculated as the distance between their two furthest points. Clusters are combined at each step based on their being closest together. The complete linkage technique is the most commonly used approach, and given the relatively small number of data points, it is also computationally feasible (Milligan & Cooper, 1986b). The complete linkage technique highlights the distance among clusters and maximizes the distinction of one cluster from another, which is exactly what we were seeking from the cluster analysis. In contrast, average linkage approaches tend to produce clusters that minimize the variance across clustering variables, while Ward's linkage tends to minimize the differences across the means of the clustering variables (Hair et al., 1998, pp. 495–496). The resulting clusters are typically difficult to interpret.

As is common in cluster analyses, we examined the fusion coefficients at each agglomerative stage to determine the number of clusters (Milligan & Cooper, 1986a; 1986b; Hair et al., 1998). As there is no test for this determination, we used the common heuristic that the number of clusters is best decided by visually inspecting for large decreases in the agglomeration fusion coefficients (see Sabherwal & Robey, 1993; Hair et al., 1998, p. 499). This occurred in the step from five to six clusters, leading us to select the five-cluster model.
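The sketch below reproduces this procedure as described (squared Euclidean distances, complete linkage, inspection of fusion coefficients) on a simulated stand-in for the 60 x 10 team data; the real data are not available here, so the numbers it prints are illustrative only.

```python
# Sketch of the clustering procedure described above, on simulated data:
# squared Euclidean distances over the 10 boundary-spanning variables,
# complete ('furthest neighbour') linkage, and inspection of the fusion
# (agglomeration) coefficients for a large drop.
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage
from scipy.spatial.distance import pdist

rng = np.random.default_rng(1)
X = rng.normal(4.0, 1.0, size=(60, 10))  # stand-in for 60 teams x 10 variables

d = pdist(X, metric="sqeuclidean")       # squared Euclidean distances
Z = linkage(d, method="complete")        # furthest-neighbour agglomeration

fusion = Z[:, 2]                         # merge heights (fusion coefficients)
for k in range(1, 8):                    # fusion[-k]: merge yielding k clusters
    print(f"{k}-cluster solution: fusion coefficient = {fusion[-k]:.1f}")

labels = fcluster(Z, t=5, criterion="maxclust")  # cut into five clusters
```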

Results of the five-cluster analysis are shown in Table 4. Since a cluster analysis will always find clusters, interpreting the meaning of clusters is an act of theorizing (Weick, 1995), and this is the specific intent of this paper. In Table 4, we present the mean values of the 10 independent variables and the mean values of the performance variables for each of the five clusters. The last column contains the sample means (reproduced from Table 2) to allow for comparison.


Table 4. Cluster analysis across social activity variables†

Variable                                           Cluster 1   Cluster 2      Cluster 3   Cluster 4    Cluster 5       Mean (SD)
                                                   (n = 16)    (n = 14)       (n = 10)    (n = 15)     (n = 5)
Requirements
 Ambassador                                        4.34        2.84****       4.20        4.08         5.63****        4.22 (1.12)
 Scout                                             4.22***     3.23****       3.68        3.85         4.74***         4.02 (0.82)
 Coordinator                                       4.55        2.76****       3.79        4.05         5.32****        4.03 (1.21)
 Sentry                                            2.96        2.08****       3.00        2.90         4.71****        2.98 (1.08)
 Guard                                             1.59        1.71           2.30        1.64         2.98**          2.08 (0.95)
Implementation
 Ambassador                                        4.25        2.79**         3.43****    5.23***      5.24****        4.01 (1.29)
 Scout                                             4.84***     3.49**         2.82****    4.50         6.00****        4.11 (1.29)
 Coordinator                                       5.26        2.90****       2.98****    5.34****     5.16***         4.36 (1.33)
 Sentry                                            3.13**      2.80**         2.65****    5.35****     5.37***         3.53 (1.48)
 Guard                                             1.39        2.01           2.28***     3.78***      2.30*           2.37 (1.27)
Performance
 Stakeholder-rated effectiveness (Requirements)    5.34        5.15           5.19        5.12         5.33            5.23 (0.84)
 Stakeholder-rated effectiveness (Implementation)  5.50        5.44           5.32        5.50         6.04***         5.49 (0.76)
 Team-rated effectiveness (Implementation)         5.38        5.08           4.91        4.89         4.99            5.08 (0.94)
 Labour productivity (Post-implementation)‡        2.74**      3.99           4.51**      3.26         1.87**          3.42 (1.50)
 User satisfaction (Post-implementation)           5.13        5.72*          5.35        5.58         5.13            5.39 (1.02)
Name given                                         Borrowers   Isolationists  Insulators  Politicians  Exhibitionists

*p < 0.10, **p < 0.05, ***p < 0.01, ****p < 0.001.
†All responses based on seven-point Likert-like scales where higher numbers are rated as more or better.
‡Labour productivity is scaled. Higher numbers indicate more function points developed per unit cost (higher levels of labour productivity).
Values marked with asterisks are statistically significant.

To determine the significance of each cluster's means for each variable, we conducted a t-test comparing each variable in each cluster to the aggregate of all other teams in the sample that are not in that cluster (Scheffe, 1959, p. 73).
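A sketch of this test, one cluster against the aggregate of all other teams on one variable, follows; `X` and `labels` are simulated stand-ins (matching the earlier clustering sketch), not the study's data.

```python
# Sketch of the significance test described above: for one clustering
# variable, compare the teams in a given cluster against all other teams
# with an independent-samples t-test.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
X = rng.normal(4.0, 1.0, size=(60, 10))  # hypothetical team-level scores
labels = rng.integers(1, 6, size=60)     # hypothetical cluster labels 1..5

def cluster_vs_rest(X, labels, cluster_id, var_idx):
    in_cluster = X[labels == cluster_id, var_idx]
    rest = X[labels != cluster_id, var_idx]
    return stats.ttest_ind(in_cluster, rest)

t, p = cluster_vs_rest(X, labels, cluster_id=5, var_idx=0)
print(f"t = {t:.2f}, p = {p:.3f}")
```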

Cluster 1 – borrowers

We name these teams borrowers because of their focus on scouting for other information. Borrower teams have nearly average levels for the boundary-spanning variables during requirements determination, with scouting the exception. Scout and coordinator activities are significantly higher than the mean during development, while guard and sentry activities are significantly lower than the mean during this period. These teams have significantly lower labour productivity (they are not very efficient) than the sample mean. The other performance variables are not significantly different from the sample means.

Cluster 2 – isolationists

These teams were so named because of their limited boundary-spanning activity. Members of these teams do significantly less boundary-spanning than the sample average during both the requirements determination and development phases. Performance variables for these teams are statistically undifferentiated from the sample average.

Cluster 3 – insulators

These teams' boundary-spanning activities decline during development, suggesting a withdrawal from contact with external constituents. Insulator teams exhibit average levels of boundary-spanning during requirements determination and significantly less than average levels of boundary-spanning during development. Insulators have significantly higher than average labour productivity (they are efficient), although they are statistically non-differentiable on the other performance measures.

Cluster 4 – politicians

These teams earned the label politician because of their increasing level of boundary-spanning across the project life. Politician teams exhibit average levels of boundary-spanning activity during requirements determination but much higher than average levels of boundary-spanning activity during development. For example, during development, all boundary-spanning activities but scout are significantly higher than the overall mean. However, there are no significant differences (from the sample means) for the politicians' ISD performance.

Cluster 5 – exhibitionists

Exhibitionist teams exhibit significantly higher than the sample mean levels of boundary-spanning during both requirements determination and development.


The exhibitionists' performance measures are mixed: labour productivity is significantly below the sample mean (meaning that these teams are the least efficient in the sample). However, the exhibitionists' stakeholder-rated team effectiveness after implementation is the highest of the sample and significantly higher than the sample mean. Values for the other three performance variables are very near the sample mean.

BOUNDARY-SPANNING ACTIVITIES AND ISD TEAM PERFORMANCE

From this analysis, we discuss two findings relative to our research question. First, none of the five patterns of social interactions identified had high levels of performance across all measures. Second, the pattern of relations among the variables does not support several commonly espoused relations about ISD teams.

The varying patterns of boundary-spanning interactions and performance

Each of the five clusters exhibits differences in both its pattern of social interactions and the manner in which this pattern relates to the suite of ISD performance measures. For example, the exhibitionists (cluster five) perform a great deal of boundary-spanning across requirements determination and systems development. While they pay a price in terms of labour productivity, exhibitionists receive the highest ratings from stakeholders at implementation. In contrast, the isolationists (cluster two) perform relatively little boundary-spanning during ISD, but their performance is statistically equivalent to the sample mean. Isolationist teams have the highest labour productivity and their user satisfaction measures are among the lowest (although statistically undifferentiated from the sample mean).

These variations in performance indicate that the ways in which team members engage external constituents have performance implications. Greater cross-boundary engagement seems to reduce efficiency (labour productivity falls as the cost of this engagement rises). The cluster with the highest overall ratings for user satisfaction – the politicians (cluster four) – engaged in higher levels of boundary-spanning during development than did teams in the other clusters. This suggests that continuing, and extensive, boundary-spanning interactions through development lead to more satisfied users. That is, even though it seems intuitive to 'close' the team's boundaries during development, doing so may be seen by users as counterproductive. This contrasts with the insulators (cluster three), who are near the sample means for boundary-spanning interactions during requirements but are lower (at a level of statistical significance) than the sample means for boundary-spanning activity during development.

Evidence about borrower teams (cluster one) suggests that a 'coding' mentality drives them. That is, most of the high levels of boundary-spanning activity are geared towards finding new things and coordinating resources, and there is little control of information in and out of the team as well as little attention to ambassadorial work. These teams are not very efficient (a common finding in ISD) and only moderately effective. However, borrower team members like their work.


Performance variations

Findings indicate that a multifaceted view of ISD outcomes provides a more robust picture than do any singular or aggregated measures. Moreover, the data suggest that it is not possible to simultaneously maximize all ISD performance indicators. For example, the correlations in Table 3 show that there is little correlation between user satisfaction and labour productivity, and that this relationship is not statistically significant.

From Table 3 we note that stakeholder ratings of ISD team effectiveness following both requirements determination and development are significantly and negatively correlated with post-implementation labour productivity. This suggests that there may be a trade-off between perceived project effectiveness and actual project cost (e.g. Glass, 1999). The absence of any statistically significant cluster-level differences in stakeholder measures of requirements completion effectiveness (see Table 4) suggests that this performance measure may not be a good indicator of future performance. That is, what stakeholders perceive as a good start does not predict future performance levels of an ISD team in our sample.

Further inspection of Table 3 shows that there is a statistically significant positive correlation among stakeholder perceptions of ISD team effectiveness after requirements determination and the team's perceptual performance measures at implementation. However, post-implementation user satisfaction does not seem to be correlated to these three measures of ISD team performance (at a level of statistical significance). That is, the post-implementation success of the ISD team effort has no relation to interim perception measures of the team's effectiveness. This suggests that it may be problematic to predict post-implementation user satisfaction based on either internal and/or external measures of ISD team effectiveness.

Findings from the cluster analysis (see Table 4) provide little additional insight into this lack of a relationship between user satisfaction and other measures of team effectiveness. Perhaps the source of this missing link is a function of what is being measured? That is, it may be that users are assessing a system (or product) while the other measures used in this analysis assess aspects of the ISD effort (process) that are important to developers. This suggests that ISD efforts may not be as tightly linked to the ISD products as is commonly espoused. Evidence of this potential dichotomy has been discussed relative to packaged software development (Carmel & Sawyer, 1998). The possible implications of this process/product dichotomy in the more traditional custom ISD domain demand additional focused investigation.

Another concern with ISD performance measurement is that long-term measures of success may be indirect. For example, it may be that error reports are an excellent proxy measure for use: more error reports indicate more use. However, most of the ISD teams (and their parent organizations) in this sample did not track error (bug) rates in a way that could be linked back to specific ISD projects. Also, as an aside, only a subset of the organizations collected data on ISD labour costs at the team level (which is why these data are not reported for 20 of the 60 teams in the sample). Moreover, analysing why more than 50% of the ISD projects attrited from this research is both worthy of extended discussion and beyond the scope of the current paper.


IMPLICATIONS AND ISSUES

In this paper, we identified patterns of relations among five measures of ISD performance and 10 measures of the ISD team's social activities. To do this, we developed a general model of ISD and characterized the social activities of developers as having external and internal components. Using a generic model of ISD raises some concern that such a model may bias data collection towards traditional or waterfall methods. This suggests that future work on boundary-spanning and performance in ISD should explicitly compare alternative development approaches such as agile, open source and other methods.

We used cluster analysis to help us identify five patterns of boundary-spanning interaction that ISD teams follow, each with its own set of relationships to multiple measures of ISD performance. The data were collected in the early 1990s, and some may be concerned that tools and methods have progressed substantially since then, making these data suspect. However, the social actions of developers, while clearly shaped in part by the technologies being used, reflect social norms, behaviours and actions that are more stable over time. Clearly, however, more work is needed to assess the current level of software developers' social interactions to see if there has been change.
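For readers who want the mechanics, the sketch below shows one conventional way to derive such patterns: agglomerative hierarchical clustering over standardized activity scores, cut at five clusters. The simulated data and the choice of Ward's method are assumptions for illustration, not a reconstruction of the study's exact procedure:

```python
# Hierarchical cluster analysis over team-level boundary-spanning
# scores -- an illustrative sketch with simulated data; Ward's method
# and the random inputs are assumptions, not the study's procedure.
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage

rng = np.random.default_rng(seed=1)
# 60 hypothetical teams x 10 social-activity measures, z-scored.
activity = rng.standard_normal((60, 10))

tree = linkage(activity, method="ward")             # agglomerative merge tree
labels = fcluster(tree, t=5, criterion="maxclust")  # cut into 5 clusters

# Cluster sizes; each cluster's mean activity profile would then be
# compared against the performance measures, as in Table 4.
print(np.bincount(labels)[1:])
```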

This analysis suggests that the relationships among social activity variables and performance measures are more complex than the normative characterizations represented in the current literature suggest. These findings extend current theorizing on boundary-spanning in ISD teams and provide additional insights into the role of time in teams, as we discuss below.

Boundary-spanning in ISD teams

Building on the work of Hansen (1999), we speculate that variations in the boundary-spanning activities of ISD teams may be related to the search and transfer of pertinent information. The underlying causes of the variations in social activities are not identified through this analysis. However, both Hansen's and Sole & Edmondson's (2002) work highlight that knowledge transfer is a central part of NPD. The centrality of knowledge transfer among ISD teams has been observed for nearly 20 years (Guinan, 1988; Walz et al., 1993; Crowston & Kammerer, 1998). A focus on knowledge-seeking and knowledge transfer seems likely to be one reason why social interactions are so central to these teams. One finding of this research is that differing approaches to social interactions lead to variations in ISD performance. This may be due to variations in the level of success with searching and transferring knowledge across teams.

The temporality of social interactions in ISD

Results of the analyses reported here provide additional support that the social activities of ISD teams change over time (see also Walz et al., 1993). For example, one stream of research on the temporal effects of group work is Gersick's model of punctuated equilibrium (Gersick, 1988; 1989; 1991). Gersick maintains that, early in a work team's life (typically the first meeting), an initial direction for the team is established. As time progresses – typically at or near the 'midpoint' of the group's life – this initial strategy proves to be inadequate and a crisis ensues. The midpoint crisis leads to a new direction that is expected to lead to the project's deliverables. A second crisis (at the final stage/last minutes of the project life) leads to overwhelming effort just prior to delivery. Gersick's fieldwork showed that this pattern – inception, midpoint crisis, pre-delivery crisis – occurred in project groups whose work ranged from class projects to business decision-making and whose durations ranged from a few days to several months (Gersick, 1989).

Because the punctuated equilibrium model of group work is predicated on a fixed end-date, it is difficult to directly extend the reasoning of this temporal model of group work to ISD because most ISD efforts do not hold to a fixed end-date. A more common scenario in contemporary ISD is that, as the midpoint crisis occurs, the ISD team opts to move the end-date. Such a reading of Gersick's work implies that an ISD project without a fixed end-date may fall into a pattern where each midpoint crisis leads to setting a new deadline. Then, halfway to this new deadline, another midpoint crisis ensues, leading to a new deadline. While anecdotal data on ISD support this simple but disturbing analysis, an empirical assessment of the implications of moving end-dates on ISD performance is needed.
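A toy calculation, not drawn from the study, makes this dynamic concrete: in the Python sketch below each crisis arrives halfway to the current deadline, and the team responds by extending the end-date by a fixed fraction of the time then remaining. The 50% slippage figure is an arbitrary assumption:

```python
# Toy model of the 'moving end-date' reading of Gersick's midpoint
# crisis -- purely illustrative; the 50% slippage is an assumption.
def deadline_drift(deadline: float, slip: float = 0.5, crises: int = 5) -> None:
    t = 0.0
    for i in range(1, crises + 1):
        t += (deadline - t) / 2              # crisis hits at the midpoint
        deadline += slip * (deadline - t)    # team moves the end-date out
        print(f"crisis {i}: week {t:5.1f}, deadline now week {deadline:5.1f}")

deadline_drift(deadline=20.0)  # a hypothetical 20-week project
```

Under these assumptions the remaining window still shrinks by a quarter at each crisis, so the project eventually converges; at slippage of 100% or more the end-date recedes indefinitely, which is the disturbing case described above.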

Still, both our evidence and the work of others suggest that the work of ISD teams (like most work teams) changes over the life of a project (e.g. Walz et al., 1993). This suggests that work is needed to understand the patterns of variation across time and the relationships among these variations, the larger context in which they occur, and the impacts on performance. Newman & Robey (1992) and Robey & Newman (1996) have shown that contextual events, social activities and time are related. Our findings suggest that there is a more complex set of events that link teams and their larger context.

The complex nature of ISD performance

Findings from this study indicate that singular representations of ISD performance are an oversimplification. The data presented here make clear that higher levels of requirements-completion performance are not reflected in post-implementation user assessments. That is, we cannot substantiate the commonly held 'truism' that requirements are an instrumental predictor of ISD success. Further evidence of the complex nature of ISD performance is seen in the trade-offs among the boundary-spanning interactions that differentiate these clusters of teams. For instance, the exhibitionist teams' poor labour productivity and high user satisfaction ratings highlight how these teams' additional efforts to support boundary-spanning interactions might be affecting their performance.

These results provide insights for professional practice and also suggest opportunities for continued study. For example, studies of ISD with fixed deliverable schedules and with different measures of internal social activity will further add to our understanding of the performance impacts of differing patterns of social interactions. A better understanding of the patterns among the variables representing social activity should also provide insight into the potentially differential implications of development tools (e.g. integrated development environments) and of different methodologies, such as object-oriented or various iterative models of ISD. For example, iterative methods of design might encourage more boundary-spanning activity across the ISD effort. Moreover, there may be benefits from information and communications technology-based tools that support stakeholder access to view certain aspects of the ongoing development effort. Findings from this study suggest that simple or even singular models of ISD social activity, especially models that ignore the performance implications linked to variations in the ways team members work together, and with stakeholders, cannot adequately represent the complexities involved in developing contemporary IS.

ACKNOWLEDGEMENTS

We gratefully acknowledge the financial support provided by the IBM Corporation and the Boeing Corporation. We are indebted to the many software developers, managers, clients and others who gave so willingly of their time and expertise. This paper has been much improved by comments from Bob Zmud, Dan Robey, Dawn Russell, Lynette Kvasny, David Woods, Emily Patterson, Anne Hoag, Kay Nelson, Cynthia Beath, Mike Newman, Duane Truex, Sandeep Purao, Kristin Eschenfelder, Debra Howcroft and three anonymous reviewers.

REFERENCES

Agerfalk, P. & Erikkson, O. (2006) Socio-instrumental usability: IT is all about social action. Journal of Information Technology, 21, 24–39.
Aldrich, H. & Herker, D. (1977) Boundary spanning roles and organization structure. Academy of Management Review, 2, 217–230.
Ancona, D. & Caldwell, D. (1990) Improving the performance of new product teams. Research-Technology Management, 33, 25–29.
Ancona, D. & Caldwell, D. (1992) Demography and design: predictors of new product team performance. Organization Science, 3, 321–341.
Ancona, D. & Caldwell, D. (1998) Rethinking team composition from the outside in. In: Research on Managing Groups and Teams, Vol. 1, Neale, M. & Mannix, E. (eds), pp. 21–34. JAI Press, Stamford, CT, USA.
Andrews, W. (1983) Prototyping information systems. Journal of Systems Management, 34, 16–18.
Bailey, J. & Pearson, J. (1981) Development of a tool for measuring and analyzing computer user satisfaction. Management Science, 29, 530–545.
Bartel, C. (2001) Social comparisons in boundary-spanning work: effects of community outreach on members' organizational identity and location. Administrative Science Quarterly, 46, 379–413.
Boehm, B. (1981) Software Engineering Economics. Prentice Hall, Englewood Cliffs, NJ, USA.
Boehm, B. (1987) Improving software productivity. IEEE Computer, 20, 43–57.
Boehm, B. (1991) Software risk management: principles and practices. IEEE Software, 20, 32–41.
Brooks, F. (1974) The Mythical Man-Month. Addison-Wesley Publishing, Reading, MA, USA.
Canning, R. (1977) Getting the requirements right. EDP Analyzer, 15, 1–13.
Carmel, E. & Sawyer, S. (1998) Packaged software teams: what makes them so special? Information Technology & People, 11, 6–17.
Cavaye, A. (1995) User participation in system development revisited. Information and Management, 28, 311–323.
Cronbach, L. (1951) Coefficient alpha and the internal structure of tests. Psychometrika, 16, 297–334.
Cross, R., Yan, A. & Louis, M. (2000) Boundary activities in 'boundaryless' organizations: a case study of a transformation to a team-based structure. Human Relations, 53, 841–868.
Crowston, K. & Kammerer, E. (1998) Collective mind in software requirements development. IBM Systems Journal, 36, 1–24.
Curtis, B., Krasner, H. & Iscoe, N. (1988) A field study of the software design process for large systems. Communications of the ACM, 31, 1268–1287.
Curtis, W., Hefley, W. & Miller, S. (1995) The people capability maturity model: P-CMM. Software Engineering Institute Report, CMU/SEI-95-MM-01, Pittsburgh, PA, USA.
Delone, W. & McLean, E. (1992) Information systems success: the quest for the dependent variable. Information Systems Research, 3, 60–95.
Delone, W. & McLean, E. (2003) The Delone and McLean model of information systems success: a ten-year update. Journal of Management Information Systems, 19, 9–30.
Dillman, D. (1978) Mail and Telephone Surveys: The Total Design Method. John Wiley & Sons, New York, NY, USA.
Doll, W. & Torkzadeh, G. (1988) The measurement of end-user computing satisfaction. MIS Quarterly, 12, 259–274.
Espinosa, A., Delone, W. & Lee, G. (2006) Global boundaries, task processes, and IS project success: a field study. Information Technology & People, 19, 345–370.
Gersick, C. (1988) Time and transition in work teams: toward a new model of group development. Academy of Management Journal, 31, 9–41.
Gersick, C. (1989) Marking time: predictable transitions in task groups. Academy of Management Journal, 32, 274–309.
Gersick, C. (1991) Revolutionary change theories: a multilevel exploration of the punctuated equilibrium paradigm. Academy of Management Review, 16, 10–36.
Gillette, J. & McCollom, M. (1990) Groups in Context: A New Perspective on Group Dynamics. Addison-Wesley, Reading, MA, USA.
Ginzberg, M. (1981) Key recurrent issues in the MIS implementation process. MIS Quarterly, 5, 47–59.
Gladstein, D. (1984) Groups in context: a model of task group effectiveness. Administrative Science Quarterly, 29, 499–517.
Glass, R. (1999) Evolving a new theory of project success. Communications of the ACM, 42, 17–19.
Guinan, P. (1988) Patterns of Excellence for IS Professionals: An Analysis of Communication Behavior. ICIT Press, Washington, DC, USA.
Guinan, P., Cooprider, J. & Faraj, S. (1998) Enabling software development team performance: a behavioral versus technical approach. Information Systems Research, 9, 101–125.
Guinan, P., Cooprider, J. & Sawyer, S. (1997) The effective use of automated application development tools: a four-year longitudinal study of CASE. IBM Systems Journal, 36, 124–139.
Hair, J., Anderson, R., Tatham, R. & Black, W. (1998) Multivariate Data Analysis. Prentice Hall, Upper Saddle River, NJ, USA.
Hansen, M. (1999) The search-transfer problem: the role of weak ties in sharing knowledge across subunits. Administrative Science Quarterly, 44, 82–111.
Henderson, J. & Lee, S. (1992) Managing I/S design teams: a control theories perspective. Management Science, 38, 757–777.
Hirschheim, R., Klein, H. & Newman, M. (1991) Information systems development as social action: theoretical perspectives and practice. Omega, 19, 587–606.
Holtzblatt, K. & Beyer, H. (1995) The human requirements gathering. Communications of the ACM, 38, 31–32.
Humphrey, W. (1995) A Discipline for Software Engineering. Addison-Wesley, Reading, MA, USA.
IFPUG (2002) IFPUG Function Point Counting Practices Manual. International Function Point Users Group, Princeton, NJ, USA.
Iivari, J., Hirschheim, R. & Klein, H. (2004) Towards a distinctive body of knowledge for information systems experts: coding ISD process knowledge in two IS journals. Information Systems Journal, 14, 313–342.
Ives, B., Olson, M. & Baroudi, J. (1983) The measurement of user information satisfaction. Communications of the ACM, 26, 785–793.
James, L. (1982) Aggregation bias in estimates of perceptual agreement. Journal of Applied Psychology, 67, 219–229.
Jones, A. & James, L. (1979) Psychological climate: dimensions and relationships of individual and aggregated work environment perception. Organizational Behavior and Human Performance, 23, 201–250.
Joshi, A. (2006) The influence of organizational demography on the external networking behavior of teams. Academy of Management Review, 31, 583–595.
Kautz, K. & Nielsen, P. (2004) Understanding the implementation of software process improvement innovations in software organizations. Information Systems Journal, 14, 3–22.
Keen, P. (1980) MIS research: reference disciplines and a cumulative tradition. Proceedings of the First International Conference on Information Systems, Philadelphia, PA, USA, 9–18.
Keil, M. (1995) Pulling the plug: software project management and the problem of project escalation. MIS Quarterly, 19, 421–448.
Kirsch, L. (1996) The management of complex tasks in organizations: controlling the systems development process. Information Systems Research, 7, 1–21.
Klein, K. & Kozlowski, S. (2000) Multilevel Theory, Research and Methods in Organizations: Foundations, Extensions and New Directions. Jossey-Bass, San Francisco, CA, USA.
Klein, K., Dansereau, F. & Hall, R. (1994) Levels issues in theory development, data collection, and analysis. Academy of Management Review, 19, 195–229.
Kling, R. & Iacono, S. (1984) The control of information systems development after implementation. Communications of the ACM, 27, 1218–1226.
Lakhanpal, B. (1993) Understanding the factors influencing the performance of software development groups: an exploratory group-level analysis. Information & Software Technology, 35, 468–473.
Langley, A. (1999) Strategies for theorizing from process data. Academy of Management Review, 24, 691–710.
Lee, S., Goldstein, D. & Guinan, P. (1991) Informant bias in information systems design team research. In: Information Systems Research: Contemporary Approaches and Emergent Traditions, Nissen, H., Klein, H. & Hirschheim, R. (eds), pp. 635–656. North-Holland, Amsterdam, The Netherlands.
Lucas, H., Ginzberg, M. & Schultz, R. (1990) Information Systems Implementation: Testing a Structural Model. Ablex Publishing, Norwood, NJ, USA.
MacCormack, A., Verganti, R. & Iansiti, M. (2001) Developing products on internet time: the anatomy of a flexible development process. Management Science, 47, 133–150.
Markus, L. & Bjorn-Andersen, N. (1987) Power over users: its exercise by system professionals. Communications of the ACM, 30, 498–504.
Melone, N. (1990) A theoretical assessment of the user-satisfaction construct in information systems research. Management Science, 36, 76–91.
Milligan, G. & Cooper, A. (1986a) An examination of procedures for determining the number of clusters in a data set. Psychometrika, 50, 159–179.
Milligan, G. & Cooper, A. (1986b) A review of clustering methodology. Unpublished manuscript, College of Business, Ohio State University, Columbus, OH, USA.
Morrison, D. (1967) Measurement problems in cluster analysis. Management Science, 13, 775–780.
Nambisan, S. (2003) Information systems as a reference discipline for new product development. MIS Quarterly, 27, 1–18.
Nambisan, S. & Wilemon, D. (2000) Software development and new product development: potentials for cross-domain knowledge sharing. IEEE Transactions on Engineering Management, 47, 211–220.
Newman, M. & Robey, D. (1992) A social process model of user–analyst relationships. MIS Quarterly, 16, 249–266.
Pedhazur, E. & Schmelkin, L. (1991) Measurement, Design and Analysis. Lawrence Erlbaum Associates, Hillsdale, NJ, USA.
Robey, D. & Newman, M. (1996) Sequential patterns in information systems development: an application of a social process model. ACM Transactions on Information Systems, 14, 30–63.
Russo, N. & Stolterman, E. (2000) Exploring the assumptions underlying information systems methodologies: their impact on past, present and future ISM research. Information Technology & People, 13, 313–327.
Sabherwal, R. & Robey, D. (1993) An empirical taxonomy of implementation processes based on sequences of events in information systems development. Organization Science, 4, 548–576.
Scheffe, H. (1959) The Analysis of Variance. Wiley, New York, NY, USA.
Seidler, J. (1974) On using informants: a technique for collecting quantitative data and controlling for measurement error in organizational analysis. American Sociological Review, 39, 816–831.
Selltiz, C., Jahoda, M., Deutsch, M. & Cook, S. (1959) Research Methods in Social Relations. Holt Rinehart, New York, NY, USA.
Sheil, B. (1981) The psychological study of programming. Computing Surveys, 13, 101–119.
Sole, D. & Edmondson, A. (2002) Bridging knowledge gaps: learning in geographically dispersed cross-functional development teams. In: The Strategic Management of Intellectual Capital and Organizational Knowledge: A Collection of Readings, Choo, C. & Bontis, N. (eds), pp. 135–159. Oxford University Press, New York, NY, USA.
Sutton, R. & Staw, B. (1995) What theory is not. Administrative Science Quarterly, 40, 371–384.
Thayer, R. (2000) Software Engineering Project Management. IEEE Press, New York, NY, USA.
Thompson, J. (1967) Organizations in Action. McGraw-Hill, New York, NY, USA.
Truex, D., Baskerville, R. & Klein, H. (1999) Growing systems in emergent organizations. Communications of the ACM, 42, 117–123.
Tsui, A. (1984) A role set analysis of managerial reputation. Organizational Behavior and Human Performance, 34, 64–96.
Walz, D., Elam, J. & Curtis, B. (1993) Inside a software design team: knowledge acquisition, sharing, and integration. Communications of the ACM, 36, 63–77.
Weick, K. (1995) What theory is not: theorizing is. Administrative Science Quarterly, 40, 385–390.
Weinberg, G. (1971) The Psychology of Computer Programming. Van Nostrand Reinhold, New York, NY, USA.

Biographies

Steve Sawyer is an Associate Professor at Syracuse University's School of Information Studies. Dr Sawyer does social and organizational informatics research with a particular focus on the relationships among changing forms of work, organization, and uses of information and communication technologies. This research is carried out through studies of software developers, real estate agents, police officers, organizational technologists and other information-intensive work settings. This work is published in a range of venues and supported by funds from the National Science Foundation, IBM, Corning, and a number of other public and private sources. Prior to returning to Syracuse, Steve was a founding member and an Associate Professor at the Pennsylvania State University's College of Information Sciences and Technology.

Patricia J. Guinan is an Associate Professor in the Information Technology Management Division and teaches in the Management Division, Babson College. She teaches multidisciplinary courses in information technology, cross-functional teamwork, organization design, organization change and management strategy. She is the author of an international award-winning book titled Patterns of Excellence for IS Professionals: An Analysis of Communication Behavior. Dr Guinan received two awards for teaching excellence from Boston University, where she taught prior to joining Babson's faculty. Her executive education programme teaching includes IBM, USAA, Ernst and Young, Lucent Technologies, the Boeing Corporation, EMC, Met Life, Houghton Mifflin, State Street Bank and Petroleos de Venezuela.

Jay Cooprider is an Associate Professor of Computer Information Systems, Bentley College. Dr Cooprider's teaching and research interests focus on the use of technology in high-performing teams and in planning, design and software development processes. He has extensive industry experience in information technology support services and consulting. He has published articles in such journals as Information Systems Research, MIS Quarterly, IBM Systems Journal and Journal of Information Systems Management. Before coming to Bentley, Dr Cooprider taught at the University of Texas at Austin, where he was Associate Director of the Information Systems Management programme.

APPENDIX A: USES OF THIS DATA SET

Several papers have drawn on different parts of the data set from this research project. These include:

Guinan, P., Cooprider, J. & Faraj, S. (1998) Enabling software development team performance during requirements gathering: a behavioral versus technical approach. Information Systems Research, 9, 101–125.

Guinan, P., Cooprider, J. & Sawyer, S. (1997) The effective use of automated application development tools. IBM Systems Journal, 36, 124–139.

Guinan, P. & Faraj, S. (1998) Reducing work related uncertainty: the role of communication and control in software development. Proceedings of the Thirty-First Annual Hawaii International Conference on System Sciences, 6, 73–82, IEEE Press.

Sussman, S. & Guinan, P. (1999) Antidotes for high complexity and ambiguity in software development. Information & Management, 36, 23–35.


APPENDIX B: INDICATORS AND THEIR RELIABILITIES

This appendix contains the indicators used in this research. These are grouped by construct.
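The reliabilities reported in parentheses below are Cronbach's alpha coefficients (Cronbach, 1951). For a construct measured by $k$ items, alpha is

$$ \alpha = \frac{k}{k-1}\left(1 - \frac{\sum_{i=1}^{k}\sigma^{2}_{Y_i}}{\sigma^{2}_{X}}\right) $$

where $\sigma^{2}_{Y_i}$ is the variance of item $i$ and $\sigma^{2}_{X}$ is the variance of the summed scale score; higher values indicate greater internal consistency among a construct's items.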

Social activity constructs

Please indicate the extent to which you currently see it as a responsibility to engage in the following activities with individuals outside your team. These outsiders may be in other companies or may be people in your company who are not formally assigned to the team.

Scale: 0–7, anchored at 'Not At All' (0), 'A Very Small Extent', 'To Some Extent' and 'A Very Great Extent' (7).

Ambassador (alpha = 0.85 at requirements, 0.69 at implementation)

__ Persuade others to support the team's decisions.
__ Persuade others that the team's activities are important.

Scout (alpha = 0.83 at requirements, 0.88 at implementation)

__ Scan the environment inside or outside of the organization for project ideas or expertise.
__ Scan the environment inside or outside of the organization for technical ideas or expertise.

Coordinator (alpha = 0.87 at requirements, 0.84 at implementation)

__ Keep other groups in the company informed of your team's activities.
__ Coordinate activities with external groups.

Guard (alpha = 0.74 at requirements, 0.76 at implementation)

__ Avoid releasing information to others in the company to protect the team's image or the product it is working on.
__ Keep news about the team secret from others in the company until the appropriate time.

Sentry (alpha = 0.80 at requirements, 0.85 at implementation)

__ Protect the team from outside interference.
__ Prevent outsiders from overloading the team with too much information or too many requests.


Performance constructs

Scale: 1–7, anchored at 'Very Poor' (1), 'Neutral' (4) and 'Outstanding' (7).

Stakeholder-rated effectiveness at requirements (alpha = 0.62)

In relation to other project teams you have observed, how would you rate this team on each of the following:

__ Ability to meet the goals of the project during requirements definition.
__ The team's reputation for work excellence during requirements definition.
__ The number of innovations or new ideas introduced by the design team.

Stakeholder-rated effectiveness, after implementation (alpha = 0.77)

In relation to other project teams you have observed, how would you rate this team on each of the following:

__ The extent to which the system adds value to our firm.
__ The extent to which the system adheres to organization standards.
__ The extent to which the users' business needs are reflected in the system.
__ The contribution of the system to the performance of our firm.

Self-reported effectiveness, after implementation (alpha = 0.81)

In relation to other project teams you have observed, how would you rate this team on each of the following:

__ The extent to which the system adds value to our firm.
__ The extent to which the system adheres to organization standards.
__ The extent to which the users' business needs are reflected in the system.
__ The contribution of the system to the performance of our firm.

User satisfaction (3–6 months after implementation) (alpha = 0.96)

Scale: 0 = N/A; 1–7 anchored at 'Strongly Disagree' (1), 'Neither agree nor disagree' (4) and 'Strongly Agree' (7).

Please provide your impressions of how the present system satisfies your needs.

__ The system provides the precise information that I need.
__ The information content meets my needs.
__ The system provides reports that seem to be just about exactly what I need.
__ The system provides sufficient information.


__ I find the output relevant.
__ The system is accurate.
__ I am satisfied with the accuracy of the system.
__ The output is reliable.
__ The system is dependable.
__ The output is presented in a useful format.
__ The information is clear.
__ I am happy with the layout of the output.
__ The output is easy to understand.
__ The system is user-friendly.
__ The system is easy to use.
__ The system is efficient.
__ I get the information I need in time.
__ The system provides up-to-date information.
