
A Comparative Study of Distributed Learning Environments on Learning Outcomes


This article was downloaded by: [155.33.120.167] On: 04 December 2014, At: 22:39
Publisher: Institute for Operations Research and the Management Sciences (INFORMS)
INFORMS is located in Maryland, USA

Information Systems Research

Publication details, including instructions for authors and subscription information: http://pubsonline.informs.org

A Comparative Study of Distributed Learning Environments on Learning Outcomes
Maryam Alavi, George M. Marakas, Youngjin Yoo

To cite this article: Maryam Alavi, George M. Marakas, Youngjin Yoo (2002) A Comparative Study of Distributed Learning Environments on Learning Outcomes. Information Systems Research 13(4):404-415. http://dx.doi.org/10.1287/isre.13.4.404.72

Full terms and conditions of use: http://pubsonline.informs.org/page/terms-and-conditions

This article may be used only for the purposes of research, teaching, and/or private study. Commercial use or systematic downloading (by robots or other automatic processes) is prohibited without explicit Publisher approval, unless otherwise noted. For more information, contact [email protected].

The Publisher does not warrant or guarantee the article’s accuracy, completeness, merchantability, fitness for a particular purpose, or non-infringement. Descriptions of, or references to, products or publications, or inclusion of an advertisement in this article, neither constitutes nor implies a guarantee, endorsement, or support of claims made of that product, publication, or service.

© 2002 INFORMS


INFORMS is the largest professional society in the world for professionals in the fields of operations research, management science, and analytics. For more information on INFORMS, its publications, membership, or meetings visit http://www.informs.org


Information Systems Research, © 2002 INFORMS
Vol. 13, No. 4, December 2002, pp. 404–415

1047-7047/02/1304/0404$05.00; electronic ISSN 1526-5536

A Comparative Study of Distributed Learning Environments on Learning Outcomes

Maryam Alavi • George M. Marakas • Youngjin Yoo
Goizueta Business School, Emory University, Atlanta, Georgia 30322

Kelley School of Business, Indiana University, Bloomington, Indiana 47405
Weatherhead School of Management, Case Western Reserve University, Cleveland, Ohio 44106

[email protected] • [email protected] • [email protected]

Advances in information and communication technologies have fueled rapid growth in the popularity of technology-supported distributed learning (DL). Many educational institutions, both academic and corporate, have undertaken initiatives that leverage the myriad of available DL technologies. Despite their rapid growth in popularity, however, alternative technologies for DL are seldom systematically evaluated for learning efficacy. Considering the increasing range of information and communication technologies available for the development of DL environments, we believe it is paramount for studies to compare the relative learning outcomes of various technologies.

In this research, we employed a quasi-experimental field study approach to investigate the relative learning effectiveness of two collaborative DL environments in the context of an executive development program. We also adopted a framework of hierarchical characteristics of group support system (GSS) technologies, outlined by DeSanctis and Gallupe (1987), as the basis for characterizing the two DL environments.

One DL environment employed a simple e-mail and listserv capability while the other used a sophisticated GSS (herein referred to as Beta system). Interestingly, the learning outcome of the e-mail environment was higher than the learning outcome of the more sophisticated GSS environment. The post-hoc analysis of the electronic messages indicated that the students in groups using the e-mail system exchanged a higher percentage of messages related to the learning task. The Beta system users exchanged a higher level of technology sense-making messages. No significant difference was observed in the students’ satisfaction with the learning process under the two DL environments.

(Technology-Supported Learning; Distributed Learning; Learning Assessment; Learning Models)

Introduction
Despite interest and rapid growth in the variety, range, reach, and options of distributed learning programs, little systematic research into the effectiveness of distributed learning technologies exists. Several authors (e.g., Alavi et al. 1995, Storck and Sproull 1995) have encouraged and called for an increase in research studies to guide the design of effective and cost-efficient distributed learning environments.

As a step toward this goal, we conducted an empirical assessment of two distributed learning approaches, which we describe using the framework of hierarchical characteristics of group support system (GSS) technologies as outlined by DeSanctis and Gallupe (1987) in the context of an executive development program. The first environment was based on elementary electronic mail and listserv technology, while the second was based on a more complex and sophisticated GSS technology.

In this paper, we adopt a cognitive perspective on learning. Learning is defined as changes in an individual’s mental models or knowledge representations (Shuell 1986). According to this definition, learning involves acquisition of knowledge and change in knowledge structures rather than a behavior (performance) per se (Greeno 1974). An important implication of this definition is that one learns and acquires knowledge while behavior (performance) is a possible outcome of knowledge acquisition (Shuell 1981, Stevenson 1983). According to Ausubel (1968), successful performance requires other abilities including perseverance, flexibility, improvisation, problem sensitivity, and tactical astuteness, in addition to knowledge acquisition (learning). Thus, learning may not always be reflected in behavior or performance. Following from these observations, we examine the relative changes in participants’ mental models (as represented in their knowledge structures), rather than examining changes in behavior, as a measure of their learning in two different distributed learning environments.

Background: Collaborative Distributed Learning
The variety and flexibility of modern information and communication technologies provide a wide array of possibilities for the development of various forms of distributed learning approaches (Alavi and Leidner 2001). The study contained herein focuses only on one type of distributed learning environment, the collaborative distributed learning model, for its conceptual base.

In the collaborative distributed learning model, students acquire knowledge and understanding primarily through social interactions across time and/or geographic distance and do so by using information and communication technologies. Collaborative distributed learning as defined herein accords with the social learning theory originally identified by Vygotsky (1929), which emphasizes learning’s social genesis. According to this theory, learning involves the social creation of knowledge through an instructional strategy employing a small-group problem-solving approach by students (Johnson et al. 1991). Learning arises from the opportunity for the group members to monitor each other’s thinking, opinions, and beliefs, while also obtaining and providing feedback for clarification and enhancement of comprehension. An individual’s exposure to the group members’ points of view may challenge his/her initial understanding and, thus, further motivate learning (Glaser and Bassok 1989). In other words, using a collaborative distributed learning approach, learning occurs through communication and collaborative interactions: in this case, technology-mediated interactions and communication.

The primary role of technology applications in collaborative distributed learning is to enable flexibility, reach, timeliness, and increased frequency of group interaction and communication processes. Examples range from simple e-mail to various functional levels of group support systems that enable anytime, anyplace interactions among group members.

Henceforth, the term distributed learning (DL) is used to refer to the collaborative distributed model characterized in this section.

Research Framework
In most empirical studies focusing on the effectiveness of DL, the comparison is made between learning outcomes (students’ grades and/or perceived learning) in the DL environment and learning outcomes obtained in a traditional face-to-face environment (e.g., Alavi 1994, Alavi et al. 1995, Storck and Sproull 1995, Webster and Hackley 1997, Hiltz and Wellman 1997). The effectiveness of DL is then established by showing either no significant difference in the learning outcomes of the two environments or by showing higher learning outcomes associated with the DL environment.

According to Turoff and Hiltz (1995), the objective of distributed learning is not merely to duplicate the features and effectiveness of a face-to-face environment, but rather to use the powers of technology to create a more effective learning environment. Therefore, we chose to employ what we believe to be a more accurate measure of learning to gauge the comparative effectiveness of the DL environments. Our decision was to measure the outcomes of DL in terms of both cognitive and perceived learning. As defined previously, cognitive learning involves changes in an individual’s mental models; i.e., internal representations of knowledge elements comprising a domain as well as interrelationships among those knowledge elements (Ansari and Simon 1979, Neches 1987, Siegler 1986). Perceived learning is defined as changes in the learner’s perceptions of skill and knowledge levels before and after the learning experience. Approaches to the measurement of these two learning outcomes are described later in the paper. Given our definition and characterization of DL, students in this environment learn by working cooperatively in small technology-supported distributed groups on problem-solving tasks designed to promote learning. By definition, however, the groups’ collaborative learning interactions and communications are distributed in both time and space and, therefore, must be mediated and supported by some application or suite of communication and information technologies.

The flexibility of modern information and communication technologies implies that alternative configurations, each with different features, functionality, and complexity, could be employed to support distributed learning. One general classification of such a mediating technology is the typical group support system (GSS). Within the GSS literature, DeSanctis and Gallupe (1987) have identified three levels of increasing functionality and complexity in a GSS. Level-1 systems provide simple message exchange capability (i.e., capability for creation, transmission, and storage of messages), which is aimed primarily at facilitating fundamental communication and information exchange among group members. Level-2 systems extend beyond simple messaging capabilities and include tools for communication and task structuring (e.g., meeting agenda setting or group interaction models) and information manipulation, management, filtering, and sorting. Unlike a Level-1 system, Level 2 can support automated decision support and modeling tools as well as functions intended to facilitate multiparticipant collaboration. Finally, Level-3 systems expand their capabilities beyond systems at Levels 1 and 2 and provide machine-induced group interaction patterns and guidance to prescribe communication rules for the group.
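The three-level taxonomy can be sketched as a nested capability hierarchy. The capability names and the `level_of` helper below are our illustrative shorthand for the distinctions just described, not a formalism from DeSanctis and Gallupe (1987):

```python
# Illustrative capability sets for the three GSS levels; each level
# subsumes the capabilities of the levels below it.
GSS_LEVELS = {
    1: {"messaging"},  # create, transmit, and store messages
    2: {"messaging", "task_structuring",
        "information_management", "decision_support"},  # adds structuring tools
    3: {"messaging", "task_structuring",
        "information_management", "decision_support",
        "machine_guided_interaction"},  # adds machine-induced interaction rules
}

def level_of(capabilities: set) -> int:
    """Highest GSS level whose required capabilities are all present."""
    return max(lvl for lvl, caps in GSS_LEVELS.items()
               if caps <= capabilities)

# The Beta system offered messaging plus structuring, management, and
# workflow tools, but no machine-guided interaction patterns.
beta = {"messaging", "task_structuring", "information_management",
        "decision_support", "workflow"}
print(level_of(beta))  # -> 2
```

Under this sketch, the e-mail/listserv environment classifies as Level 1 and the Beta system as Level 2, matching the study's characterization.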

Considering the variety of functionality of information technologies, studies comparing the relative efficacy of alternative technology configurations for support of DL should inform both the academic and practitioner communities. Adopting the DeSanctis and Gallupe (1987) GSS taxonomy as our guide, we investigated the relative learning efficacy of two DL environments: one mirroring a Level-1 GSS and one configured as a Level-2 GSS. The Level-1 GSS consisted of a simple text-based messaging system. This system provided capabilities for sending and receiving e-mail messages as well as a simple listserv capability for support of one-to-many messages. The Level-2 system, referred to as Beta system, provided sophisticated messaging and e-mail capabilities, document and information management, workflow structuring and coordination, information search and filtering, and group collaboration and threaded discussion capabilities. The acquisition, training, and operating costs of the Beta system were much higher than for the Level-1 e-mail system. The Beta system not only enabled fluent interactions among participants, but also provided the capability to see the overall structure of the task (all subcomponents of the task and the ordered steps to be taken to accomplish them) through the information management and workflow structuring features of the GSS. These features were tightly integrated and were provided to the user through a unified multimedia interface.

Hypotheses
Learning emerges from the interaction of a stimulus (information) and the mind of the learner. Cognitive learning theories identify a number of cognitive subprocesses involved in learning. These subprocesses can be divided broadly into two categories: reception and structuring. Reception (Ausubel 1968) involves the perception of information in the learner’s short-term memory. Structuring (Norman 1982) consists of processing this information and connecting it to appropriate prerequisite concepts retrieved from the long-term memory to form new (or modified) knowledge structures. In the context of collaborative learning, where external stimuli to individual learners arise primarily from social interactions with others (Vygotsky 1929, Piaget 1954, Johnson et al. 1991), the reception subprocess occurs chiefly through interactions with others, while the structuring is achieved mainly through individuals’ manipulation of symbolic and concrete representations of the external world. In this case, transforming occurs principally through reading, processing, and assimilating others’ messages and composing their knowledge and responses to these messages.

Figure 1. An Overview of the Executive Development Program (Focus of this Study)

In implementing DL environments, one needs to consider the use of technology for both the reception and structuring subprocesses of learning. Because reception in collaborative learning primarily involves communication and interactions among learners, both Level-1 and Level-2 GSS tools should facilitate reception in DL environments through the provision of communication and messaging capabilities.

We hypothesize, however, that the effectiveness of DL environments can be further enhanced by supporting the structuring subprocess of learning through technology. According to Ausubel (1968), the structuring subprocess of learning can be enhanced through advance organizers. Advance organizers provide “ideational scaffolding” and refer to additional information (explanations, principles, background, supplementary material) as well as to the structure of that information (form, flow, presentation mode, sequence, and organization). For example, the Beta system used in this study provided easy-to-use capabilities for sharing information in various forms (e.g., spreadsheets and multimedia documents), access to additional information through search and filtering features, and structured information exchange among group members through threaded discussions and workflow models. The Level-2 GSS tools can support the structuring subprocess of DL by facilitating the creation and sharing of these advance organizers among the group members. Thus:

Hypothesis 1. The cognitive learning outcome of a DL environment comprised of a Level-2 GSS will be higher than that of a Level-1 GSS.

Hypothesis 2. The perceived learning outcome of a DL environment comprised of a Level-2 GSS will be higher than that of a Level-1 GSS.

Furthermore, considering the additional features and capabilities provided by the Level-2 GSS for communication and information sharing:

Hypothesis 3. Learners’ satisfaction will be higher in the DL environment comprised of the Level-2 GSS than in that of a Level-1 GSS.

The hypothesized relationships were tested in a quasi-experimental field setting as described in the next section.

Methodology
The research was conducted in an executive development program delivered through a major state university. The structure of this executive development program, depicted in Figure 1, consisted of two two-week residential modules separated by a 10-week DL module. The residential modules were held at the executive development facility of the state university. During the 10-week DL module, small distributed groups of executives worked on a complex, real-world task (described in a later section). This study focused on the 10-week-long DL module.

The Subjects
Two hundred and six executives from a large federal agency participated in the study. Given the limited capacity of the university’s executive development facility, this large executive group was divided into four cohorts of 50 to 53 individuals. The residential modules of the program were staggered and housed on campus.

The total sample included 121 males and 85 females with an average subject age of 49. Educational backgrounds varied slightly, with 48 subjects holding bachelor’s degrees, 117 with master’s degrees, and three with doctoral degrees. Thirty-eight subjects had other types of degrees, such as law or community college degrees. There were no statistically significant differences among the four cohorts in terms of age, experience, or self-reported computer experience.
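A cohort-equivalence check of this kind can be sketched as a one-way ANOVA across the four cohorts. The ages below are invented for illustration, not the study's data:

```python
from scipy.stats import f_oneway

# Hypothetical ages for a few members of each of the four cohorts
# (illustrative numbers only, not the study's sample).
cohort_ages = [
    [48, 50, 47, 52, 49],
    [49, 51, 48, 50, 47],
    [50, 48, 49, 51, 49],
    [47, 49, 50, 48, 52],
]

# A non-significant F test (p > 0.05) is consistent with the cohorts
# being comparable on the measured attribute.
f_stat, p_value = f_oneway(*cohort_ages)
print(f_stat, p_value)
```

The same test would be repeated for each background attribute (age, experience, self-reported computer experience) to support the no-difference claim.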

Procedure
The primary learning objective of the executive development program was to develop a thorough understanding of a customer-oriented perspective on housing and urban development (referred to as “community first”), as well as understanding of its impact on community well-being. The two-week residential modules of the program employed a mix of pedagogical approaches: classroom lectures and discussion, case analysis, exercises, and site visits. The 10-week DL module of the program employed the collaborative distributed learning model described earlier in the paper.

The DL module was specifically designed to promote learning and understanding of the community-first concept. It involved a complex community-planning and development project for a town of 35,000 people. Small groups of subjects assumed the role of consultant teams to the mayor of the town and developed a specific strategy to increase the home ownership rate from the current 38% level to the target level of 51% (or greater) by the year 2006. At the conclusion of the 10-week project, each group submitted a report and made a presentation containing specific recommendations for the mayor. All the groups were given census, demographic, and economic data for the town and the surrounding region. The project data were made available to half of the distributed groups on the World Wide Web, and were posted on the multimedia database of the Beta system used by the other half of the distributed groups.

Completing the project required application of the concepts, ideas, techniques, and skills taught during the first two weeks of the program. Completion of the distributed group project was deemed an important aspect of this executive development program, and all the groups successfully completed the project.

Alternative Technologies for Distributed Learning
Two alternative technology environments were made available to the executive groups involved in the study. Groups in Cohorts 1 and 2 used the Level-1 GSS, consisting of an e-mail system with listserv capabilities, in conjunction with the World Wide Web (WWW). All members of Cohorts 1 and 2 were given e-mail accounts and WWW access through a server at the university. The WWW was used as a publishing medium for posting the project data and background information, providing easy and uniform access for the distributed group members. Because the executives had previous experience in using both the e-mail system and the WWW in their regular work environments, little training on these tools was required.

Cohorts 3 and 4 used the Level-2 Beta system, which provided an integrated set of functionality and features. The features included advanced e-mail (with spell-check, multiple views for sorting messages, and extensive message-editing capabilities), a media center (for storage and access of multimedia documents), multithreaded discussion capability for asynchronous collaboration, workflow management and coordination features, and a seamless integrated environment for viewing and manipulating word processor and spreadsheet files. Members of Cohorts 3 and 4 were unfamiliar with the Beta system and were therefore trained extensively in its use during the two-week residential module of the program prior to the DL module.1 They were provided with access to the Beta system on a server at the university through a toll-free telephone number.

Distributed Learning Procedures
At the conclusion of the first two-week residential module and prior to their departure, the cohorts were randomly divided into several groups consisting of 7–10 individuals. If group members happened to end up colocated in the same geographic office, an adjustment in the group composition was made to ensure a geographically distributed group environment (i.e., no face-to-face contact and only technology-mediated interactions among the group members while they were working on their project).

A faculty member knowledgeable about both the course content and the community-first project was assigned to play the role of the project facilitator for all groups (in both the e-mail and Beta system DL conditions). This faculty member was available to answer project-related questions and clarify task-related issues, but she was not aware of any hypotheses under investigation at any time during the project.

Measurements
A background and demographic questionnaire capturing the participants’ age, gender, learning style (Kolb 1976), work experience, education, and self-rated computer experience was administered at the outset of the program.

The learning outcome of the DL was measured in terms of both changes in mental model and perceived learning. The literature pertaining to mental models, while rich in depth and breadth, has provided neither a unique nor unanimous interpretation or approach to the measurement of the mental model concept (Wilson and Rutherford 1989, Staggers and Norcio 1993). After reviewing the literature, we adopted Vosniadou and Brewer’s (1987) broad definition of mental models as “all forms of mental representation, general or specific, from any domain, causal, intentional or spatial” (p. 193).

1 Group members were provided with 3.5 hours of in-class, hands-on training with the Beta system. In addition, a one-hour-long one-on-one help session was provided to the participants, who were also free to participate in a lab during their off hours. Finally, a two-hour Beta system question-and-answer session was held before participants’ departure from the university campus at the conclusion of the first residential segment of the program.

Because mental models refer to internal representations in an individual’s memory, they cannot be directly investigated, nor can their changes be directly measured. An indirect approach to the investigation of mental models in the cognitive science and training literature has examined subjects’ knowledge structure (knowledge content and organization). For example, Sein et al. (1987) tested their subjects’ knowledge structure via a set of questions concerning the various components of a system (the target of training) and the relationships among the components. The subjects’ test scores were used as a measure of the quality of subjects’ mental models and, as such, the effectiveness of the learning approach. Lim et al. (1997) used 10 open-ended questions to assess the knowledge of their subjects and to gauge the inference potential of their mental models. In their study, two judges graded the answers, and high scores were assumed to indicate the existence of a “good” mental model. Consistent with prior research, this study measured changes in the participants’ knowledge structures pertaining to the community and housing development (the content of the training program) before and after the DL session to represent their learning.

In the education and learning literature, several approaches represent the internal knowledge structure (also referred to as cognitive structure) graphically. For example, the “association memory” of the information-processing theorists (Newell 1977), the “entailment structure” of conversation theory (Pask 1976a, b), the “frame-system” theory for memory (Minsky 1977), concept-mapping (Dana 1993, Jones and Vesilind 1995), and the networks of semantics (Rumelhart 1977) all include graphic representations aimed at modeling internal knowledge. Consistent with this literature, we adopted a graphical approach—a flowcharting technique—to create a legible graphic representation of subjects’ knowledge representations within this project; that is, we asked each subject to develop a flowchart diagram to reflect individuals’ understanding of a community-first approach to housing development (the information to be learned). We chose the flowchart approach over other possible graphic techniques for three reasons: (1) the subjects and the two judges who participated in this study were familiar with the flowcharting technique, thus eliminating the need for long and extensive training; (2) the ease with which a standard set of symbols could be imposed, which would in turn reduce ambiguity and facilitate scoring the subjects’ knowledge representation of the community-first concept; and (3) the flowcharting technique was particularly well suited for representation of the community-first concept, a process consisting of specific activities, decision-making steps, sequence, and information resources.

The subjects constructed their first flowchart diagrams at the conclusion of the first two-week residential module and before the DL module. Immediately after the completion of the 10-week DL module, and upon returning to the university campus, participants were asked to draw the flowchart diagrams again.2

Two subject-matter experts unfamiliar with the research objectives and hypotheses judged the quality of flowchart diagrams on five-point Likert scales (1 = low, and 5 = high). Quality was, in this context, defined as completeness and accuracy of the flowchart diagrams. Completeness denoted the inclusion of all necessary steps and information comprising the community-first approach to housing development. Accuracy meant the correctness of information content and the sequence of the steps represented in the flowchart diagrams. The rating process of diagrams was blind and the raters did not know whether they were rating the diagrams created before or after the completion of the group project. Furthermore, the raters jointly rated the diagrams to increase consistency and reliability of ratings. The raters were asked to discuss discrepancies in their ratings to reach an agreement. The changes in the rated quality of a subject’s pre- and post-DL flowchart diagrams were used as a measure of the subject’s cognitive learning (i.e., change in his or her knowledge representations).
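The pre/post change-score measure can be sketched as follows. The ratings are hypothetical, and the between-condition t-test on change scores mirrors the kind of comparison described, not the study's actual analysis or data:

```python
from scipy import stats

# Hypothetical 1-5 quality ratings of pre- and post-DL flowchart
# diagrams for subjects in the two conditions (not the study's data).
email_pre, email_post = [2, 3, 2, 3, 2, 3], [4, 4, 3, 5, 4, 4]
beta_pre, beta_post = [3, 2, 3, 2, 3, 2], [4, 3, 4, 3, 3, 3]

# Cognitive learning = post-DL rating minus pre-DL rating per subject.
email_change = [post - pre for pre, post in zip(email_pre, email_post)]
beta_change = [post - pre for pre, post in zip(beta_pre, beta_post)]

# Independent-samples t-test comparing change scores across conditions.
t, p = stats.ttest_ind(email_change, beta_change)
print(email_change, beta_change, t, p)
```

With these invented numbers the e-mail condition shows the larger mean change, echoing the direction of the study's reported result; real data would of course drive the actual test.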

Perceived learning was measured by a questionnaire consisting of 20 five-point Likert scales (Alavi 1994,

2The participants did not know a priori that they would be asked to create these two flowchart diagrams.

adapted from Hiltz 1988). This questionnaire was administered immediately before and after the DL module. The pre- and post-perceived-learning questionnaire items were subjected to separate principal component analyses, followed by varimax rotations. Two coherent scales with acceptable alpha reliability were obtained. These scales, displayed in Table 1, included perceived subject-matter learning (consisting of eight items, SM1 through SM8) and perceived skill development (consisting of three items, SD1 through SD3).3
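The scale-construction steps above (separate principal component analyses followed by varimax rotations, with alpha reliabilities) can be sketched in Python. The varimax routine and Cronbach's alpha are standard formulas; the response data below are synthetic stand-ins generated to mimic the 8-item SM / 3-item SD structure of Table 1, since the study's raw questionnaire data are not available.

```python
import numpy as np

def varimax(loadings, max_iter=100, tol=1e-6):
    """Orthogonally rotate a loading matrix toward the varimax criterion."""
    p, k = loadings.shape
    R = np.eye(k)
    crit = 0.0
    for _ in range(max_iter):
        L = loadings @ R
        u, s, vt = np.linalg.svd(
            loadings.T @ (L ** 3 - (1.0 / p) * L @ np.diag((L ** 2).sum(axis=0)))
        )
        R = u @ vt
        if s.sum() - crit < tol:
            break
        crit = s.sum()
    return loadings @ R

def cronbach_alpha(items):
    """Alpha reliability of a set of items (rows = respondents, cols = items)."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    return k / (k - 1) * (1 - items.var(axis=0, ddof=1).sum()
                          / items.sum(axis=1).var(ddof=1))

# Synthetic responses: two latent factors driving 8 "SM" and 3 "SD" items.
rng = np.random.default_rng(0)
f = rng.standard_normal((205, 2))
X = np.hstack([np.outer(f[:, 0], np.ones(8)), np.outer(f[:, 1], np.ones(3))])
X += 0.5 * rng.standard_normal(X.shape)

# Principal components of the item correlation matrix; retain eigenvalues > 1.
eigval, eigvec = np.linalg.eigh(np.corrcoef(X, rowvar=False))
order = np.argsort(eigval)[::-1]
eigval, eigvec = eigval[order], eigvec[:, order]
k = int((eigval > 1.0).sum())
loadings = eigvec[:, :k] * np.sqrt(eigval[:k])
rotated = varimax(loadings)
```

With this two-factor structure the Kaiser criterion retains exactly two components, and after rotation the SM items load on one factor and the SD items on the other, mirroring the pattern in Table 1.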

Because the DL process consisted of a multiparticipant problem-solving task, satisfaction with the process was measured by Green and Taber's (1980) group process satisfaction questionnaire. This questionnaire consists of five bipolar, five-point Likert scales and was administered immediately after the groups completed the project.

The group interaction processes were defined in terms of the frequency and type of electronic messages exchanged among the group members and captured in the system files.4 Message type was determined through an examination of content. Three distinct types of messages corresponding to group interaction processes were identified as a result of this examination: (1) task performance and coordination, (2) group maintenance, and (3) technology use. Examples of messages in each category are provided later in Table 5B.

Data Analysis and Results

Table 2 displays the correlation matrix for the learning and satisfaction measures, along with means and standard deviations for these outcomes measured by DL condition (e-mail and Beta system) and time of mea-

3Nine items were dropped from further analyses to improve the reliability of the measures.

4To obtain a more complete picture of the groups' communication behavior, each group was asked to report on the number of telephone calls and faxes exchanged among the group members on a weekly basis. Post hoc analysis of the results indicated no significant differences in the number of reported telephone call and fax exchanges between the groups under the two different DL conditions.

Downloaded from informs.org by [155.33.120.167] on 04 December 2014, at 22:39. For personal use only, all rights reserved.


ALAVI, MARAKAS, AND YOO: A Comparative Study of Distributed Learning Environments

Information Systems Research, Vol. 13, No. 4, December 2002, p. 411

Table 1 Results of Factor Analysis and Reliability Tests for Perceived Learning Measures

                  Pre                                       Post
                  Perceived Subject-    Perceived Skill     Perceived Subject-    Perceived Skill
Items             Matter (SM) Learning  Development (SD)    Matter (SM) Learning  Development (SD)
SM1               0.85952                                   0.85948
SM2               0.90499                                   0.87827
SM3               0.86587                                   0.88835
SM4               0.83355                                   0.86760
SM5               0.69983                                   0.66652
SM6               0.82988                                   0.80314
SM7               0.77829                                   0.80294
SM8               0.65880                                   0.65388
SD1                                     0.88748                                   0.91043
SD2                                     0.90582                                   0.90076
SD3                                     0.87439                                   0.83114
Eigenvalue        5.29237               2.48269             5.21608               2.42971
% of Variance     48.1                  22.6                47.4                  22.1
Reliability       0.9224                0.8784              0.9230                0.8552

Note.

SM1: I became more interested in the “community-first” concept.

SM2: I gained a good understanding of the “community-first” concept.

SM3: I learned to identify central ideas in the “community-first” area.

SM4: I developed the ability to communicate clearly about the “community-first” concept.

SM5: I was stimulated to do additional work in the area of “community-first.”

SM6: My ability to critically analyze “community-first” issues was improved.

SM7: I found the current project to be a good learning experience.

SM8: Given a choice, I would take part in a project similar to the current project.

SD1: Entering into a partnership with other public agencies.

SD2: Participating in community-based partnership to address site or area-specific community needs.

SD3: Negotiating win-win strategies among competing public and private partners.

surement (pre- and post-DL module). There was no statistically significant impact of demographic variables on learning outcomes. To test for possible group-level main effects, we calculated the Intra-Class Correlation (ICC) (Kenny and La Voie 1985) to determine the appropriateness of analyses conducted at the individual level. The ICC is based on a nested ANOVA test that ascertains whether or not members of the same team produce more similar outcomes. An ICC of 1.0 indicates that all the team members have the same score; an ICC of 0.0 indicates that people within a team are no more similar than people from different teams. The resultant ICC values for this study (0.001 to 0.045) were well below the suggested guideline of 0.12 for considering group effects (James 1982). Thus, all the analyses were conducted at the individual level.
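The ICC computation described above can be sketched from its one-way (nested) ANOVA components. The estimator below is the standard ICC(1) formula; the toy team scores in the usage note are invented for illustration only.

```python
import numpy as np

def icc1(groups):
    """ICC(1) from a one-way ANOVA: (MSB - MSW) / (MSB + (n - 1) * MSW),
    with n the average group size. 1.0 means members are identical within
    teams; ~0.0 means members are no more alike within teams than across."""
    groups = [np.asarray(g, dtype=float) for g in groups]
    k = len(groups)
    sizes = np.array([len(g) for g in groups])
    n = sizes.mean()
    grand = np.concatenate(groups).mean()
    # between-teams and within-teams mean squares
    msb = sum(len(g) * (g.mean() - grand) ** 2 for g in groups) / (k - 1)
    msw = sum(((g - g.mean()) ** 2).sum() for g in groups) / (sizes.sum() - k)
    return (msb - msw) / (msb + (n - 1) * msw)
```

For example, `icc1([[1, 1, 1], [2, 2, 2], [3, 3, 3]])` returns 1.0 (teams perfectly homogeneous), while identical score distributions in every team drive the estimate to zero or slightly below.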

Hypotheses 1 through 3 were formulated to investigate whether a DL environment providing the functions and capabilities of a Level-2 GSS affects the learning outcomes more positively than a Level-1 GSS. To test these hypotheses, we performed a repeated-measures ANOVA with time (within-subject independent variable) and technology (between-subject independent variable). An interaction effect between time



Table 2 Means, Standard Deviations, and Correlations of the Outcome Measures

Variables                                      Mean   Std Dev   1         2         3         4         5        6
1. Cognitive learning (Pre)                    1.80   2.15      1.0000
2. Cognitive learning (Post)                   2.89   1.49      0.1952*   1.0000
3. Perceived subject-matter learning (Pre)     3.34   0.53     -0.0031    0.0328    1.0000
4. Perceived subject-matter learning (Post)    3.63   0.54     -0.0292    0.0224    0.4583*   1.0000
5. Perceived skill development (Pre)           3.61   0.67     -0.0984    0.0122    0.0971    0.1228*   1.0000
6. Perceived skill development (Post)          3.79   0.62      0.0153    0.0228   -0.0190    0.0149    0.4046*  1.0000
7. Satisfaction with process                   3.98   0.78      0.0087   -0.0550    0.1414*   0.2277**  0.0874   0.0911

*p < 0.05
**p < 0.01

Table 3 Repeated-Measures ANOVA on Distributed Learning Outcome Measures

Dependent Variables                  Source of Variance    Wilks' Lambda   d.f.   F-Value
Cognitive learning                   Time                  0.829           203    25.775**
                                     Time × Technology     0.956           203    5.774*
Perceived subject-matter learning    Time                  0.771           203    60.353**
                                     Time × Technology     0.970           203    6.268*
Perceived skill development          Time                  0.932           203    14.741**
                                     Time × Technology     0.998           203    0.378

Table 4A E-Mail Distributed Learning Condition: Paired T-Test of Outcome Measures

Measures                             Paired Difference (Post - Pre)    T-Value
Cognitive learning                   M 1.4433, S.D. 2.3908             5.631**
Perceived subject-matter learning    M 0.4019, S.D. 0.5175             7.244**

Table 4B Beta System Learning Condition: Paired T-Test of Outcome Measures

Measures                             Paired Difference (Post - Pre)    T-Value
Cognitive learning                   M 0.8171, S.D. 2.3237             3.820**
Perceived subject-matter learning    M 0.2060, S.D. 0.5790             3.865**

and technology would suggest different learning outcomes between the e-mail and Beta systems. The repeated-measures ANOVA results showed significant interaction effects between time and technology for the cognitive and perceived learning measures (see Table 3).
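With only two measurement occasions, the Time × Technology interaction amounts to asking whether the post-minus-pre change scores differ between the two technology conditions. The sketch below simulates such a design; the sample size, gains, and noise levels are hypothetical choices that loosely echo the means in Tables 4A and 4B, not the study's actual data.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n = 200  # hypothetical participants per condition

# Simulated pre/post scores: the e-mail condition gains more on average.
pre_email = rng.normal(1.8, 1.0, n)
post_email = pre_email + 1.44 + rng.normal(0, 1.0, n)
pre_beta = rng.normal(1.8, 1.0, n)
post_beta = pre_beta + 0.82 + rng.normal(0, 1.0, n)

# Main effect of time: did scores change overall (both conditions pooled)?
t_time, p_time = stats.ttest_rel(np.r_[post_email, post_beta],
                                 np.r_[pre_email, pre_beta])

# Time x Technology interaction: do the conditions change by different amounts?
f_inter, p_inter = stats.f_oneway(post_email - pre_email, post_beta - pre_beta)
```

Both tests come out significant here by construction, paralleling the pattern reported for the cognitive and perceived subject-matter learning measures.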

To better understand the nature of these interaction effects, we divided the sample into two subgroups based on the technology used by participants. Next, we performed separate paired t-tests for the cognitive learning and perceived learning measures. For both the e-mail and Beta system conditions, the paired t-test results indicated significant changes in the cognitive learning and perceived subject-matter learning measures between the pre- and post-test measurements (p < 0.001). However, a further analysis of the means of both learning measures for the e-mail and Beta system conditions revealed that executives in the e-mail condition showed a higher degree of change between the pre- and post-test measurements than executives in the Beta system condition (Tables 4A and 4B).

Taken together, these results indicate the following: The magnitude of change between the pre- and post-learning measures was higher under the e-mail technology condition. That is, participants who used the e-mail technology learned more than those who used the more complex and sophisticated Beta system.

One-way ANOVA indicated no significant differences in the participants' satisfaction with



Table 5A One-Way ANOVA Analysis of e-Mail and Beta System Electronic Messages

                                      e-Mail             Beta System
Variables                             Mean     S.D.      Mean     S.D.      F-Value
% of task-related messages            66.15    11.82     50.31    26.53     3.171*
% of group-maintenance messages       16.54    8.39      16.89    24.92     0.002
% of technology-related messages      17.31    12.66     32.79    22.78     3.907*

Table 5B Samples of the e-Mail and Beta System Messages in the Three Different Content Categories

Task-Related Messages
Example 1 (e-mail):
Folks, I've attached a chart showing how I clustered the various areas. Hope it makes sense to you. Let me know if you have any questions or comments.

Example 2 (Beta System):
Attached are WordPerfect files containing a proposed implementation strategy format and an implementation matrix. Please comment and fill in where appropriate.

Group Maintenance-Related Messages
Example 1 (e-mail):
Yea Group, Rah, Rah, Rah!!!! Well, I have finished amending my piece on the underserved population—I'm glad that is at least over with! So I thought I'd send along the above cheer for all my group members!

Example 2 (Beta System):
John, thanks for the editing. (I have no misguided pride of authorship.) In fact, group editing is exactly what I hoped would occur. CE

Technology Use Messages
Example 1 (e-mail):
I have not been receiving any mail via this listserv. This came up during a conference call with my group yesterday. Please double check my e-mail address as it was wrong on the original list. My address should be: . . . . Please note the change and thanks.

Example 2 (Beta System):
This file does not do anything by itself. I loaded it in Quattro Pro but it's just a file that needs to be used with other files as part of map file stuff. There are a bunch of map files listed in the media center {a multimedia document repository} that need to be loaded.

the distributed learning process using the two different distributed learning technologies.

Because there were significant differences in the learning outcomes of the two technology conditions, we conducted a post hoc comparison of the groups' electronic communication messages (their frequency and message type with respect to content). The results of the post hoc analysis and examples of the various categorical message types are displayed in Tables 5A and 5B.

There were no significant differences in the percentage of group maintenance-related messages under the two technology conditions. However, the percentage of task-related messages in the e-mail groups was significantly higher than in the Beta system groups. Conversely, the percentage of technology-related messages in the Beta system groups was significantly higher than under the e-mail condition. The implications of these findings for the learning outcomes are discussed in the next section.
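The post hoc comparison behind Table 5A is a one-way ANOVA over per-group message percentages. A minimal sketch follows, with invented group-level values: the number of groups per condition and the individual percentages are assumptions, chosen only so their rough means and spreads echo Table 5A.

```python
import numpy as np
from scipy import stats

# Hypothetical percentage of task-related messages per group (8 groups per
# condition is an assumption; the study does not report the raw group values).
email_task = np.array([66.0, 75.0, 58.0, 80.0, 52.0, 62.0, 70.0, 66.0])
beta_task = np.array([50.0, 20.0, 75.0, 60.0, 30.0, 45.0, 85.0, 37.0])

f_stat, p_val = stats.f_oneway(email_task, beta_task)
```

With two conditions, the F statistic here is simply the square of the independent-samples t statistic, so the ANOVA and a two-sample t-test give the same p-value.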

Discussion and Implications of Findings

The findings from this study are noteworthy and quite intriguing. According to the data from this sample, the learning outcome under the e-mail technology condition was higher relative to the learning outcome under the more complex GSS Level-2 technology condition. That is, executives who used e-mail in support of their learning interactions learned more about the community-first concept (the target of training) than did executives who used the more sophisticated and powerful Beta system. One possible explanation for this finding might be that the groups assigned to the Beta system were not adequately trained in its use and experienced difficulties in applying its tools. However, it is unlikely that inadequate tool training or application difficulties played a role here: the post hoc examination of the Beta system users' messages did not reveal any difficulties in its use, and the groups using the Beta system had access to a toll-free telephone number for help and troubleshooting.

The results of the post hoc analysis of the actual communication messages, however, shed some light on this finding. No significant differences in the percentage of group maintenance-related messages existed between the e-mail and Beta system groups. However,



the groups using the Beta system had posted a significantly higher percentage of technology-related messages asking group mates to confirm message transmission and to clarify technology use.

These findings suggest that in a DL environment supported by relatively new and complex technology (such as the Beta system), group members seem to go through an initial period of technology sense-making, which in turn increases their cognitive-processing load. Thus, DL environments that involve the use of complex technology require a relatively higher level of total participant effort. This cognitive effort perspective provides some insight into how complex DL technologies may influence learning by altering the effort relationships between the learning process and the use of DL technology. According to this perspective (Todd and Benbasat 1991, Johnson and Payne 1985), as the subcomponents of cognitive tasks are made more or less laborious, individuals adapt their strategies in such a way as to limit their overall expenditure of effort. Thus, under the Beta system condition, the increased level of effort stemming from the use of more complex technology appears to have reduced the level of subjects' effort allocated to the learning task. (Recall that a smaller percentage of task-related messages was exchanged among the Beta system users.) A decreased level of effort expended on the learning task seems to have diminished the impact of the advance organizers (the sophisticated features of the Level-2 GSS described earlier in the paper) in support of the structuring component of learning, as hypothesized in this study. This interpretation is consistent with the work of Collins (1993), who found that the task focus and performance of desktop publishers using a complex software package decreased in complex task environments. Similarly, Todd and Benbasat (1991) examined DSS usage behavior and found that subjects adapted their decision-making strategies to maintain a low overall level of effort. In our study, the complexity of the Level-2 system, combined with the task complexity, might have diverted effort from the learning task in the Beta system user groups, leading to the relatively lower learning outcomes. Thus, we concluded that a more complex and sophisticated DL technology environment, in and of itself, does not necessarily lead to

enhanced learning. We recommend future research focusing on complex DL systems from the cognitive effort perspective.

This research represents early work in the investigation of the learning outcomes of alternative DL environments and, as such, must be interpreted with caution given the limited scope and sampling employed. Furthermore, the noncontrived field setting of this research raises issues regarding the generalizability of the findings. Nonetheless, we believe it is necessary that inquiries into the effects of various DL environments be pursued so that a more rigorous development of useful theory in this area can be realized. We offer this study as a first step in this journey.

References

Alavi, M. 1994. Computer-mediated collaborative learning: An empirical evaluation. MIS Quart. 18 159–174.

——, D. Leidner. 2001. Research commentary: Technology-mediated learning—A call for greater depth and breadth of research. Inform. Systems Res. 12(1) 1–10.

——, B. Wheeler, J. Valacich. 1995. Using IT to reengineer business education: An exploratory investigation of collaborative telelearning. MIS Quart. 19 159–174.

Anzai, Y., H. A. Simon. 1979. The theory of learning by doing. Psych. Rev. 86 124–140.

Ausubel, D. P. 1968. Educational Psychology: A Cognitive View. Holt, Rinehart and Winston, New York.

Collins, R. 1993. Impact of information technology on the processes and performance of knowledge workers. Unpublished Ph.D. dissertation, University of Minnesota, Minneapolis, MN.

Dana, N. F. 1993. Elementary school pre-service teachers' conceptions of social studies teaching and learning. Paper presented at the Annual Meeting of the National Council for the Social Studies (ED367576), Nashville, TN.

DeSanctis, G., R. B. Gallupe. 1987. A foundation for the study of group decision support systems. Management Sci. 33(5) 589–609.

Glaser, R., M. Bassok. 1989. Learning theory and the study of instruction. Annual Rev. Psych. 40 631–666.

Green, S. G., T. D. Taber. 1980. The effects of three social decision schemes on decision group process. Organ. Behavior and Human Performance 25 97–106.

Greeno, J. G. 1974. Processes of learning and comprehension. L. W. Gregg, ed. Knowledge and Cognition. Lawrence Erlbaum Associates, Hillsdale, NJ.

Hiltz, S. R. 1988. Learning in a Virtual Classroom. A Virtual Classroom on EIES: Final Evaluation Report, vol. 1. New Jersey Institute of Technology, Newark, NJ.

——, B. Wellman. 1997. Asynchronous learning networks as a virtual classroom. Comm. ACM 40(9) 44–49.



James, L. R. 1982. Aggregation bias in estimates of perceptual agreement. J. Applied Psych. 67 219–229.

Johnson, D. W., R. T. Johnson, K. Smith. 1991. Cooperative learning: Increasing college faculty instructional productivity. ASHE-ERIC Higher Education Report (4), Clearinghouse on Higher Education, The George Washington University, Washington, D.C.

Johnson, E. J., J. W. Payne. 1985. Effort and accuracy in choice. Management Sci. 31(4) 395–414.

Jones, G. M., E. Vesilind. 1995. Pre-service teachers' cognitive frameworks for class management. Teaching and Teacher Education 11(4) 313–330.

Kenny, D. A., L. La Voie. 1985. Separating individual and group effects. J. Personality Soc. Psych. 48(2) 339–348.

Kolb, D. 1976. Learning Style Inventory: Technical Manual. McBer, Boston, MA.

Lim, K. H., L. M. Ward, I. Benbasat. 1997. An empirical study of computer system learning: Comparison of co-discovery and self-discovery methods. Inform. Systems Res. 8(3) 254–272.

Minsky, M. 1977. Frame-system theory. P. N. Johnson-Laird and P. C. Wason, eds. Thinking—Readings in Cognitive Science. Cambridge University Press, Cambridge, U.K., 355–376.

Neches, R. 1987. Learning through incremental refinement of procedures. D. Klahr, P. Langley, and R. Neches, eds. Production System Models of Learning and Development. MIT Press/Bradford Books, Cambridge, MA, 163–219.

Newell, A. 1977. On the analysis of human problem solving. P. N. Johnson-Laird and P. C. Wason, eds. Thinking—Readings in Cognitive Science. Cambridge University Press, Cambridge, U.K., 46–61.

Norman, D. A. 1982. Learning and Memory. W. H. Freeman, San Francisco, CA.

Pask, G. 1976a. Conversation Theory—Applications in Education and Epistemology. Elsevier, Amsterdam, The Netherlands.

——. 1976b. Conversational techniques in the study and practice of education. British J. Educational Psych. 46(1) 12–25.

Piaget, J. 1954. The Construction of Reality in the Child. Basic Books, New York.

Rumelhart, D. E. 1977. Introduction to Human Information Processing. John Wiley and Sons, New York.

Sein, M. K., R. P. Bostrom, L. Olfman. 1987. Conceptual models in training novice users. H. J. Bullinger and B. Shackel, eds. Human-Computer Interaction—INTERACT '87. Elsevier Science Publishers B.V., North Holland, 861–867.

Shuell, T. J. 1981. Toward a model of learning from instructions. K. Block, ed. Psychological Theory and Educational Practices: Is It Possible to Bridge the Gap? Meeting of the Amer. Educational Res. Association, Los Angeles, CA.

——. 1986. Cognitive conceptions of learning. Rev. Ed. Res. 56 475–500.

Siegler, R. S. 1986. Children's Thinking. Prentice Hall, Englewood Cliffs, NJ.

Staggers, N., A. F. Norcio. 1993. Mental models: Concepts for human-computer interaction research. Internat. J. Man-Machine Stud. 38 587–605.

Stevenson, H. 1983. Making the grade: School achievement in Japan, Taiwan, and the United States. Paper presented at the Annual Report of the Center for Advanced Study in the Behavioral Sciences, Rockville, MD.

Storck, J., L. Sproull. 1995. Through a glass darkly: What do people learn in videoconferencing? Human Comm. Res. 22 197–219.

Todd, P., I. Benbasat. 1991. An experimental investigation of the impact of computer based decision aids on decision making strategies. Inform. Systems Res. 2(2) 87–115.

Turoff, M., S. R. Hiltz. 1995. Software design and the future of the virtual classroom. J. Inform. Tech. Teacher Ed. 4(2) 197–215.

Vosniadou, S., W. F. Brewer. 1987. Theories of knowledge restructuring in development. Rev. Ed. Res. 57(1) 51–67.

Vygotsky, L. S. 1929. The problem of the cultural development of the child, II. J. Genetic Psych. 36 414–434.

Webster, J., P. Hackley. 1997. Teaching effectiveness in technology-mediated distance learning. Acad. Management J. 40(6) 1282–1309.

Wilson, J. R., A. Rutherford. 1989. Mental models: Theory and application in human factors. Human Factors 31(6) 617–634.

Gerardine DeSanctis, Associate Editor. This paper was received on October 26, 2001, and was with the authors 9 months for 3 revisions.
