

Computers in Human Behavior 26 (2010) 806–814


Measurement and assessment in computer-supported collaborative learning

Carmen L.Z. Gress b,*, Meghann Fior a, Allyson F. Hadwin a,*, Philip H. Winne b

a Faculty of Education, University of Victoria, Victoria, BC, Canada
b Faculty of Education, Simon Fraser University, Burnaby, BC, Canada V5A 1S6


Article history: Available online 12 September 2007

Keywords: Computer-supported collaborative learning; Measurement; Assessment

0747-5632/$ - see front matter © 2007 Elsevier Ltd. All rights reserved. doi:10.1016/j.chb.2007.05.012

Portions of this paper were presented at the Annual Meeting of the Canadian Society for the Study of Education, York University, Toronto, ON, May 26–30, 2006. Support for this research was provided by grants from the Social Sciences and Humanities Research Council of Canada to A. F. Hadwin (410-2001-1263), P. H. Winne (410-2001-1263), and P. H. Winne (principal investigator) and A. F. Hadwin (co-investigator) (512-2003-1012).

* Corresponding authors. Tel.: +1 604 268 6690; fax: +1 604 291 3203 (C.L.Z. Gress). E-mail addresses: [email protected] (C.L.Z. Gress), [email protected] (A.F. Hadwin).

The overall goal of CSCL research is to design software tools and collaborative environments that facilitate social knowledge construction via a valuable assortment of methodologies, theoretical and operational definitions, and multiple structures [Hadwin, A. F., Gress, C. L. Z., & Page, J. (2006). Toward standards for reporting research: a review of the literature on computer-supported collaborative learning. In Paper presented at the 6th IEEE International Conference on Advanced Learning Technologies, Kerkrade, Netherlands; Lehtinen, E. (2003). Computer-supported collaborative learning: an approach to powerful learning environments. In E. De Corte, L. Verschaffel, N. Entwistle & J. Van Merriënboer (Eds.), Unravelling basic components and dimensions of powerful learning environments (pp. 35–53). Amsterdam, Netherlands: Elsevier]. Various CSCL tools attempt to support constructs associated with effective collaboration, such as awareness tools to support positive social interaction [Carroll, J. M., Neale, D. C., Isenhour, P. L., Rosson, M. B., & McCrickard, D. S. (2003). Notification and awareness: Synchronizing task-oriented collaborative activity. International Journal of Human–Computer Studies 58, 605] and negotiation tools to support group social skills and discussions [Beers, P. J., Boshuizen, H. P. A. E., Kirschner, P. A., & Gijselaers, W. H. (2005). Computer support for knowledge construction in collaborative learning environments. Computers in Human Behavior 21, 623–643], yet few studies developed or used pre-existing measures to evaluate these tools in relation to the above constructs. This paper describes a review of the measures used in CSCL to answer three fundamental questions: (a) What measures are utilized in CSCL research? (b) Do measures examine the effectiveness of attempts to facilitate, support, and sustain CSCL? And (c) When are the measures administered? Our review has six key findings: there is a plethora of self-report yet a paucity of baseline information about collaboration and collaborative activities, findings in the field are dominated by 'after collaboration' measurement, there is little replication and an overreliance on text-based measures, and an insufficient collection of tools and measures for examining processes involved in CSCL.

© 2007 Elsevier Ltd. All rights reserved.

1. Introduction

Computer-supported collaborative learning (CSCL) is one of the more dynamic research directions in educational psychology. Computers and various software programs were incorporated into education to aid the administration and measurement of solo and collaborative learning activities because software can: (a) be individualized in design and use, (b) represent problems more realistically, (c) display each step of a difficult problem solving task, (d) afford group discussion and collaboration across distances, and (e) provide immediate feedback for monitoring and evaluating student progress (Baker & Mayer, 1999; Baker & O'Neil, 2002; Schacter, Herl, Chung, Dennis, & O'Neil, 1999). Not surprisingly, the increased prevalence and benefits of computer use in collaboration have spawned new directions for research in the field of educational psychology and beyond, demonstrated by studies in the learning sciences, computer science, human–computer interaction, instructional psychology, educational technology, and education (Baker & Mayer, 1999; Hadwin, Winne, & Nesbit, 2005; Lehtinen, 2003).

The overall goal of CSCL research is to design software tools and collaborative environments that facilitate social knowledge construction via a valuable assortment of methodologies, theoretical and operational definitions, and multiple structures (Hadwin, Gress, & Page, 2006; Lehtinen, 2003). CSCL environments such as CSILE/Knowledge Forum (Lipponen, 2000; Salovaara & Järvelä, 2003; Scardamalia & Bereiter, 1996) and gStudy (Winne et al., 2006) promote multiple collaborative learning models that vary by task, purpose, tools, access to product, access to peers, and theoretical position (see Gress, Hadwin, Page, & Church, 2010). The development and examination of various innovative and interactive software tools aim to facilitate and support individual and shared construction of knowledge, skills, process products (such as notes, drafts, and collaborative conversations) and final products via cueing, prompting, coaching, and providing immediate feedback on both process and product (Hadwin et al., 2006; Kirschner, Strijbos, Kreijns, & Beers, 2004; Koschmann, 2001; Lehtinen, 2003; Salovaara & Järvelä, 2003). To empirically demonstrate the beneficial nature of these collaborative environments and tools, a focus on measurement tools, methods, and analysis is essential (Puntambekar & Luckin, 2003).

2. Measurement in CSCL

Measurement in CSCL consists of observing, capturing, and summarizing complex individual and group behaviours, from which researchers make reasonable inferences about learning processes and products. Factors affecting measurement in CSCL include individual differences, context, tool use, collaborative activities, and the various theoretical backgrounds of the researchers and instructors. These inferences and interpretations form assessments, which play a central role in guiding and driving student learning toward knowledge acquisition and learning outcomes (Chapman, 2003; Knight & Knight, 1995; Macdonald, 2003). Assessment targets learners' outcomes and infuses instruction with objective information to stimulate deeper knowledge and motivate personal goals in students and educators (Baker & O'Neil, 2002). Measurement and assessment in CSCL can take one of three forms: assessing the individual about the individual, assessing the individual about the group, and assessing the group as a whole.

We are interested in the measurement of individual and shared learning processes, the steps each learner takes and retakes as they progress towards a learning outcome, typically tracked by process products such as notes, drafts, discussions, and traces of learner-to-learner and learner-to-computer interactions. Of particular interest is the measurement of process products in real time and finding a way to summarize and present these products to learners to provide opportunities for them to monitor, evaluate, and adapt their learning during collaborative activities. For example, Puntambekar and Luckin (2003) and Baker and O'Neil (2002) suggested learners gain a better understanding of their learning processes when provided opportunities to reflect on their collaborative learning products, such as notes, conversations, drafts, group management skills, and so on. These reflection opportunities arise when instructors or software programs provide real-time analysis of the artifacts learners produce, such as chat records, drafts, and learning objects, and process statistics, such as traces of learner-software interactions (Hmelo-Silver, 2003). Process measurement and real-time analysis, however, are highly complex and challenging, as they include (a) measuring the cognitive steps taken by the individual and the group in the collaborative process, (b) measuring individual differences in these steps, (c) designing meaningful assessments of the processes, (d) developing analytical methods for understanding and analyzing collaborative processes and products, which includes dealing with a wide variety of interaction types and developing means for automatically and efficiently processing collaborative process data (logs and tracings) and products (demonstrations of learned skills and content) so they can be viewed by learners, educators, and researchers (Lehtinen, 2003; Martínez, Dimitriadis, Rubia, Gómez, & de la Fuente, 2003), and (e) adapting methods for different contexts (Puntambekar & Luckin, 2003).

3. Purpose of this paper

This paper stems from a literature review that identified current methods of measuring and assessing learning processes in CSCL. We wanted our comprehensive review of measurement tools and methods used in CSCL to describe the current state of the literature by answering three fundamental questions: (a) What measures are utilized in CSCL research? (b) Do measures examine the effectiveness of attempts to facilitate, support, and sustain CSCL? And (c) When are the measures administered? For example, collaboration typically includes student-centered small group activities, in which learners develop the necessary skills to share the responsibility of being active, critical, creative co-constructors of learning processes and products. Conditions shown to facilitate and influence collaboration include, for example, positive interdependence, positive social interaction, individual and group accountability, interpersonal and group social skills, and group processing (Kreijns, Kirschner, & Jochems, 2003). Various CSCL tools attempt to support these constructs, such as awareness tools to support positive social interaction (Carroll, Neale, Isenhour, Rosson, & McCrickard, 2003) and negotiation tools to support group social skills and discussions (Beers, Boshuizen, Kirschner, & Gijselaers, 2005), yet few studies evaluate these tools in relation to the above conditions. They focus instead on comparing collaborative products or investigating tool usability or tool effects on collaborative products.

This paper describes the findings of our review. First, we answer the three questions stated above. Second, framed by our coding to meet the first objective, we highlight key findings and discuss potential directions for CSCL research. Finally, we explore how future research in CSCL and the Learning Kit project might contribute to developing a systematic and thorough approach for measuring collaborative processes and products using gStudy (Winne, Hadwin, & Gress, 2010; Winne et al., 2006).

4. Method

We conducted an extensive literature search for all articles related to CSCL from January 1999 to September 2006 in five academic databases: Academic Search Elite, Computer Science Index (which includes IEEE and ED/ITLib, formerly the AACE Digital Library), ERIC, PsycArticles, and PsycInfo. Search terms included variations and combinations of computer, collaboration, and learning. After the search, we focused on empirical studies, including case studies, as long as the focus of the study was collaboration among learners rather than software usability (23 usability-focused studies were excluded), resulting in 186 articles. We acknowledge that some studies may be missing from this analysis, but we felt 186 articles should provide a strong representation of the field.

Initially we critically reviewed and coded each article to delineate contextual aspects of the literature, clarifying five broad aspects of CSCL research (see Gress et al., 2010): (1) the focus of the article (for example, was it CSCL, computer-supported collaborative work, computer-supported problem solving, or computer-mediated communication); (2) whether the technology proposed was designed to provide a CSCL environment or whether the technology was an add-on, such as email or stand-alone chat; (3) models of collaboration, defining mode and purpose of communication, level of knowledge construction, group membership, and individual access to the group project; (4) collaborative tools; and (5) collaborative support. Discrepancies were resolved through discussion to consensus between researchers.

For this paper, we added a sixth coding category, research methods and design. We were interested in three main attributes of existing CSCL research: constructs of interest, measures and methods, and measurement timing. To aid us in this task, we coded the following information.

4.1. Research question

In this category, we identified the research questions or statements provided by the author(s): for example, "this paper describes a case study that investigated students' sociocognitive processes in a multimedia-based science learning task. By focusing on students' discursive, cognitive and collaborative activity on a micro analytic level, the study describes the nature of students' strategic activity and social interaction whilst handling and processing information from a multimedia CD-ROM science encyclopaedia" (Kumpulainen, Salovaara, & Mutanen, 2001, p. 481).

4.2. Constructs of interest

In this category we identified the particular constructs of interest, for example, anonymity in chat (Barreto & Ellemers, 2002), awareness (Carroll et al., 2003), knowledge construction (Beers et al., 2005), and efficacy (King, 2003).

4.3. Measures, methods, and timing

First, we identified all the various measures, such as questionnaires and interviews, and methods, such as observations, discourse analysis, or trace data, used in collaborative research. We then grouped these measures and methods according to assessment timing (before, during, or after collaborative task[s]). Within the three categories of measurement timing, we divided the measures and methods into seven main categories of measurement identified in the first step: self-report, interview, observation, trace data, discussion/dialogue, performance/product, and feedback.

Table 1. Coding framework: categories, measures, and examples

Self-report
  Measures: Questionnaires, surveys, and inventories.
  Examples: Community Classroom Index (Rovai, 2002), Learning Styles by Honey and Mumford (1986, as cited by Shaw and Marlow (1999)), and the College and University Classroom Environment Inventory (Joiner et al., 2002).

Interviews
  Measures: All discussions between researchers, assistants, and participants.
  Examples: "the post-test interview asked the subjects to rate their experience with the CSMS program on a 1–10 scale including ease of use and the nature and quality of recordings provided" (Riley et al., 2005, p. 1013).

Observations
  Measures: All methods of visually examining and documenting actions and utterances of participants, either directly or by videotape recording.
  Examples: "The discussion was videotaped and transcribed as the students tried to understand a case of pernicious anemia, a blood disorder that causes nervous system problems. The entire transcript was coded for the types of questions and statements in the discourse" (Hmelo-Silver, 2003, p. 408).

Process data
  Measures: Estimates of time, frequency, and sequence, as well as trace data which examined participants' actions via the computer during the collaborative tasks.
  Examples: Kester and Paas's (2005) examination of time on task and path through a physics simulation, and Joiner and Issroff's (2003) use of trace diagrams to analyze joint navigation in a virtual environment.

Discussions and dialogues
  Measures: Engaged purposeful conversation and/or verbal expressions coded as either asynchronous or synchronous communication.
  Examples: True asynchronous communication tools were text-based email (van der Meij et al., 2005), discussion forums (Scardamalia & Bereiter, 1996), and text-based chat rooms (Wang et al., 2000). Asynchronous tools in synchronous functions included chat tools similar to AOL or IM where the context appears to be real time but actually works like fast email text submissions (Linder & Rochon, 2003). True synchronous communication tools included chat tools like ICQ, where a user can see all changes made to the system at all times (Van Der Puil et al., 2004).

Performance and products
  Measures: All output produced by participants' collaborative activities.
  Examples: Grades (Strijbos et al., 2004), concept maps (Stoyanov & Kommers, 1999), notes (Lipponen et al., 2003), and essays (Erkens et al., 2005).

Feedback and/or prediction
  Measures: (1) Feedback from participants to researchers on CSCL tools, (2) individual feedback from the teacher, teacher assistant, or researcher on individual or group work, (3) participants' feedback on collaborative group members' actions and performance in the CSCL environment, and (4) prediction by instructors on the quality of after collaboration products.
  Examples: "user feedback was obtained in several ways including: (1) users emailing their comments directly from within the application (2) polling users informally and (3) eliciting comments through regular meetings with several of our user groups" (Lee & Girgensohn, 2002, p. 82).

The purpose of delineating the measures and methods within the framework of assessment timing was to highlight what measures are used and when, and what constructs have been overlooked.

5. Results

As stated above, we conducted a review of the CSCL literature to answer three fundamental questions: (a) What measures are utilized in CSCL research? (b) Do measures examine the effectiveness of attempts to facilitate, support, and sustain CSCL? And (c) When are the measures administered? To answer these questions, we focused on identifying the measures used to assess collaborative constructs in CSCL, when they measure these constructs, and the constructs or information of interest.

5.1. What measures are utilized in CSCL research and when?

We evaluated 186 empirical articles and found these studies incorporated 340 measures (and methods) of collaborative constructs (see Table 1). The majority of these measures were self-report questionnaires (33%) and products of collaboration (19%), assessing individual differences associated with collaboration as well as collaborative learning and knowledge construction. The numbers of measures used to assess process data and discussions and dialogues were approximately equal at 12% each, followed by interview data (10%), observations (9%), and then prediction and/or feedback (5%). These numbers suggest much of the information known about collaborative processes and products comes from the first two forms of measurement available in CSCL: self-report measures that assess the individual about the individual, and the individual about the group. This is not surprising considering the ease of administration, low cost, and flexible nature of self-report measures.

Of the three approximate times during which collaboration was assessed (before, during, and after), our review reveals the majority of measures used to assess some aspect of collaboration were administered after collaborative activity (51%), followed by during the activity (35%), while only 14% of the studies gathered baseline data on the collaborative construct of interest before the actual activity. This suggests the lion's share of information gained from CSCL studies is measured after collaborative activity, i.e. after potential changes to an individual's learning processes and skills have occurred, and is based on the final products of collaboration rather than the process.


Table 3. Category, type, and frequency of constructs assessed before collaboration in 186 studies

Category                         Construct                                           n
Individual difference measures   Attitude towards science                            1
                                 Attitudes toward collaborative learning             4
                                 Collaborative experience                            5
                                 Collaborative skills                                2
                                 Computer efficacy and/or literacy                   8
                                 Epistemological beliefs                             1
                                 Inductive thinking                                  1
                                 Instructional plans or beliefs                      2
                                 Leadership abilities                                1
                                 Learning skills, processes, styles, or approaches   4
                                 Metacognitive strategies                            1
                                 Motivation                                          3
                                 Social networks from prior collaboration            1
Standardized test                Bielefeld-screening                                 1
                                 Woodcock Johnson                                    1
Baseline information             GPA                                                 3
                                 Prior knowledge                                     11
                                 Student writing                                     1
                                 Student reflections and predictions                 1
                                 Teacher predictions of students                     1


A more detailed look at the review data uncovered some predictable patterns. As expected, some measures are better suited to assessments at certain times than others, such as self-report questionnaires for before and after activities and process measurements or observations during collaborative activities. As we can see in Table 2, the majority of studies assessing collaboration in the before collaboration time slot used self-report questionnaires (74% of before measures, 10% of total measures), followed by performance indicators (11% of before, 1.5% of total measures), interviews (6% of before, 1% of total), with predictions or initial CSCL feedback and coded discussion data tied for last (4% of before, .6% of total measures). Studies measuring collaborative constructs during collaboration did so mostly via discussion or dialogues and process data such as timing, frequency, and/or traces (each approximately 32% of during measures, 11% of total), followed by observations (video or live) (22.5% of during measures, 8% of total), self-report (5% of during, 2% of total), interviews (4% of during, 1.5% of total), performance (3% of during, 1% of total), and feedback (1% of during, .3% of total).

As stated previously, most studies incorporated collaborative measurement after collaborative activity. These consisted mainly of self-report questionnaires (41% of after measures, 21% of total), followed by performance and product (33% of after, 17% of total), interview (15% of after, 8% of total), feedback (9% of after, 4% of total), and observations of after collaborative behaviours and skills and discussion data (each 1% of after, .6% of total).
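For readers who want to recompute these proportions, the two percentage bases used throughout this section can be reproduced directly from the raw counts in Table 2: each measure type is expressed both as a share of the measures administered in a given time slot and as a share of all 340 measures. Below is a minimal sketch in Python using only the counts reported in Table 2; the dictionary layout is ours, not part of the original coding scheme.

```python
# Counts of measures by type and timing, taken from the n columns of Table 2.
counts = {
    "self-report":         {"before": 35, "during": 6,  "after": 71},
    "interview":           {"before": 3,  "during": 5,  "after": 26},
    "observations":        {"before": 0,  "during": 27, "after": 2},
    "process data":        {"before": 0,  "during": 38, "after": 0},
    "discussion":          {"before": 2,  "during": 39, "after": 2},
    "performance/product": {"before": 5,  "during": 4,  "after": 57},
    "prediction/feedback": {"before": 2,  "during": 1,  "after": 15},
}

grand_total = sum(sum(by_timing.values()) for by_timing in counts.values())  # 340

for timing in ("before", "during", "after"):
    slot_total = sum(counts[m][timing] for m in counts)
    for measure, by_timing in counts.items():
        n = by_timing[timing]
        pct_of_slot = 100 * n / slot_total    # e.g. 35 / 47  = 74.5% of "before" measures
        pct_of_total = 100 * n / grand_total  # e.g. 35 / 340 = 10.3% of all measures
        print(f"{timing:>6}  {measure:<20} n={n:<3} "
              f"{pct_of_slot:5.1f}% of {timing}, {pct_of_total:4.1f}% of total")
```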

5.2. Do measures examine the effectiveness of facilitating, supporting, and sustaining CSCL?

5.2.1. Before collaboration

Our review revealed the measures can be collapsed into three categories: individual difference measures, standardized testing, and baseline information. As demonstrated in Table 3, individual difference measures included measures of individual and collaborative learning attitudes and skills, epistemological beliefs, computer skills and experience, motivation, instructional planning and beliefs, and so on. Standardized testing included the occasional pre-screening for learning disabilities or an achievement test. Baseline information included pre-tests on content or experience, grade point averages, examples of work pre-intervention, and predictions by learners and instructors on participant behaviours during collaboration.

Table 2. Frequency of measures used in 186 CSCL studies categorized by measurement timing

Measure type           Before               During               After                Total
                       n    %      % N      n    %      % N      n    %      % N      n     % N
Self-report            35   74.47  10.29    6    5.00   1.76     71   41.04  20.88    112   32.94
Interview              3    6.38   0.88     5    4.17   1.47     26   15.03  7.65     34    10.00
Observations           0    0.00   0.00     27   22.50  7.94     2    1.16   0.59     29    8.53
Process data           0    0.00   0.00     38   31.67  11.18    0    0.00   0.00     38    11.18
Discussion             2    4.26   0.59     39   32.50  11.47    2    1.16   0.59     43    12.65
Performance/product    5    10.64  1.47     4    3.33   1.18     57   32.95  16.76    66    19.41
Prediction/feedback    2    4.26   0.59     1    0.83   0.29     15   8.67   4.41     18    5.29
Total n                47   100    13.82    120  100    35.29    173  100    50.88    340   100

Note: % = percentage of measures administered within that timing; % N = percentage of all 340 measures.

We were surprised by the low number of before collaboration measures. Less than a fifth of the 186 studies included any before collaboration baseline measures other than basic descriptive information. Across the 186 studies there were only 12 instances of measuring collaborative constructs: attitudes toward collaboration, skills for collaboration, prior experience, and social networks. This indicates a large gap waiting to be addressed. For example, we did not find a single study that reported on students' readiness to collaborate, although a few did lead the students in group discussions on collaboration prior to activities (e.g. Fischer, Bruhn, Grasel, & Mandl, 2002). Shumar and Renninger (2002) point out that researchers and laypersons typically consider non-contributing individuals in a virtual environment as "lurkers" who are "shirking their social responsibility" (p. 6). Instead, the authors suggest, some of those individuals may not be ready to collaborate. The current literature, however, does not suggest a way to assess the state of collaborative readiness. Nardi (2005) begins to address this issue in her work on communicative readiness, a state in which "fruitful communication is likely" (p. 91).

5.2.2. During collaboration

The variety of constructs assessed during collaboration was as extensive as the disciplines and authors conducting the studies. Because the measures used were mostly observations, coding of discussions, and process-oriented data, the constructs of interest were at times less defined (due to the nature of qualitatively coding process data) than would typically be seen with self-report measures. For some studies it was difficult to estimate whether they were investigating similar constructs under different labels. Due to this, we do not present the individual frequencies of each investigated construct. Instead we state that each of these constructs appears to be investigated between one and five times.


Table 4. Category, type, and frequency of constructs assessed after collaboration in 186 studies

Category                         Construct                                           n
Individual difference measures   Anxiety (general)                                   1
                                 Attitude towards science                            1
                                 Attitudes toward collaborative learning             6
                                 Classroom community                                 2
                                 Collaborative experience                            18
                                 Computer efficacy and/or literacy                   9
                                 Epistemological beliefs                             2
                                 Group efficacy, satisfaction, teamwork              7
                                 Instructional plans or beliefs                      2
                                 Interpersonal attraction in collaboration           1
                                 Leadership abilities                                1
                                 Learning skills, processes, styles, or approaches   15
                                 Mental effort exerted in collaboration              2
                                 Metacognitive strategies                            1
                                 Motivation                                          3
                                 Social influences on collaboration                  1
                                 Social networks after collaboration                 1
Standardized test                Cognitive tests (general)                           1
                                 Woodcock Johnson                                    1
After collaboration              After collaboration knowledge                       18
                                 Assignments                                         37
                                 GPA                                                 9
                                 Student feedback to instructors/researchers         2
                                 Student reflections                                 3
                                 Teacher feedback to students                        7
                                 Teacher reflections                                 5
                                 Thoughts on software                                20


Self-report measures incorporated during collaboration assessed daily journal logs of all computer activities to compare across conditions (e.g., Gay, Sturgill, Martin, & Huttenlocher, 1999), collective efficacy (e.g., Gonzalez, Burke, Santuzzi, & Bradley, 2003), satisfaction in group settings (e.g., Xie & Salvendy, 2003), student opinions of contributions to CSCL (via the experience sampling method) (e.g., Leinonen, 2003), and opinions of the tools and communication in collaboration (e.g., Scott, Mandryk, & Inkpen, 2003). The instances of product measures during collaboration (and feedback given to students during collaboration) included student written opinions of their progress (e.g., Neo, 2003) and various versions of notes and drafts leading to final products (e.g., Leinonen, 2003).

The 71 instances of interview, observation, and dialogue/discussion measures (combined, as it was not always clear which measure was used to assess which construct) collected information on four broad (at times overlapping) collaborative constructs: what makes collaboration successful, collaborative tools, social interaction and communication, and knowledge construction/skill acquisition.

5.2.2.1. Investigating successful collaboration. Within this category constructs of interest included the levels of: control of interaction, reflecting, evaluating, explaining/defining/example providing, sharing resources, providing evidence/justifications/references to literature, turn taking, monitoring, questions for clarification, on and off task behaviours, coordination of working on a project, role behaviours, interference, effects of various scaffolds, awareness of other collaborators, quality, efficiency, motivation, and productivity (e.g., Baker & Mayer, 1999; Barile & Durso, 2002; Campbell, 2004; Fletcher, 2001; Hakkarainen, 1999; Hmelo-Silver, 2003; Kim, Kolko, & Greer, 2002; Kraut, Fussell, & Siegel, 2003; Pata, Lehtinen, & Sarapuu, 2006; Salovaara, Salo, Rahikainen, Lipponen, & Jarvela, 2001; Soller, 2004).

5.2.2.2. Collaborative tool use. Constructs of interest included obstacles to software tool use, construction of joint workspace, tool use to increase learning, expectations of tools, and the effect of physical orientation when designing tools collaboratively (e.g., Carletta, Anderson, & McEwan, 2000; Joiner & Issroff, 2003).

5.2.2.3. Social interactions and communication. For studies focused on social interactions and communication within collaborative settings, the constructs of interest included patterns of interactions, equality of participation, level of cooperation, dynamics, social skills, engagement, decision making, divergence of perspectives, aspects of teamwork, and solidarity (e.g., Guzdial & Turns, 2000; Hakkarainen, 1999; Lapadat, 2004; Lingnau, Hoppe, & Mannhaupt, 2003; Puntambekar, 2006; Salovaara et al., 2001; Suthers, Hundhausen, & Girardeau, 2003).

5.2.2.4. Knowledge construction and skill acquisition. These studies focused on knowledge construction, thinking skills, inquiry learning, self-regulated learning, metacognition, knowledge representation, cognitive processes, learning strategies, goals, strategies for collaboration, and student interpretation of contributions to collaboration (e.g., Hmelo-Silver, 2003; Järvelä & Salovaara, 2004; Järvenoja & Järvelä, 2005; Salovaara & Järvelä, 2003).

Among the 38 instances of process statistic methods, studies investigated levels of participation and engagement, interaction patterns between collaborators and idea units, productivity, time constructing learning objects, user actions, search time, decision time, time on task, sharing of learning objects, sequence of tool use, navigation of tools, and social networks (e.g., Chau, Zeng, Chen, Huang, & Hendriawan, 2003; Dewiyanti, Brand-Gruwel, & Jochems, 2005; Hakkarainen, 2003; Joiner & Issroff, 2003; Kester, Kirschner, & van Merriënboer, 2005; Lipponen, Rahikainen, Lallimo, & Hakkarainen, 2003; Nokelainen, Miettinen, Kurhila, Floreen, & Tirri, 2005; Rummel & Spada, 2005; Xie & Salvendy, 2003). This type of information was gathered from a variety of statistics such as (a) total time in a role, or to complete a task, make a single or a series of decisions, search for information, annotate or read, (b) frequencies of postings, emails, visited web pages, and annotations, and (c) sequence information on emails, computer movements, and interactions in the software.
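To illustrate how such process statistics can be derived from logged activity, the sketch below computes time on task, action frequencies, and a simple action sequence from a toy event log. The log format, field names, and events are hypothetical; they are only meant to mirror the kinds of statistics listed above, not the logging scheme of any particular CSCL system reviewed here.

```python
from collections import Counter
from datetime import datetime

# Hypothetical trace log: (timestamp, learner, action) tuples.
log = [
    ("2006-03-01 10:00:05", "ann", "open_document"),
    ("2006-03-01 10:02:40", "ann", "post_chat"),
    ("2006-03-01 10:03:10", "ben", "post_chat"),
    ("2006-03-01 10:07:55", "ann", "create_note"),
    ("2006-03-01 10:09:30", "ben", "open_document"),
]

events = [(datetime.strptime(t, "%Y-%m-%d %H:%M:%S"), who, act) for t, who, act in log]

# (a) Total time on task, estimated as the span from first to last logged event.
time_on_task = events[-1][0] - events[0][0]

# (b) Frequencies of each action type per learner (e.g., chat postings).
frequencies = Counter((who, act) for _, who, act in events)

# (c) Sequence of actions in temporal order, for later sequence analysis.
sequence = [act for _, _, act in sorted(events)]

print("time on task:", time_on_task)
print("ann's chat postings:", frequencies[("ann", "post_chat")])
print("action sequence:", " -> ".join(sequence))
```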

5.3. After collaboration

As demonstrated in Table 4, measurement and analysis after collaboration typically examined (a) changes in perceptions or attitudes towards collaborative learning, CSCL software, group efficacy, or learning in general, (b) comparison between products created by an individual versus a group, (c) comparisons between products across various supports and scaffolded contexts (e.g. role taking, problem solving, or inquiry learning), and (d) comparison between products across various CSCL tools and environments.

Studies using after collaboration measures collected and reported on three categories of information: individual differences and individual assessment of group processes, standardized testing, and after collaboration products, including assignments and feedback. Individual difference measures and individual assessment of group processes, assessed via self-report questionnaires and interviews, were similar to pre-intervention measures (see Table 4).

We noticed an increased focus on collaboration, including measures of task cohesion, collective efficacy, group effectiveness, and team disagreements. After collaboration products included an assortment of assignments, testing, and feedback. Most common were collaborative and individual assignments (37 studies), followed by feedback on CSCL software, content knowledge, GPA, and other feedback including feedback from the teachers to the students, student feedback to the teachers and researchers on collaborative activities, and research design (see Table 4).

6. Discussion

The purpose of this paper was to review and describe current methods of measuring CSCL. Our goal was twofold: find established measures that provide immediate and contextually relevant feedback to learners on their collaborative processes, and then research and adapt those measures for use within our CSCL environment. Based on theories of self- and shared regulation (Hadwin, Oshige, Gress, & Winne, 2010; Winne & Hadwin, 1998; Zimmerman, 2000), we believe learners need to be able to monitor and evaluate their solo and collaborative processes and products during actual engagement in learning. To accomplish this, they need a way to 'see' their processes and evaluate their products, whether those products are learning objects such as notes, search histories, and chats, or workings toward final project goals such as drafts of a collaborative paper. For example, learners need to be able to (a) identify different aspects of their learning processes, such as preparing to collaborate by reading applicable material and negotiating knowledge construction with peers, (b) monitor those processes, which requires a way to record without increasing the learners' to-do list or cognitive load (i.e., learners should be focused on learning), (c) evaluate their processes, such as determining if the interactions are fruitful, (d) incorporate feedback generated personally (by reviewing their own behaviours) and from others, and then (e) make adaptations to their behaviours; all while engaged in the collaborative processes.

6.1. Key findings

Our review of the measures currently used in CSCL suggests a number of key findings.

6.1.1. A plethora of self-report

Much of the information known about collaborative processes and products in general (i.e. regardless of measurement timing) is gathered via the first two forms of measurement available in CSCL: self-report measures that assess the individual about the individual, and the individual about the group.

6.1.2. Findings in the field of CSCL are dominated by 'after collaboration' measurement

Of the three approximate times during which collaboration was assessed (before, during, and after), our review revealed just over half of the instances of measuring CSCL were administered after collaborative activity, suggesting the majority of information gained from CSCL studies (a) is measured after collaborative activity, in other words, after potential changes to an individual's learning processes and skills have occurred, and (b) is based on the product, rather than the process, of collaboration.

6.1.3. A paucity of baseline information about collaboration and collaborative processes prior to CSCL

There was a low number of before collaboration measures, as less than a fifth of the 186 studies included any before collaboration baseline measures other than basic descriptive information. More importantly, there were only 12 instances of collaborative construct measurement before actual engagement in collaborative activity.

6.1.4. A lack of replication in CSCL studies

An extensive range of constructs was assessed in CSCL studies, perhaps as extensive as the numbers of disciplines and researchers involved. Although this highlights the many constructs associated with and important to CSCL, it also indicates a lack of replication and examination of reliability across various contexts and CSCL models.

6.1.5. Insufficient tools and measures for examining processes involved in CSCL

Because the measures used were mostly observations, coding of discussions, and process-oriented data, the constructs of interest were at times less defined (perhaps due to the nature of qualitatively coding process data), and coding methods and frameworks were not always shared, understandably considering the lengthy detail required for such evaluations. Unfortunately, this leads to difficulties in determining if studies were investigating similar constructs under different labels. What would be helpful is a standard metric or published set of descriptions that are transferable across studies.

6.1.6. There is an overreliance on conventional text-based measures rather than mining the potential of CSCL technologies to assess dynamic fine-grained details

As demonstrated in Table 1, the majority of measurement methods were self-report, observations, and content analysis of discussions. This highlights Hathorn and Ingram's (2002) concern that the emphasis of CSCL work has been on developing, describing, and experimenting with CSCL in relation to individual outcomes, rather than exploring and examining differences in the collaborative process. For example, questionnaires and interviews generate rank or ordinal data summarizing learners' awareness, perceptions, recollections, and biases about learning processes. This information is informative, yet these methods do not necessarily provide complete or accurate views of actual learning processes (Winne & Jamieson-Noel, 2002). An evaluation of discussions and dialogues, via methods such as content analysis or social network analysis (SNA), reveals thematic dialogues and displays patterns of collaboration in the form of sociograms or graphs that represent density and centrality of communicative acts, describing the sequences and articulations of learner collaboration (Martínez et al., 2003). Researchers can also observe and code interactions between learners and interactions between learners and the computer and then graphically represent these behavioural logs, as demonstrated by the Chronologically Ordered Dialogue and Features Used method (CORDFU; Luckin, 2003). A drawback of both methods, however, is that SNA and CORDFU are analysis procedures that occur after the fact. This provides a wealth of information to future collaborative studies, methods, and scaffolds but does not inform current collaborators. If we want learners to successfully self-regulate their solo and collaborative activities, we need to design activities, CSCL tools, and analysis methods that provide feedback during the process in addition to feedback after (Butler & Winne, 1995; Hadwin et al., 2005).
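To make the SNA step concrete, the sketch below builds a small sociogram from coded communicative acts and computes the density and degree centrality measures mentioned above. The interaction data and participant names are invented for illustration, and the graph metrics rely on the networkx library rather than on any tool cited in this review.

```python
import networkx as nx

# Hypothetical coded communicative acts: (sender, receiver) pairs from a chat transcript.
acts = [
    ("ann", "ben"), ("ben", "ann"), ("ann", "cam"),
    ("cam", "ann"), ("ben", "cam"), ("ann", "ben"),
]

# Build a directed sociogram; edge weights count repeated acts between the same pair.
G = nx.DiGraph()
for sender, receiver in acts:
    if G.has_edge(sender, receiver):
        G[sender][receiver]["weight"] += 1
    else:
        G.add_edge(sender, receiver, weight=1)

# Density: proportion of possible directed ties that were actually observed.
print("density:", nx.density(G))

# Degree centrality: how connected each learner is within the group.
for learner, centrality in nx.degree_centrality(G).items():
    print(learner, round(centrality, 2))
```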

6.2. Potential directions for measurement in CSCL

Kreijns et al. (2003) describe constructs they consider key to successful collaboration in computer-supported environments: interdependence, interaction, individual accountability, interpersonal and small group skills, and active group processing. We believe appropriate measurement and research on these constructs requires methods that assess process in real time. As demonstrated above, the majority of measurement of learning processes in CSCL comes from discussions and observations. To translate the knowledge found in discussions and dialogues, we need methodologies that provide the same fine-grained detailed information we receive from content analysis but in real time, to provide feedback to learners on their solo and collaborative learning processes while engaged in collaborative activities.

More recent methods may provide opportunities for learners to reflect during collaboration about their own collaborative activities via real-time data capturing and data analysis of detailed and accurate recordings or traces of complex cognitive processes (Chiu, Wu, & Huang, 2000; Hadwin et al., 2005; Jang, Steinfield, & Pfaff, 2002; Lipponen et al., 2003; Winne et al., 2006). Trace data can be collected unobtrusively so it does not interrupt cognitive processing like a think-aloud can, nor does it depend on learners' memories or perceptions (Gress & Winne, 2007; Winne, Gupta, & Nesbit, 1994). Therefore, bolstered by advances in computer technology, CSCL software can now trace, in addition to facilitate, collaboration (Hadwin et al., 2005).

The future of computer-based learning for measuring learning processes lies in the development of measures that provide real-time feedback and therefore enhance student learning. In the following sections, we discuss gStudy (see Winne et al., 2010 for a full review) as an example of how a CSCL environment can structure and scaffold learning while providing opportunities for real-time feedback and encouraging student reflection, evaluation, and adaptation.

Footnote 1: True synchronous collaboration requires each participant to see each character typed and each move made during collaboration. See Gress et al. (2010) for more information on delineating synchronous and asynchronous tools.

6.3. The Learning Kit project: gStudy as a CSCL environment

In the Learning Kit project (Winne et al., 2006), our goal is to develop computer-supported learning environments and tools, such as gStudy, to facilitate individual and collaborative regulation of learning. Collaboration builds on solo learning because, before a group interacts and while they collaborate, solo learning tactics provide a foundation for contributing a significant proportion of raw material – information – to the collaborative enterprise. Thus, collaboration tools in gStudy are designed to meet three primary goals: (a) to help students enhance their individual self-regulation of learning as they participate in collaborative learning activities; (b) to boost each group member's learning, development, and testing of strategies that help them collectively to collaborate better, that is, to promote shared regulation of learning; and (c) to provide methods for systematically researching the effectiveness of these tools in supporting productive individual and shared self-regulation, as well as the group's co-regulation of their collaborative processes.

An important feature of the gStudy software is the trace data it unobtrusively collects: detailed information about students' studying actions, gathered by logging the time and context of every learning event. Traces recorded in gStudy are artifacts of tactics and strategies in a log of fine-grained, temporally identified data that can advance research about how learners go about learning. Collecting traces of student activity in computer-supported learning environments may provide information about the dynamic, situated nature of learning, as well as individual differences in engagement (Winne et al., 1994).

gStudy provides numerous ways to trace collaborative exchange and dialogue. In studying appropriation of self-regulated learning (SRL) skills, discourse analysis has been the main methodology. Trace data of the dialogues that occur in chat tools and coaching will provide data about incremental change across time, both in the content of the interactions and in users' reliance on scaffolded SRL tools within gStudy, because trace data log the conversation as well as the context of the learner. From trace data, we know when and which windows within gStudy are open, and whether learners change views, add information, create new links, or browse their documents. Therefore we know the state of the learners' environment before, during, and after collaboration.
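As a rough illustration of the kind of log record such tracing implies, the sketch below stores each learning event with its time and context and then summarizes transitions between actions. This is not gStudy's actual data format; the field names, window labels, and events are assumptions made only for illustration.

```python
from dataclasses import dataclass
from collections import Counter

# Hypothetical trace record: one row per logged learning event.
@dataclass
class TraceEvent:
    timestamp: float   # seconds since the session started
    learner: str       # who acted
    window: str        # which window was open (assumed label)
    action: str        # e.g. "open_view", "add_note", "create_link", "send_chat"

events = [
    TraceEvent(0.0,  "ann", "document", "open_view"),
    TraceEvent(12.5, "ann", "notes",    "add_note"),
    TraceEvent(30.2, "ann", "chat",     "send_chat"),
    TraceEvent(41.8, "ann", "notes",    "create_link"),
]

# Summarize how the learner moved between actions: counts of (previous, next) pairs.
transitions = Counter((a.action, b.action) for a, b in zip(events, events[1:]))
for (prev, nxt), count in transitions.items():
    print(f"{prev} -> {nxt}: {count}")
```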

While conventional analysis methods have contributed to an understanding of mechanisms involved in a basic shift in the appropriation of SRL, gStudy provides rich and sophisticated ways to collect data about "transaction" between students and context (Corno, 2006). For example, collaboration in gStudy is facilitated in asynchronous and near synchronous (see footnote 1) modes. In near-synchronous mode, students engage in live text exchanges using an innovative design for a chat tool (gChat). Our chat tool moves seamlessly between (a) scaffolding each collaborator's tactics and roles in ways research indicates make collaboration more effective and (b) integrating with our coach tools to guide students to explore and share in regulating collaborative activities to make them more effective (see Morris, Church, Hadwin, Gress, & Winne, 2010). In addition, our chat tool allows learners to 'drag and drop' products made in gStudy, such as notes, glossary items, and study tactics. In asynchronous mode, collaborators have multiple methods for sharing learning and its products that are created along the timeline of a collaborative project. For example, teammates can asynchronously share one kit and all its diverse, richly linked contents (notes, documents, etc.). Or, learners can share their individual kits or specific objects from their kits to a group kit, the latter providing for a highly focused collaboration in contrast to the former "complete package" that includes all collaborators' contributions. Additionally, learners can study collaboratively outside of designated collaborative activities by sharing learning objects via kit exchange, shared kits, exporting and importing, or chat.

The gStudy collaborative environment affords learners the opportunity to give or receive modeling and feedback on their tasks. For example, in the Self-Regulation Empowerment Program (SREP) developed by Cleary and Zimmerman (2004), a tutor demonstrated and verbalized how he or she used SRL skills in studying. In gStudy, the tutor's demonstration can be shown via the open chat, or the tutor can verbalize their skills through the chat tool to provide modeling. Further, if these SRL skills were pre-stocked in gStudy, the guided chat would function as a model for the learner. In giving feedback to the student, the open chat tool allows a teacher to give feedback on the contents that the students send.

7. Conclusions

Currently the field of CSCL is dominated by analysis of processes and products to inform future collaborative activities instead of supporting current ones. We suggest tools and functions in gStudy offer new directions for research on collaborative learning. For example, via gStudy, learners can share one kit among group members, facilitating collective regulation of learning, or share multiple objects between kits to facilitate interdependence and self-regulation. gStudy traces allow researchers to examine collaboration unfolding over time as groups collectively transition forward and backward through phases of self-regulated learning. Logfile traces allow researchers to track transitions between students as they add new objects, update old contents, and review what is currently within the kit. Rather than a log representing one student's SRL, it will represent the dynamic interactions between students and the environment (i.e., gStudy), as well as among the group members. This will provide ways to summarize and present learning processes and products to learners, providing opportunities for them to monitor, evaluate, and adapt their learning during solo and collaborative activities.


Acknowledgement

Support for this research was provided by a SSHRC Standard Research Grant to Allyson F. Hadwin and a SSHRC-INE grant to Phillip H. Winne (PI) [Hadwin, Co-Investigator] from the Social Sciences and Humanities Research Council of Canada (410-2001-1263; 512-2003-1012).

References

Baker, E. L., & Mayer, R. E. (1999). Computer-based assessment of problem solving.Computers in Human Behavior, 15, 269–282.

Baker, E. L., & O’Neil, H. F. J. (2002). Measuring problem solving in computerenvironments: Current and future states. Computers in Human Behavior, 18,609–622.

Barile, A. L., & Durso, F. T. (2002). Computer mediated communication incollaborative writing. Computers in Human Behavior, 18, 173–190.

Barreto, M., & Ellemers, N. (2002). The impact of anonymity and group identificationon progroup behavior in computer-mediated groups. Small Group Research, 33,590.

Beers, P. J., Boshuizen, H. P. A. E., Kirschner, P. A., & Gijselaers, W. H. (2005).Computer support for knowledge construction in collaborative learningenvironments. Computers in Human Behavior, 21, 623–643.

Butler, D. L., & Winne, P. H. (1995). Feedback and self-regulated learning: Atheoretical synthesis. Review of Educational Research, 65, 245–281.

Campbell, J. D. (2004). Interaction in collaborative computer supported diagramdevelopment. Computers in Human Behavior, 20, 289–310.

Carletta, J., Anderson, A. H., & McEwan, R. (2000). The effects of multimediacommunication technology on non-collocated teams: A case study. Ergonomics,43, 1237–1251.

Carroll, J. M., Neale, D. C., Isenhour, P. L., Rosson, M. B., & McCrickard, D. S. (2003).Notification and awareness: Synchronizing task-oriented collaborative activity.International Journal of Human–Computer Studies, 58, 605.

Chapman, E. (2003). Alternative approaches to assessing student engagement rates.Practical Assessment, Research & Evaluation, 8.

Chau, M., Zeng, D., Chen, H., Huang, M., & Hendriawan, D. (2003). Design andevaluation of a multi-agent collaborative web mining system. Decision SupportSystems, 35, 167.

Chiu, C. H., Wu, W. S., & Huang, C. C. (2000). Collaborative concept mappingprocesses mediated by computer. Paper presented at the WebNet 2000 WorldConference on the WWW and Internet Proceedings, Taiwan.

Corno, L. (2006). Socially constructed self-regulated learning: Where social and selfmeet in the strategic regulation of learning [discussant’s comments]. Presentationat the American Educational Research Association, San Francisco, CA.

Dewiyanti, S., Brand-Gruwel, S., & Jochems, W. (2005). Applying reflection andmoderation in an asynchronous computer-supported collaborative learningenvironment in campus-based higher education. British Journal of EducationalTechnology, 36, 673–676.

Erkens, G., Jaspers, J., Prangsma, M., & Kanselaar, G. (2005). Coordination processesin computer supported collaborative writing. Computers in Human Behavior, 21,463–486.

Fischer, F., Bruhn, J., Grasel, C., & Mandl, H. (2002). Fostering collaborativeknowledge construction with visualization tools. Learning and Instruction, 12,213–232.

Fletcher, D. C. (2001). Second graders decide when to use electronic editing tools.Information Technology in Childhood Education Annual, 155–174.

Gay, G., Sturgill, A., Martin, W., & Huttenlocher, D. (1999). Document-centered peercollaborations: An exploration of the educational uses of networkedcommunication technologies. Journal of Computer-Mediated Communication, 4,1–13.

Gonzalez, M. G., Burke, M. J., Santuzzi, A. M., & Bradley, J. C. (2003). The impact of group process variables on the effectiveness of distance collaboration groups. Computers in Human Behavior, 19, 629–648.

Gress, C. L. Z., Hadwin, A. F., Page, J., & Church, H. (2010). A review of computer-supported collaborative learning: Informing standards for reporting CSCL research. Computers in Human Behavior.

Gress, C. L. Z., & Winne, P. H. (2007). Measuring cognitive & metacognitive monitoring of study skills with trace data. Paper to be presented at the Annual Convention of the American Educational Research Association, Chicago, IL.

Guzdial, M., & Turns, J. (2000). Effective discussion through a computer-mediated anchored forum. Journal of the Learning Sciences, 9, 437–469.

Hadwin, A. F., Gress, C. L. Z., & Page, J. (2006). Toward standards for reporting research: A review of the literature on computer-supported collaborative learning. Paper presented at the 6th IEEE International Conference on Advanced Learning Technologies, Kerkrade, Netherlands.

Hadwin, A. F., Oshige, M., Gress, C. L. Z., & Winne, P. H. (2010). Innovative ways for using gStudy to orchestrate and research social aspects of self-regulated learning. Computers in Human Behavior, 26, 794–805.

Hadwin, A. F., Winne, P. H., & Nesbit, J. C. (2005). Roles for software technologies in advancing research and theory in educational psychology. British Journal of Educational Psychology, 75, 1–24.

Hakkarainen, K. (1999). The interaction of motivational orientation and knowledge-seeking inquiry in computer-supported collaborative learning. Journal of Educational Research, 21, 263–281.

Hakkarainen, K. (2003). Emergence of progressive-inquiry culture in computer-supported collaborative learning. Learning Environments Research, 6, 199–220.

Hathorn, L. G., & Ingram, A. L. (2002). Cooperation and collaboration using computer-mediated communication. Journal of Educational Computing Research, 26, 325.

Hmelo-Silver, C. E. (2003). Analyzing collaborative knowledge construction: Multiple methods for integrated understanding. Computers & Education, 41, 397.

Jang, C.-Y., Steinfield, C., & Pfaff, B. (2002). Virtual team awareness and groupware support: An evaluation of the TeamSCOPE system. International Journal of Human–Computer Studies, 56, 109.

Järvelä, S., & Salovaara, H. (2004). The interplay of motivational goals and cognitive strategies in a new pedagogical culture: A context-oriented and qualitative approach. European Psychologist, 9, 232.

Järvenoja, H., & Järvelä, S. (2005). How students describe the sources of their emotional and motivational experiences during the learning process: A qualitative approach. Learning and Instruction, 15, 465–480.

Joiner, K. F., Malone, J. A., & Haimes, D. H. (2002). Assessment of classroom environments in reformed calculus education. Learning Environments Research, 5, 51–76.

Joiner, R., & Issroff, K. (2003). Tracing success: Graphical methods for analysing successful collaborative problem solving. Computers & Education, 41, 369.

Kester, L., Kirschner, P. A., & van Merriënboer, J. J. G. (2005). The management of cognitive load during complex cognitive skill acquisition by means of computer-simulated problem solving. British Journal of Educational Psychology, 75, 71–85.

Kester, L., & Paas, F. (2005). Instructional interventions to enhance collaboration in powerful learning environments. Computers in Human Behavior, 21, 689–696.

Kim, S., Kolko, B. E., & Greer, T. H. (2002). Web-based problem solving learning: Third-year medical students’ participation in end-of-life care virtual clinic. Computers in Human Behavior, 18, 761–772.

King, K. P. (2003). The webquest as a means of enhancing computer efficacy. Illinois: ERIC.

Kirschner, P., Strijbos, J.-W., Kreijns, K., & Beers, P. J. (2004). Designing electronic collaborative learning environments. Educational Technology Research & Development, 52, 47–66.

Knight, B. A., & Knight, C. (1995). Cognitive theory and the use of computers in the primary classroom. British Journal of Educational Technology, 26, 141–148.

Koschmann, T. (2001). Revisiting the paradigms of instructional technology. In Proceedings of the annual conference of the Australasian Society for Computers in Learning in Tertiary Education (ASCILITE 2001) (pp. 15–22). Melbourne, Australia.

Kraut, R. E., Fussell, S. R., & Siegel, J. (2003). Visual information as a conversational resource in collaborative physical tasks. Human–Computer Interaction, 18, 13–49.

Kreijns, K., Kirschner, P. A., & Jochems, W. (2003). Identifying the pitfalls for social interaction in computer-supported collaborative learning environments: A review of the research. Computers in Human Behavior, 19, 335–353.

Kumpulainen, K., Salovaara, H., & Mutanen, M. (2001). The nature of students’ sociocognitive activity in handling and processing multimedia-based science material in a small group learning task. Instructional Science, 29, 481–515.

Lapadat (2004). Online teaching: Creating text-based environments for collaborative thinking. Alberta Journal of Educational Research, 50, 236–251.

Lee, A., & Girgensohn, A. (2002). Design, experiences and user preferences for a web-based awareness tool. International Journal of Human–Computer Studies, 56, 75.

Lehtinen, E. (2003). Computer-supported collaborative learning: An approach to powerful learning environments. In E. De Corte, L. Verschaffel, N. Entwistle, & J. Van Merriënboer (Eds.), Unravelling basic components and dimensions of powerful learning environments (pp. 35–53). Amsterdam, Netherlands: Elsevier.

Leinonen (2003). Individual student’s interpretations for their contribution to the computer-mediated discussions. Journal of Interactive Learning Research, 14, 99–122.

Linder, U., & Rochon, R. (2003). Using chat to support collaborative learning: Quality assurance strategies to promote success. Educational Media International, 40, 75–89.

Lingnau, A., Hoppe, H. U., & Mannhaupt, G. (2003). Computer supported collaborative writing in an early learning classroom. Journal of Computer Assisted Learning, 19, 186–194.

Lipponen, L. (2000). Towards knowledge building: From facts to explanations in primary students’ computer mediated discourse. Learning Environments Research, 3, 179–199.

Lipponen, L., Rahikainen, M., Lallimo, J., & Hakkarainen, K. (2003). Patterns of participation and discourse in elementary students’ computer-supported collaborative learning. Learning and Instruction, 13, 487–509.

Luckin, R. (2003). Between the lines: Documenting the multiple dimensions of computer-supported collaborations. Computers & Education, 41, 379.

Macdonald, J. (2003). Assessing online collaborative learning: Process and product. Computers & Education, 40, 377–391.

Martínez, A., Dimitriadis, Y., Rubia, B., Gómez, E., & de la Fuente, P. (2003). Combining qualitative evaluation and social network analysis for the study of classroom social interactions. Computers & Education, 41, 353–368.

Morris, R., Church, H., Hadwin, A. F., Gress, C. L. Z., & Winne, P. H. (2010). The use of roles, scripts, and prompts to support CSCL in gStudy. Computers in Human Behavior, 26, 815–824.


Nardi, B. (2005). Beyond bandwidth: Dimensions of connection in interpersonal communication. Computer Supported Cooperative Work: The Journal of Collaborative Computing, 14, 91–130.

Neo (2003). Developing a collaborative learning environment using a web-based design. Journal of Computer Assisted Learning, 19, 462–473.

Nokelainen, P., Miettinen, M., Kurhila, J., Floreen, P., & Tirri, H. (2005). A shared document-based annotation tool to support learner-centred collaborative learning. British Journal of Educational Technology, 36, 757–770.

Pata, K., Lehtinen, E., & Sarapuu, T. (2006). Inter-relations of tutor’s and peers’ scaffolding and decision-making discourse acts. Instructional Science, 34, 313–341.

Puntambekar, S. (2006). Analyzing collaborative interactions: Divergence, shared understanding and construction of knowledge. Computers & Education, 47, 332–351.

Puntambekar, S., & Luckin, R. (2003). Documenting collaborative learning: What should be measured and how? Computers & Education, 41, 309–311.

Riley, W. T., Carson, S. C., Martin, N., Behar, A., Forman-Hoffman, V. L., & Jerome, A. (2005). Initial feasibility of a researcher configurable computerized self-monitoring system. Computers in Human Behavior, 21, 1005–1018.

Rovai, A. P. (2002). Development of an instrument to measure classroom community. Internet and Higher Education, 5, 197–211.

Rummel, N., & Spada, H. (2005). Learning to collaborate: An instructional approach to promoting collaborative problem solving in computer-mediated settings. Journal of the Learning Sciences, 14, 201–241.

Salovaara, H., & Järvelä, S. (2003). Students’ strategic actions in computer-supported collaborative learning. Learning Environments Research, 6, 267–284.

Salovaara, H., Salo, P., Rahikainen, M., Lipponen, L., & Järvelä, S. (2001). Developing technology-supported inquiry practices in two comprehensive school classrooms. Paper presented at the ED-Media 2001 World Conference on Educational Multimedia, Hypermedia & Telecommunications, Tampere, Finland.

Scardamalia, M., & Bereiter, C. (1996). Student communities for the advancement of knowledge. Communications of the ACM, 39, 36–38.

Schacter, J., Herl, H. E., Chung, G. K. W. K., Dennis, R. A., & O’Neil, H. F., Jr. (1999). Computer-based performance assessments: A solution to the narrow measurement and reporting of problem-solving. Computers in Human Behavior, 15, 403–418.

Scott, S. D., Mandryk, R. L., & Inkpen, K. M. (2003). Understanding children’s collaborative interactions in shared environments. Journal of Computer Assisted Learning, 19, 220–228.

Shaw, G., & Marlow, N. (1999). The role of student learning styles, gender, attitudes and perceptions on information and communication technology assisted learning. Computers & Education, 33, 223–234.

Shumar, W., & Renninger, K. A. (2002). On conceptualizing community. In K. A. Renninger & W. Shumar (Eds.), Building virtual communities (pp. 1–17). Cambridge, UK: Cambridge University Press.

Soller, A. (2004). Computational modeling and analysis of knowledge sharing in collaborative distance learning. User Modeling & User-Adapted Interaction, 14, 351–381.

Stoyanov, S., & Kommers, P. (1999). Agent-support for problem solving through concept-mapping. Journal of Interactive Learning Research, 10, 401–425.

Strijbos, J. W., Martens, R. L., & Jochems, W. M. G. (2004). Designing for interaction: Six steps to designing computer-supported group-based learning. Computers & Education, 42, 403–424.

Suthers, D. D., Hundhausen, C. D., & Girardeau, L. E. (2003). Comparing the roles of representations in face-to-face and online computer supported collaborative learning. Computers & Education, 41, 335–351.

van der Meij, H., de Vries, B., Boersma, K., Pieters, J., & Wegerif, R. (2005). An examination of interactional coherence in email use in elementary school. Computers in Human Behavior, 21, 417–439.

Van Der Puil, C., Andriessen, J., & Kanselaar, G. (2004). Exploring relational regulation in computer-mediated (collaborative) learning interaction: A developmental perspective. CyberPsychology & Behavior, 7, 183–195.

Wang, M., Laffey, J., Wangemann, P., Harris, C., & Tupper, T. (2000). How do youth and mentors experience project-based learning in the internet-based shared environment for expeditions (expeditions), Missouri, USA.

Winne, P. H., Gupta, L., & Nesbit, J. C. (1994). Exploring individual differences in studying strategies using graph theoretic statistics. The Alberta Journal of Educational Research, XL, 177–193.

Winne, P. H., & Hadwin, A. F. (1998). Studying as self-regulated learning. In D. J. Hacker, J. Dunlosky, & A. C. Graesser (Eds.), Metacognition in educational theory and practice (pp. 277–304). Hillsdale, NJ: Erlbaum.

Winne, P. H., Hadwin, A. F., & Gress, C. L. Z. (2010). The Learning Kit Project: Software tools for supporting and researching regulation of collaborative learning. Computers in Human Behavior, 26, 787–793.

Winne, P. H., & Jamieson-Noel, D. L. (2002). Exploring students’ calibration of self-reports about study tactics and achievement. Contemporary Educational Psychology, 27, 551–572.

Winne, P. H., Nesbit, J. C., Kumar, V., Hadwin, A. F., Lajoie, S. P., Azevedo, R. A., et al. (2006). Supporting self-regulated learning with gStudy software: The Learning Kit Project. Technology, Instruction, Cognition and Learning, 3, 105–113.

Xie, Y., & Salvendy, G. (2003). Awareness support for asynchronous engineering collaboration. Human Factors & Ergonomics in Manufacturing, 13, 97–113.

Zimmerman, B. J. (2000). Attaining self-regulation: A social cognitive perspective. In M. Boekaerts, P. R. Pintrich, & M. Zeidner (Eds.), Handbook of self-regulation (pp. 13–39). San Diego, CA: Elsevier Academic Press.