Computers in Human Behavior 22 (2006) 389–411
www.elsevier.com/locate/comphumbeh
Tool use in computer-based learning environments: towards a research framework
Geraldine Clarebout *, Jan Elen
Center for Instructional Psychology and Technology, University of Leuven, Vesaliusstraat 2,
B-3000 Leuven, Belgium
Available online 11 November 2004
Abstract
Computer-based learning environments often confront learners with a number of tools,
i.e. non-embedded support devices. Such environments assume learners to be good judges
of their own learning needs. However, research indicates that students do not always make
adequate choices for their learning process. This especially becomes an issue with the use
of open learning environments, which are assumed to foster the acquisition of complex
problem solving skills. Such open learning environments offer students tools to support
their learning. Consequently, it is necessary to understand the factors that influence tool use and to gain insight into the learning effects of tool use. Both issues are addressed in this contribution. A review of the existing literature has been undertaken by performing a search on
the Web of Science and the PsycInfo database. Results indicate that there is some evidence
for learner, tool and task characteristics to influence tool use. No clear indication was
found for a learning effect of tool use. The conclusion proposes a research framework
for the systematic study of tools.
© 2004 Elsevier Ltd. All rights reserved.
Keywords: Computer-based learning environments; Tool use; Literature review
0747-5632/$ - see front matter © 2004 Elsevier Ltd. All rights reserved.
doi:10.1016/j.chb.2004.09.007
* Corresponding author. Tel.: +32 16 32 5745; fax: +32 16 32 6274.
E-mail addresses: [email protected] (G. Clarebout), [email protected] (J. Elen).
Computer-based learning environments regularly provide learners with a variety
of support devices to foster learning. These support devices can be either embedded
or non-embedded. Embedded support devices are totally integrated in the learning
environment and cannot but be considered by learners. Examples of such devices
are feedback, or the information structure in learning materials. Embedded support devices are devices that learners are confronted with, without having to request them. By contrast, non-embedded support devices are support devices whose use depends on the learner's initiative. They are added to the environment, and it is up to the learners to decide on their use. Non-embedded support devices are also called ‘‘tools’’. A tool could, for instance, be a button that enables the learner to access additional information. The learners have to take action; they have to click on the button before receiving additional information. In this contribution, the latter kind of support devices, namely tools, is addressed.
Given that the use of non-embedded support devices (tools) depends on the learner's action, the integration of tools in learning environments presupposes, by definition, that learners are good judges of their learning needs. Based on their
judgments, learners select tools when they need them. Learners control the use of
tools. Contrary to the assumption that providing learner control and allowing learners to co-construct their learning environment establishes a ‘‘better’’ learning environment, a clear benefit of learner control on learning has not yet been found (see reviews by Friend & Cole, 1990; Goforth, 1994; Large, 1996; Williams, 1996). Most learner control studies report a positive effect on learners' attitudes, whereas learning effects seem clearly mediated by various student characteristics.
Basically, these reviews conclude that students commonly experience difficulties in making adequate choices for themselves (see also Clark, 1991; Hill & Hannafin, 2001; Land, 2000; Lee & Lehman, 1993; Milheim & Martin, 1991), i.e. choices beneficial for their learning process. In other words, in an instructional context students seem to lack self-monitoring and self-regulating skills. Applied to tools, it is reasonable to expect that students will have problems determining when they need help, what kind of help they need and, hence, when the use of tools might be beneficial.
A recent evaluation study indirectly validated this expectation (Clarebout, Elen,
Lowyck, Van den Ende, & Van den Enden, 2004). In this study, students did not
use the available tools when working on a diagnostic problem in a computer-based
learning environment. Thinking aloud protocols revealed that students thought they
would be cheating if they used the tools. In other words, students' instructional conceptions hampered their judgment about the use of these tools. Similar results were published by Marek, Griggs, and Christopher (1999) for adjunct aids in textbooks. Students' conceptions about adjunct aids influenced the use of these aids: students indicated that they were less inclined to use those adjunct aids promoting a more elaborative study pattern.
These and similar studies raise doubt about the assumption underlying the ample
use of tools in learning environments: their use cannot be taken for granted. At the
same time however, from a constructivist view on learning, the use of open learning
environments is advocated to foster the acquisition of complex problem solving skills
(Jacobson & Spiro, 1995; Jonassen, 1997). These open learning environments
confront learners with a complex or ill-structured task to be solved by looking at dif-
ferent perspectives (Spiro, Feltovich, Jacobson, & Coulson, 1991). In open learning
environments, students are in control and have different tools at their disposal to engage in problem solving tasks. As a consequence, it becomes crucial to gain insight into (variables influencing) tool use. It might even be wondered whether these tools are actually used and, if so, whether they are used as intended.
Similar to research on adjunct aids, it can be hypothesized that different variables
will mediate the learning effect of tool use. Elen (1995) provides an overview of dif-
ferent variables mediating the effect of adjunct aids, such as the learning task, the nature of the adjunct aid, and whether and when learners are urged to use the adjunct aid. In this contribution, these variables will be addressed with respect to tool use in
computer-based learning environments. Additionally, the learning effect of tool use will be addressed.
By means of a literature study, an overview is presented of research on tools
in computer-based learning environments. 1 This overview is structured according to
questions relating to different variables that might mediate the effect of tool use on
learning.
First, the methodology will be discussed. Next, results are presented and finally
these results are reflected on. The conclusion offers possible solutions and sugges-
tions for further research.
1. Method
This literature study started with a search on the Web of Science and in the Psy-
cInfo database. 2 These databases were searched for the last 20 years (from 1982). It
can be argued that 20 years is a rather large interval for studies on computer-based learning environments, given the evolutions in this domain. However, it allows us to also consider tool use in less complex learning environments and to see whether the complexity of the learning environment plays a role in learners' tool use. Descriptors (see Table 1) specifically relate to the use of tools or synonyms (options, adjunct aids), and to environments in which tools are most likely made available (open learning environments, hypermedia environments). These descriptors were the result of a
brainstorm session by two researchers. Additionally, the initial results of this search
were presented at two conferences (Clarebout & Elen, 2002a, 2002b). The suggestions raised by the audience were taken into account and entered into a new search. In all searches, the term ‘‘research’’ was added, since the aim of this study was to find
1 This contribution does not deal with the computer as a tool in itself or, more specifically, as a cognitive tool (see Lajoie, 2000; Lajoie & Derry, 1993; Salomon, 1988).
2 The search was performed in June 2003 and updated in September 2004.
Table 1
Descriptors
Descriptors Web of Science (SSCI) PsycInfo
Option(s) use 1 3
Use of option(s) 12 15
Tool(s) use 45 141
Use of tool(s) 37 135
Open learning environment(s) 1 16
Electronic learning environment(s) 0 1
Hypermedia 137 246
Learner control 28 59
Instructional intervention(s) 19 86
Adjunct aid(s) 0 3
Discovery learning 15 37
Use of resource(s) 72 68
Resource(s) use 144 63
Inquiry-oriented instruction 1 1
Project-based environment(s) 0 2
Computer-assisted learning 31 26
Simulation* 127 178
Help use 23 11
Use of help 3 97
Scaffolds 13 53
Powerful learning environment(s) 1 6
Instructional option(s) 5 9
Instructional explanation(s) 8 4
research studies involving tool use rather than mere descriptions of tools. If the
search yielded too many results (N > 300), ‘‘learning’’ was entered as an additional
descriptor (marked with ‘‘*’’ in Table 1).
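The refinement rule described above can be sketched in code. This is purely an illustration: the `search` callable, the record counts and the dictionary of results below are hypothetical stand-ins for the actual Web of Science / PsycInfo queries, not part of the original study.

```python
def refine_query(descriptor, search, threshold=300):
    """Run a descriptor against a database via `search` (a hypothetical
    callable mapping a query string to a list of record identifiers).
    If it yields more than `threshold` records, narrow the query by
    adding 'learning' as an extra term, mirroring the rule above."""
    results = search(descriptor)
    if len(results) > threshold:
        results = search(f"{descriptor} AND learning")
    return results


# Toy stand-in for a real database: 'simulation' alone yields too many
# records, so the refined query is used instead.
fake_db = {
    "simulation": [f"rec{i}" for i in range(500)],
    "simulation AND learning": [f"rec{i}" for i in range(127)],
}
hits = refine_query("simulation", lambda q: fake_db.get(q, []))  # 127 records
```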
All abstracts were read to find out whether the publications dealt with research (qualitative or quantitative) on tools in computer-based environments at any level of education. If this was the case, the publication was selected. Eventually, only 22 journal articles could be retained. All 22 studies report research results on tool
use itself, variables influencing tool use and/or the effect of tool use on learning. No
review studies were included, as for instance the review on help seeking behavior by
Aleven, Stahl, Schworm, Fischer, and Wallace (2003). Given the initial numbers of records found, this limited number of selected articles may seem very surprising. However, most studies merely provide a description of the tools present in the learning environment studied. These studies do report research results, but the results are not related to the tools themselves. Other journal articles use the term ‘‘tools’’ but are
actually referring to embedded support devices or to computers themselves serving
as a cognitive tool.
In order to describe and compare the different articles, a classification system of tools was sought. Jonassen (1999) provides an elaborate categorization
system for support devices. This system is part of an instructional design model
for so-called ‘‘constructivist learning environments’’, i.e. learning environments that aim at fostering problem solving and conceptual development by confronting learners with ill-defined problems. This system provides categories of devices that visualize, organize, automate or supplant thinking skills. Given the comprehensiveness of this framework, it was used to classify the different tools retrieved from the articles.
It should be noted that Jonassen's system is not the only one. For instance, a similar system is provided by Hannafin, Land, and Oliver (1999). However, their system can be completely integrated into Jonassen's.
Despite the elaborate system, and probably also due to the origin of the system,
one category was added based on the literature reviewed, namely ‘‘elaboration tools’’
(e.g., Carrier, Davidson, & Williams, 1985). It might be argued that elaboration tools are knowledge modeling tools; however, elaboration tools are directed more towards exercising than towards articulating or representing one's knowledge.
The following classification system is used:
Information resources: provide information students can use to construct their men-
tal models, formulate hypotheses and solve problems. These can be text documents,
graphics, video or animations helping students to understand the problem. Access to
the World Wide Web is an example of such a tool.
Cognitive tools: help students to engage in and facilitate specific kinds of cognitive
processing. These are devices to visualize, organize or supplant thinking skills
(e.g., visualization tools such as concept maps or simulations).
Knowledge modeling tools: help students to reflect on their learning process. These tools provide an environment in which students have to articulate what they know and what the relationships are between different concepts (e.g., a semantic network).
Performance support tools: facilitate the problem solving process by performing algo-
rithmic tasks for the learners. This allows learners to focus more on higher order
cognitive tasks (e.g., calculator, database shells).
Information gathering tools: help students in seeking information so that they are not
distracted from their primary goal of problem solving.
Conversation and collaboration tools: are used in collaborative learning environments to support students in their collaborative knowledge building process (e.g., e-mail, videoconferencing).
Elaboration tools: give access to reviews and additional exercises and practices
related to the content of the task.
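Purely as an illustration, the seven categories above can be captured in a small lookup structure. The example tools are taken from the category descriptions above, except where marked as an assumption; this is a demonstration sketch, not part of the classification system itself.

```python
from enum import Enum


class ToolCategory(Enum):
    """The seven tool categories used in this review."""
    INFORMATION_RESOURCE = "information resources"
    COGNITIVE = "cognitive tools"
    KNOWLEDGE_MODELING = "knowledge modeling tools"
    PERFORMANCE_SUPPORT = "performance support tools"
    INFORMATION_GATHERING = "information gathering tools"
    CONVERSATION_COLLABORATION = "conversation and collaboration tools"
    ELABORATION = "elaboration tools"


# Example tools drawn from the category descriptions above.
EXAMPLES = {
    "World Wide Web access": ToolCategory.INFORMATION_RESOURCE,
    "concept map": ToolCategory.COGNITIVE,
    "semantic network": ToolCategory.KNOWLEDGE_MODELING,
    "calculator": ToolCategory.PERFORMANCE_SUPPORT,
    # Assumed example; the text names no concrete tool for this category.
    "search engine": ToolCategory.INFORMATION_GATHERING,
    "e-mail": ToolCategory.CONVERSATION_COLLABORATION,
    "additional exercises": ToolCategory.ELABORATION,
}


def classify(tool_name):
    """Look up a tool's category; returns None for unlisted tools."""
    return EXAMPLES.get(tool_name)
```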
For all studies, the nature of the learning task and the tool, the number of sub-
jects, the dependent and independent variables and the results will be mentioned
(see Tables 2 and 3).
2. Results
This section is structured in line with the research questions. A first section relates
to the variables influencing tool use, namely student characteristics, kind of tool,
learning task and explicit encouragement. A second section discusses research
findings with respect to learning effects of tool use.
Table 2
Factors influencing tool use
Authorsa Kind of
tool(s)
Independent
variable(s)
Result
Carrier et al. (1985) Elaboration Ability Path analysis: partial β coefficient: .59/No descriptives
N = 28/6th graders
concept learning
Locus of control Path analysis: partial β coefficient: −.08/No descriptives
Carrier et al. (1984) Elaboration Learning style (field
(in)dependence)
Frequency of option selection (proportion of total)/Definitions: FI: .25; FD: .22/Expository
instances: FI: .17; FD: .25/Practice instances: FI: .25; FD: .20/Feedback: FI: .34; FD:
.35 ) v2 analysis: only difference for expository instances: v2 = 8.87, p<.05
N = 44/7th graders
concept learning
Carrier et al. (1986)
N = 37/6th graders
Concept learning
Elaboration Option type Amount of times selected: Paraphrased definition: M = 62.8, SD = 37.1/Expository instances:
M = 26.1, SD = 25.9/Practice instance: M = 31.5, SD = 30/Analytic Feedback: M = 34.5
SD = 25.9) ANOVA: F(3, 105) = 20.13, p<.05
Encouragement Amount of option use: Encouragement group: M = 38.8/No encouragement group: M = 25.4
)ANOVA: F(1,35) = 4.82, p<.05
Chapelle and Elaboration Prior knowledged Amount of tool use Purpose of tool use Extra information
Mizuno (1989)
N = 13/University
students concept
learning
per minute:
High: M = .09/min, SD = .06
On-going problem solving: High = 0%, low = 7%
Low: M = .14/min,
SD = .08) T test:
t = −1.34; df = 11, n.s.
High = 78%, low = 48%b
Advanced organizer
High = 7% Low = 32%
Others
Amount of tool use per
sentence
Reconfirmation
High = 11%, low = 9%
High = 4% Low = 4%
High: M = .13/sent, SD = .09
) Low: M = .28/sent,
SD = .14) T test:
t = −2.27; df = 11, p<.05
394
G.Clareb
out,J.Elen
/Computers
inHumanBehavio
r22(2006)389–411
Crooks et al. (1996) Elaboration Individual/co- Amount of total optional screens consulted: Individual group: M = 74%/co-operative group:
N = 125/
Undergraduates
concept learning
operative
Leanplus/Fullminus
M = 65%) ANOVA: F(1,124) = 4.92, p<.05, ES = .35
Amount of total optional screens consulted: Leanplus: 56%/Fullminus: 83%)ANOVA: F(1,124) = 51.96, p<.05; ES = .99
Crooks et al. (1998) Elaboration Individual/co- Amount of total optional screens consulted: Individual group: M = 75.14, SD = 12.99/Co-
N = 97/
Undergraduates
concept learning
operative
Leanplus/Fullminus
operative group: M = 74.92, SD = 12.31) ANOVA: Not significant (no statistics)
Amount of total optional screens consulted: Leanplus: M = 67.36, SD = 16.11/Fullminus:
M = 82.70, SD = 9.19) ANOVA: F(1,96) = 34.36, p<.05
Grasel et al. (2001) Knowledge Strategy modeling Amount of using additional information corrective: Strategy group: M = 15.88/Without
N1 = 24/N2 = 12/
University
students problem
solving
modeling
Instruction
strategy group: M = 9.13) Mann–Whitney U test: U = 8.0 p<.05
Amount of using additional information corrective: Instruction group: M = 8.17/Without
instruction group: M = 4.83 )Mann–Whitney U test: U = 8.0, p<.10c
Fischer et al. (2003) Cognitive/
information
resources
Elaboration
(Prior knowledge)
Leanplus/Fullminus
Although prior knowledge was low, information resources were hardly used (no statistics provided). The cognitive tool was used by all students (no statistics provided)
N = 11/University
students problem
solving
Hannafin and
Sullivan (1995)
Amount of total optional screens consulted: Leanplus: 32%/Fullminus: 78%) ANOVA:
F(1,132) = 4.13, p<.05
Interaction effect with ability: Fullminus-Low ability:M = 76%, SD = 18.7%/Fullminus-High
ability: M = 79%, SD = 20.6%
Leanplus-Low ability: M = 19%, SD = 5.1%/Leanplus-High ability: M = 43%,
SD = 11.4%) ANOVA: F(1,132) = 12.36, p<.05
N = 133/9th &
10th graders
concept learning
Hasselerharm and
Leemkuil (1990)
Cognitive Learning style (field
(in)dependence)
45% used advisement; no significant effect. No statistics provided
No significant effect, no statistics provided
N = 110/
Secondary ed. st.
concept learning
Prior achievement
Hicken et al. (1992) Elaboration Leanplus/Fullminus Amount of total optional screens consulted: Leanplus: 32% (SD = 27)/Fullminus:
80% (SD = 25)) ANOVA: F(1,92) = 70.80, p<.05; ES = 1.79
N = 111/
Undergraduates
concept learning
Lee and Lehman
(1993)
N = 162/
Undergraduates
concept learning
Information
resource
Learning style
(active/neutral/
passive)
Instructional cues
Selection frequency: Active: M = 0.86, SD = 0.66/Neutral: M = 0.82, SD = 0.72/Passive:
M = 0.58, SD = 0.62) ANOVA: F(2,161) = 2.64, n.s.
Selection frequency: With cues: M = .94, SD = .58/Without cues: M = .59,
SD = .73 ) ANOVA: F(1,161) = 9.81, p<.05
Interaction effect with learning style: With cues-active: M = .82, SD = .52/With
cues-neutral: M = 1.17, SD = 0.58/With cues-passive: M = 0.75, SD = .53/Without
cues-active: M = 0.90, SD = 0.78/Without cues-neutral: M = 0.72, SD = 0.67/Without
cues-passive: M = 0.72, SD = 0.68 ) ANOVA: F(2,161) = 5.55, p<.05
Liu and Reed (1994)
N = 63/College
students concept
learning
Performance
support/
Information
resource/
Learning style (field
(in)dependence)
Amount of total tool use: FD: M = 16.21/Fmixed: M = 29.28/FI: M = 24.84
) ANOVAs for 5 support tools: n.s., results reported for use of index: F(2,62) = 2.54,
p = .09, no results reported for other 4 tools
Elaboration
Martens et al.
(1997)e
N = 51/University
students concept
learning
Cognitive Prior knowledge
Reading comprehension skills
MANOVA: k = 0.88, p< .05: Prior knowledge enhancing effect on use performance
support tools F(1,50) = 6.12, p<.05/testing tools F(1,50) = 3.99, p<.05/orienting tools:
not significant, no F value
MANOVA: No significant effects/descriptives, no statistics
Oliver and Hannafin
(2000)
Knowledge
modeling/
performance
support
(Kind of tool)f Knowledge modeling tools rarely used (no descriptives provided)/Performance support
tool used more frequently
N = 12/8th
graders
problem solving
Pedersen and Liu
(2002)
Cognitive tool Expert support Number of times notebook was used: Group 1 (modeling): M = 86.5, SD = 21.4/Group 2
(didactic): M = 42.9, SD = 35.3/Group 3 (help): M = 51.4, SD = 24.2 ) F = 14.5; p<.05; ES (η2) = .32
N = 66/6th
graders
problem solving
Relan (1995)
N = 109/6th
graders concept
learning
Elaboration Training Total amount of review: Learner control complete (LCC): Comprehensive training: M = 3.2, SD = 2.4/Partial training: M = 2.9, SD = 3.0/No training: M = 3.1, SD = 2.8/Learner control limited (LCL): Comprehensive training: M = 4.2, SD = 3.6/Partial training: M = 1.2, SD = 1.6/No training: M = 2.4, SD = 3.6
) ANOVA: Not significant within LCC/) ANOVA: Not significant over two groups
(no statistics presented)
Renkl (2002) Cognitive Prior knowledge Cluster analysis: Four clusters: (1) above average prior knowledge, high far transfer
performance, little instructional explanation use/(2) low prior knowledge, good transfer
performance, above average use of extended instructional explanations/(3) average prior
knowledge, average performance, little use of extensive explanations, frequent use of
minimalist explanation/(4) above average prior knowledge, under average transfer
performance, little use of instructional explanations
N = 28/Student
teachers problem
solving
Schnackenberg and
Sullivan (2000)
N = 99/University
juniors concept
learning
Elaboration Ability
Leanplus/Fullminus
Amount of optional screens: High: M = 25.04, SD = 15.57/Low: M = 20.15,
SD = 15.68) ANOVA: F(1,98) = 3,71, n.s.
Amount of total optional screens consulted: Leanplus: 35%/Fullminus: 68%) ANOVA:
F(1,98) = 30.42, p<.05; ES = 1.08
Viau and Larivee
(1993)
Elaboration Prior knowledge Amount of tool use: Regression analysis: Weak: M = 8.3, SD = 6.8, r = .43, p<.05/
Average: M = 11.4, SD = 9.1, r = .50, p<.05/Strong: M = 10.8, SD = 6.6, r = .30, p>.05
N = 70/College
students concept
learning
a The studies are alphabetically ordered according to the first author.
b The results presented are taken from Table 6 (p. 39) of this article. However, the authors report different percentages in their text (p. 38), where they state that 71% of the high level students use the tools for on-going problem solving, against 67% of the lower level students.
c The authors indicate this to be significant.
d This was an evaluation study; as such, no real independent variable was specified in advance.
Table 3
Learning effect of tool use
Authors Kind of tools Dependent
variable
Results
Carrier and Williams
(1988)
N = 114/6th graders
concept learning
Elaboration Performance Pearson correlation of option selection and post-test: r = .28, p<.05;
with delayed test: r = .20, p < .05. Low opt. sel.: Mpa = 4.7, SD = 2.8/
Md = 4.8, SD = 3.1/Medium-low opt. sel.: Mp = 5.6, SD = 2.9/
Md = 5.0, SD = 3.4/Medium-high op. sel.: Mp = 8.6; SD = 2.4/
Md = 7.0; SD = 3.7/High opt. sel.: Mp = 6.9, SD = 3.7/Md = 6.5,
SD = 3.8
MANOVA on repeated measures: Interaction between treatment and
quadratic level of choice: F(1,86) = 4.28, p<.05; Interaction time by
level of option selection: Quadratic trend interaction: F(1,86) = 4.83,
p<.05
Carrier et al. (1985)
N = 20/6th graders
concept learning
Elaboration Performance Fisher's exact test: High ability high option: post-test = .04;
delayed = .11/High-ability low option: post-test = .80, delayed = .80;
Low ability high option: post-test = .38; delayed = .51/Low ability low
option: post-test = .11, delayed = .34
Martens et al. (1997)
N = 51/University
students concept
learning
Cognitive Performance Interaction between discernability and use of toolsb: Discernability and use of processing tool: F(1,42) = 5.66, p<.05/Discernability and use of testing tool: F(1,41) = 3.6, p>.06
Morrison et al. (1992) Elaboration Performance Correlation between tool use and performance: R = −.06; p>.05
N = 73/6th graders
concept learning
Attitude Correlation between tool use and attitude: R = .05; p>.05
Oliver and Hannafin
(2000)
Knowledge monitoring Use of higher
order skills
Qualitative analysis: No effects found (little use): )no descriptives
given
N = 12/8th graders
problem solving
Renkl (2002)
N = 48/student
teachers problem
solving
Cognitive Performance One-tailed t test: t(46) = 1.71; p<.05/Post-test: Mc = 42.5, SD = 21.19;
ME = 53.71, SD = 23.30; ES = .50/Near transfer: Mc = 54.40,
SD = 26.25; ME = 63.93, SD = 28.85; ES = .34/Far transfer: Mc = 35.0,
SD = 20.22; ME = 47.32, SD = 22.85
Viau and Larivee
(1993)
N = 70/College
students concept
learning
Elaboration
Regression analysis: Frequency
glossary consultation: Weak:
M = 8.33, SD = 6.77; r = .43;
p<.05/Average: M = 11.36,
SD = 9.08; r = .50, p<.05/Strong:
M = 10.75, SD = 6.63; r = .30,
p<.05. Time on glossary: Weak:
M = 11.46, SD = 11.29; r = .29,
p<.05/Average: M = 16.26,
SD = 16.07; r = .39, p<.05/
Strong: M = 12.11, SD = 7.62;
r = .54, p<.05
Performance Multiple regression analysis: 21.6% of variance explained by frequency
of glossary consultation/Significant contribution of time (r = .32) and
frequency (r = .44) of glossary consultation. No significant contribution
for time (r = .08) or frequency (r = .12) of navigation map consultation
a Mp = the mean score on the post-test; Md = the mean score on the delayed test.
b These groups also consisted of students receiving a printed version of a textbook.
2.1. Variables influencing tool use
2.1.1. What student characteristics influence tool use?
Research on adjunct aids in textbooks suggests that different student characteris-
tics influence the use of tools (Elen, 1995). In 10 of the retrieved studies, ability, prior knowledge, motivation, reading comprehension skills, locus of control and learning style were studied.
These 10 studies show that some student characteristics have been considered as
influencing variables for tool use. However, these studies appear to be inconclusive. The effect of ability seems unstable. Two of the studies (Carrier et al., 1985; Chapelle & Mizuno, 1989) found that high ability students profit more from control over tool use than low ability students. 3 High ability students used the tools more frequently than low ability students. Moreover, Chapelle and Mizuno (1989) showed that high ability students use tools differently than low ability students: high ability students use tools as problem solving aids, while low ability students use them as advance organizers.
In contrast to Chapelle and Mizuno (1989), who indicated that the effect of ability
is only related to one specific tool (consultation of facts-tool) and not to the consul-
tation of a grammar-tool or dictionary, Schnackenberg and Sullivan (2000) could
not replicate the influence of ability on tool use.
Prior knowledge is also a non-stable factor. Martens, Valcke, and Portier (1997) observed a positive effect of prior knowledge on tool use in a computer-based textbook, with more prior knowledge resulting in more tool use. Viau and Larivee (1993), however, report a curvilinear relation. In their study, average students used the available tool more often than both weak and strong students. At the same time, Renkl (2002) found that low prior knowledge students demanded a tool providing instructional explanations more frequently than students with high prior knowledge did. Again, these results may be related to the nature of the tool: Martens et al. (1997) and Renkl (2002) used cognitive tools, whereas Viau and Larivee (1993) used an elaboration tool.
Martens et al. (1997) also studied the effect of motivation and reading comprehen-
sion skills on tool use. No significant effects were found for these two characteristics.
No effects were revealed for locus of control either. Carrier, Davidson, Williams, and Kalweit (1986) showed no difference in tool use behavior between students who perceive personal success or failure as a result of their own actions (internal locus of control) and students who ascribe success or failure to external factors (external locus of control).
Learning style, finally, has two meanings in the selected articles. One study denotes
activity level as learning style and differentiates between active, neutral and passive
learners (Lee & Lehman, 1993). No effects were found. All other studies measure
learning style as field (in)dependence (Carrier, Davidson, Higson, & Williams,
1984; Hasselerharm & Leemkuil, 1990; Liu & Reed, 1994). These studies are not
3 The correlations and differences reported are significant at the .05 level.
conclusive: some find no effect at all (Hasselerharm & Leemkuil, 1990); in others field
independent learners tend to more frequently use an index tool than field dependent
learners, while the mixed group more frequently used a note-taking tool (Liu &
Reed, 1994). And Carrier et al. (1984) report that field independent learners use an elaboration tool more frequently than field dependent learners.
Clearly, with respect to the influence of student characteristics on tool use, the presented studies do not lead to a firm conclusion. The number of studies is limited and only a restricted number of student characteristics has been investigated so far. However, the results do suggest that the nature of the tool might be important, given the interaction effects and the differences in use for different kinds of tools. This aspect has attracted specific research attention, as presented in the next section.
2.1.2. Does the nature of the tools influence tool use?
Nine studies report on the influence of the nature of the tool on tool use. In the
previous part, three studies already pertain to this issue (Carrier et al., 1986; Chapelle
& Mizuno, 1989; Schnackenberg & Sullivan, 2000). Chapelle and Mizuno (1989) re-
vealed that the use of the glossary had a positive effect but the use of a navigation
map had no effect. Similarly, Carrier et al. (1986) showed that paraphrase tools were
more frequently used than elaboration tools. Schnackenberg and Sullivan (2000)
found that a tool to bypass instruction was used less often than a tool to request
additional information.
The study of Chapelle and Mizuno (1989) confirms that the nature of the tool
matters: a glossary was used, a navigation map was not used. Oliver and Hannafin
(2000) made a similar conclusion based on a study in which they provided students
with different kinds of tools: performance support-, information gathering-, cogni-
tive- and knowledge monitoring tools. Students almost exclusively used the perform-
ance support and information gathering tools. Fischer, Troendle, and Mandl (2003)
likewise found the use of tools to be related to the kind of tool. In their study, the
cognitive tool (a visualization tool) was used, whereas information resources were only seldom used.
A large group of studies examines differences between fullminus and
leanplus conditions (Crooks, Klein, Jones, & Dwyer, 1996; Crooks, Klein, & Sav-
enye, 1998; Hannafin & Sullivan, 1995; Hicken, Sullivan, & Klein, 1992; Schnacken-
berg & Sullivan, 2000). Such studies compare students who have access to a tool that
allows them to bypass instruction (fullminus) or students who have access to a tool
that gives them more instruction (leanplus). The additional instruction consists of reviews, summaries and practice items (elaboration tools). In all these studies, the fullminus group views significantly more instruction than the leanplus group. Fullminus
groups only seldom use the tool to bypass instruction and leanplus groups seldom
request additional instruction.
Crooks et al. (1998) attribute the difference between the leanplus and fullminus groups to one elaboration tool, namely the consultation of practice items; no differences were found for any of the other elaboration tools. Carrier et al. (1986) also found differences in use between the tools available in the leanplus groups (paraphrased definitions, expository instances, practice instances and analytic feedback). Paraphrased definitions were used more often than expository instances. Similarly, practice items were more frequently used than expository instances.
Hannafin and Sullivan (1995) did not report specific differences between the effects of various elaboration tools, but they did find an interaction effect between ability and the program version. In the leanplus version, but not in the fullminus version, high ability students selected more options (43%) than low ability students did (19%).
2.1.3. Do learning task and working method influence tool use?
None of the studies directly addresses the issue of learner tasks. To answer this question, the tasks used in the different studies were examined and compared to the results with respect to tool use. In 16 studies, subjects had to learn specific concepts, and learning results were measured by a knowledge post-test. For instance, in the studies of Carrier et al. (1984, 1985, 1986) subjects were confronted with a computer-based lesson about four propaganda techniques used in advertising. After the lesson, subjects were tested by means of a classification test. Only four studies deal with problem solving tasks (Grasel, Fischer, & Mandl, 2001; Fischer et al., 2003; Oliver & Hannafin, 2000; Pedersen & Liu, 2002).
In both groups of studies, results indicate that students tend to use some tools more than others (see previous section); hence, not all tools are used. It could be expected that in the problem-solving studies subjects would need more tools, given the more open character of the learning environment. However, the reviewed studies do not confirm this expectation, and they do not allow firm conclusions to be drawn about the influence of task on tool use.
Crooks et al. (1996, 1998) addressed the issue of working method: they investigated the influence of individual versus co-operative work on the use of tools. The 1996 study revealed that individuals used optional elements more frequently than co-operative groups. In the 1998 study, however, no differences were found between the two working methods.
2.1.4. Does explicit encouragement of tool use affect tool use?
Advice given while students are working with an application has a significant positive effect (Carrier et al., 1986; Lee & Lehman, 1993): students who receive instructional cues or encouragement to use certain options use the available tools more than students who do not receive these cues or encouragement. However, Lee and Lehman (1993) point to an interaction effect. The positive effect of encouragement seemed to apply only to regularly active learners, not to active or passive learners: regularly active learners with instructional cues selected more information than learners with the same learning style without instructional cues.
Grasel et al. (2001) showed that students who received strategy training made more adequate use of additional information (a glossary), a diagnostic help tool and a database than students who did not receive strategy training. Additionally, students in the strategy-training group adapted their problem solving process on the basis of the additional information requested. To complicate matters, Relan (1995) found that trained students used the elaboration tool more frequently before practice than during practice, in contrast to the group that did not receive any training. It can be questioned whether these mixed results are due to differences in the tasks or in the nature of the tools.
Finally, Pedersen and Liu (2002) studied the use of a notebook in a problem-based learning environment. All students received additional support from an expert on video. In the condition where the expert modeled his reasoning process by introducing and applying strategies and actually using the tools, the use of the notebook was highest, as was the number of relevant notes in that notebook. In the other two groups the expert only provided information on how the tool functions, without using the tools; in addition, one of these groups received suggestions for specific strategies. These two groups did not differ from one another.
Table 2 summarizes the results with respect to variables affecting tool use.
2.2. Learning effects of tool use
Whereas in the previous section variables influencing tool use were discussed, this
section addresses the effect of tool use on learning. Six out of 21 studies on tool use report learning effects. Three of these studies deal with the effect of elaboration tools on learning (Carrier et al., 1985; Carrier & Williams, 1988; Morrison, Ross, & Baldwin, 1992; Viau & Larivee, 1993); two others pertain to cognitive tools (Martens et al., 1997; Renkl, 2002) and one to the influence of knowledge monitoring tools (Oliver & Hannafin, 2000). These studies provide mixed results on the effect of tool use on performance.
Viau and Larivee (1993) showed that the use of an elaboration tool (a glossary) explained 21.6% of the variance in performance results. They did not find this effect for the use of a navigation tool, which suggests that an elaboration tool may have more influence on performance than a processing tool. Carrier and Williams (1988) indicated a moderate effect of tool use on performance. Moreover, this effect was mediated by ability: high ability students benefited more from using the tools than low ability students. However, the analysis suffered from statistical problems, since only few lower ability students actually used many tools.
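As a side note, the variance a single tool-use measure explains in a simple regression is just the squared correlation between tool use and performance; the 21.6% reported above thus corresponds to a correlation of roughly 0.46. A minimal check (the computation only restates the reported figure):

```python
# For a single predictor, explained variance R^2 equals the squared
# Pearson correlation r. The reported 21.6% therefore implies:
r_squared = 0.216
r = r_squared ** 0.5
print(round(r, 2))  # about 0.46
```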
Morrison et al. (1992) investigated the difference between a learner control and a program control group. The learner control group performed worse than the program control group and did not often use the elaboration tool present. This led the researchers to calculate the correlation between tool use and post-test score for the learner control group; no significant correlation was found.
Using a higher order skill test, Oliver and Hannafin (2000) could not reveal an effect of the use of performance tools on higher order learning. It should be noted that this is the only study in which no knowledge test was used.
Martens et al. (1997) compared two groups: one in which students had access to cognitive tools (the discernability group) and one in which these tools were totally integrated in the environment after students activated them (the non-discernability group). They report an interaction effect between the groups and the students' use of the support devices. In the discernability group, students who seldom used the elaboration and cognitive tools scored higher on a post-test than students who frequently used these tools; the opposite was found for the non-discernability group.

Renkl (2002) compared a group who did not get access to instructional explanations with a group that did get access to these explanations by clicking on a button. He found that the learners in the experimental group were significantly more successful on the post-test than the participants in the control group. A further analysis showed this effect to be due to the scores on the far transfer test rather than on the near transfer test. In this study, four clusters of users could be identified. A first cluster consisted of students with high prior knowledge and a high gain on the transfer test, although they rarely relied on the instructional explanations. A second cluster had low prior knowledge but an overall good transfer performance; these students made frequent use of the extensive instructional explanations. A third cluster consisted of students with average prior knowledge and an average score on the transfer test; they used the minimalistic explanations very frequently, but not the extensive ones. A last cluster comprised students who also had high prior knowledge but only average transfer performance; they rarely used the instructional explanations, although the post-test results show that they might have benefited from doing so.

These results do not give a clear picture of the learning effects of tool use, but they
do give some indication that positive effects of tools cannot be taken for granted.
This seems to be related to the tools themselves, the way students use them, and specific student characteristics.
Of course, in order to find an effect of tool use on learning, students have to
use the tools (adequately). Clearly, this is one of the methodological problems
of studying the effect of tool use. For instance, Oliver and Hannafin (2000) report that while cognitive tools and knowledge monitoring tools were provided, students only seldom used them. Moreover, when students did use these tools, they did not use them as intended. For example, knowledge monitoring tools were developed and integrated to promote higher order thinking processes, to organize information and to justify ideas; students, however, used these tools to make lists of web pages. This could also explain the lack of positive effects of tool use on higher order reasoning.
Table 3 gives an overview of the different studies reporting on learning effects of tool use.
3. Discussion
Up to now the use of tools has attracted only minimal research attention. The
search revealed 17 studies addressing factors influencing tool use, and five studies
dealing with learning effects of tool use.
Although the analysis does not reveal a clear and convincing picture, some preliminary conclusions can be drawn: tool use seems to be influenced by (1) student characteristics, (2) the kind of tool, and (3) additional advice. Looking at the learning effect of tool use, it is striking that only a small number of studies deal with the influence of tool use on performance. Oliver and Hannafin (2000) aimed at studying this effect, but could not draw any conclusion since students hardly used the tools. Where the effects of tool use on performance are studied, only learning results are looked at; the learning process itself, and the effects of using these tools on that process, are addressed in one study only. This might be related to the kind of task students are confronted with in these studies, namely a concept acquisition task (see Tables 2 and 3). After working in the environment, a knowledge test is administered to measure whether students know these concepts. These findings also raise the question whether students actually needed to use the different tools in order to complete the task. Moreover, the reported studies often lack a theoretical basis on which the inclusion of tools in the learning environment and the measurement of particular (influencing) variables can be justified. Performing a thorough task analysis before studying tool use might solve this problem.
Given that open learning environments are assumed to foster the acquisition of complex problem solving or higher order reasoning skills (Jacobson & Spiro, 1995; Jonassen, 1997), insight in tool use becomes of particular interest, and the analysis therefore calls for a systematic research program. In open learning environments, learners construct their own knowledge in interaction with the environment. In other words, there is a high level of learner control and learners have to regulate their own learning (Hannafin, 1995). This includes making adequate choices with respect to the tools to be used. In designing such environments, it is important to understand the process of tool use. This also means that not only the amount of tool use should be considered, but also what students actually do when using tools (e.g. Grasel et al., 2001). However, in order to do so, some requirements have to be met:
(1) Students should first use tools before the adequacy of the actual use can be studied. Studies indicate that this is not always the case. For instance, some of the studies comparing fullminus and leanplus groups found either no difference between these groups on performance (e.g. Crooks et al., 1998; Hannafin & Sullivan, 1995) or found the fullminus group to outperform the leanplus group (e.g. Crooks et al., 1996; Schnackenberg & Sullivan, 2000). These findings are explained by referring to the number of tools used: the leanplus group hardly used the available elaboration tool. In a similar vein, Morrison et al. (1992) revealed that the learner control group had reviewed only a limited number of items. Of the possible total of 12 review items, 42.3% of the students reviewed three items, 40% four or five items, and only one person reviewed them all (N = 73). There seems to be evidence that merely providing tools to students does not result in the use of these tools (e.g. Fischer et al., 2003; Oliver & Hannafin, 2000). However, studying the adequacy of tool use also imposes some requirements on the methodology used. While most studies use log files to track tool use, only the amount of use or the kind of tool used is looked at; analyses of log files at the level of when tools are used were not found. Three studies used other valuable methods that allow gaining insight in the adequacy of tool use. Renkl (2002) used thinking aloud procedures that allow linking tool use to participants' cognitive processes. Similarly, Oliver and Hannafin (2000) and Fischer et al. (2003) made use of observations to gain insight in tool use.
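Analysing log files at the level of when tools are used requires little more than timestamped events. A minimal sketch (the log format, event names and figures below are invented for illustration):

```python
from collections import Counter

# Hypothetical session log: (seconds into session, tool event) pairs.
log = [
    (12, "open_glossary"), (45, "open_glossary"), (80, "open_map"),
    (310, "open_glossary"), (560, "open_map"), (590, "open_glossary"),
]
session_length = 600  # seconds

# What most reviewed studies report: frequency per kind of tool.
counts = Counter(event for _, event in log)

# The missing temporal view: tool events per session half, which already
# separates early exploratory use from last-minute consultation.
halves = Counter("first" if t < session_length / 2 else "second"
                 for t, _ in log)

print(counts["open_glossary"], dict(halves))
```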
(2) A second requirement for studying the adequacy of tool use relates to the relation between the tool and the task. The studies investigate tool use, but the adequacy of the tools for learning, or the need for students to consult these tools when solving the task, is never questioned. An evaluation of the tools themselves, in view of the task, is required to establish the need for using them. Apparently, this cannot be taken for granted. Morrison et al. (1992), for instance, did not find any correlation between tool use and scores on a post-test. Similar results were found by Martens et al. (1997). With respect to the latter study, however, it should be noted that tool use was measured through a questionnaire administered after students had worked with the environment; students may not have reported accurately on their actual tool use.
4. Conclusion
This contribution has reported on different studies addressing tool use. Some indication is provided of variables influencing tool use and of the effect of tool use on learning. Although only a limited number of studies were retrieved, a resemblance with factors influencing the use of adjunct aids in texts was found.
When advocating the use of open learning environments, the issue of tool use becomes more apparent. Learners are in control and decide autonomously on the use of these tools. Reviews on learner control, as well as some of the studies reviewed here, reveal that students hardly apply monitoring and regulation skills to their learning process or to making adequate choices with respect to their learning. The limited number of studies addressing tool use in computer-based learning environments indicates that more research is needed to identify the different learner, tool and task characteristics that affect tool use. For instance, a student characteristic that was not considered, but that might be relevant to tool use, is students' conceptions about these tools. Winne (1985) already indicated that the functionality students ascribe to certain elements in a learning environment influences whether and how they will use these elements; this was already illustrated in the introduction. Similarly, interviews by Brush and Saye (2001) clearly show that students do not use support devices as intended because they are unsure how to use them. This also illustrates that the problem relates not only to tools but to embedded support devices as well. Greene and Land (2000) reveal that even when support devices are integrated in the environment, students tend to use them inadequately. They provided questions to students to encourage deep-level processing; instead of using these questions as a tool to aid cognition, students responded with superficial activity rather than with the intended underlying cognitive processes. This may explain why the provision of additional advice results in positive effects.
Further research might identify how optimal use of tools can be fostered, for instance by elaborating on the provision of additional advice or training. Task, tool and student characteristics should also be studied, such as students' conceptions or metacognitive skills. Different authors acknowledge the importance of metacognitive skills for making adequate decisions (de Jong & van Joolingen, 1998; Hill & Hannafin, 2001; Kinzie & Berdel, 1990; Land, 2000; Lee & Lee, 1991). Hence, it can be expected that metacognitive skills are related to the sensible use of tools.

Moreover, most studies in this review deal with elaboration tools, and only a limited number deal with other kinds of tools, such as cognitive or knowledge modeling tools. In order to draw conclusions for the design of open learning environments, more research is needed in which students are confronted with problem solving tasks rather than concept acquisition tasks.
In fact, the same suggestions can be made for studying tool use as Mayer (1979) made for the study of advance organizers: "Future theories should attempt to specify exactly what are the 'subsuming concepts' in the advance organizer, how they are related to the instructional information, and how the learning outcome of an advance organizer subject differs from the cognitive structure acquired by someone who learns without an advance organizer" (Mayer, 1979, p. 163). This citation also points to the necessity of theories underlying the implementation of tools in a learning environment.
Fig. 1 outlines further research with respect to the use of tools; it presents the different aspects that can be involved in such research.
[Figure: task characteristics, tool characteristics, student characteristics and additional cues as influences on the quantity and quality of tool use, embedded in the learning process and leading to learning results.]

Fig. 1. Research outline.
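To make such an outline operational, each participant could be described by one record combining the influencing factors with the mediating and outcome variables; a sketch under the assumption that every factor is reduced to a simple score (all field names are hypothetical):

```python
from dataclasses import dataclass

@dataclass
class ToolUseObservation:
    # Influencing factors from the outline.
    task_type: str          # e.g. "concept acquisition" or "problem solving"
    tool_kind: str          # e.g. "elaboration", "cognitive"
    prior_knowledge: float  # student characteristic (0-1)
    advice_given: bool      # additional cues or encouragement
    # Mediating variables.
    use_frequency: int      # quantity of tool use
    use_adequacy: float     # quality of tool use (e.g. rated 0-1)
    # Outcome.
    posttest_score: float   # learning result

obs = ToolUseObservation("problem solving", "cognitive", 0.4, True, 7, 0.8, 15.0)
print(obs.tool_kind, obs.use_frequency)
```

Such records would allow the quantity and quality of tool use to be related to both the influencing factors and the learning results within one analysis.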
References
Aleven, V., Stahl, E., Schworm, S., Fischer, F., & Wallace, R. (2003). Help seeking and help design in
interactive learning environments. Review of Educational Research, 73, 277–320.
Brush, T., & Saye, J. (2001). The use of embedded scaffolds with hypermedia-supported student-centered
learning. Journal of Educational Multimedia and Hypermedia, 10, 333–356.
Carrier, C., Davidson, G., Higson, V., & Williams, M. (1984). Selection of options by field independent
and dependent children in a computer-based concept lesson. Journal of Computer-Based Instruction, 11,
49–54.
Carrier, C., Davidson, G., & Williams, M. (1985). The selection of instructional options in a computer-
based co-ordinate concept lesson. Educational Communication and Technology Journal, 33, 199–212.
Carrier, C., Davidson, G., Williams, M., & Kalweit (1986). Instructional options and encouragement
effects in a micro-computer concept lesson. Journal of Educational Research, 79, 222–229.
Carrier, C., & Williams, M. (1988). A test of one-learner control strategy with students of differing levels of
task persistence. American Educational Research Journal, 25, 286–306.
Chapelle, C., & Mizuno, S. (1989). Students' strategies with learner-controlled CALL. CALICO Journal, 7(2),
25–47.
Clarebout, G., & Elen, J. (2002a, September). The use of tools in learning environments: a literature review. Paper presented at the New Educational Benefits of ICT in Higher Education conference, Rotterdam, The Netherlands.
Clarebout, G., & Elen, J. (2002b, November). The use of tools in computer-based learning environments: a literature review. Paper presented at the AECT conference, Dallas, USA.
Clarebout, G., Elen, J., Lowyck, J., Van den Ende, J., & Van den Enden, E. (2004). Evaluation of an open
learning environment: The contribution of evaluation to the ADDIE process. In A. Armstrong (Ed.),
Instructional design in the real world (pp. 119–135). Hershey, PA: Idea Group Publishing.
Clark, R. E. (1991). When teaching kills learning: Research on mathemathantics. In H. Mandl, E. De Corte,
N. Bennett, & H. F. Friedrich (Eds.), European research in an international context. Learning and
instruction (Vol. 2, pp. 1–22). Oxford: Pergamon Press.
Crooks, S. M., Klein, J. D., Jones, E. E., & Dwyer, H. (1996). Effects of cooperative learning and learner-
control modes in computer-based instruction. Journal of Research on Computing in Education, 29,
109–123.
Crooks, S. M., Klein, J. D., & Savenye, W. C. (1998). Effects of cooperative and individual learning during
learner-controlled computer-based instruction. Journal of Experimental Education, 66, 223–244.
de Jong, T., & van Joolingen, W. (1998). Scientific discovery learning with computer simulations of
conceptual domains. Review of Educational Research, 68, 179–201.
Elen, J. (1995). Blocks on the road to instructional design prescriptions. Leuven: University Press.
Fischer, F., Troendle, P., & Mandl, H. (2003). Using the internet to improve university education:
problem-oriented web-based learning with MUNICS. Interactive Learning Environments, 11(3),
193–214.
Friend, L. C., & Cole, C. L. (1990). Learner control in computer-based instruction: a current literature
review. Educational Technology, 30(11), 47–49.
Goforth, D. (1994). Learner control = decision making + information: a model and meta-analysis. Journal
of Educational Computing Research, 11, 1–26.
Grasel, C., Fischer, F., & Mandl, H. (2001). The use of additional information in problem-oriented
learning environments. Learning Environment Research, 3, 287–325.
Greene, B. A., & Land, S. M. (2000). A qualitative analysis of scaffolding use in a resource-based learning
environment involving the world wide web. Journal of Educational Computing Research, 23, 151–179.
Hannafin, M. (1995). Open-ended learning environments: Foundations, assumptions, and implications for
automated design. In R. D. Tennyson & A. E. Barron (Eds.), Automating instructional design:
Computer-based development and delivery tools (pp. 101–129). Berlin: Springer.
Hannafin, M., Land, S., & Oliver, K. (1999). Open learning environments: Foundations, methods and models. In C. M. Reigeluth (Ed.), Instructional-design theories and models. A new paradigm of instructional theory (Vol. II, pp. 115–140). Mahwah, NJ: Lawrence Erlbaum.
Hannafin, R. D., & Sullivan, H. J. (1995). Learner control in full and lean CAI-programs. Educational
Technology, Research and Development, 43, 19–30.
Hasselerharm, E., & Leemkuil, H. (1990). The relation between instructional control strategies and
performance and attitudes in computer-based instruction. In J. M. Pieters, P. R. Simons, & L. De
Leeuw (Eds.), Research on computer-based instruction (pp. 67–80). Amsterdam: Swets & Zeitlinger.
Hicken, S., Sullivan, H., & Klein, J. (1992). Learner control modes and incentive variations in computer-
delivered instruction. Educational Technology, Research and Development, 40, 15–26.
Hill, J. R., & Hannafin, M. J. (2001). Teaching and learning in digital environments: the resurgence of
resource-based learning. Educational Technology, Research and Development, 49, 37–52.
Jacobson, M. J., & Spiro, R. J. (1995). Hypertext learning environments, cognitive flexibility and the
transfer of complex knowledge. Journal of Educational Computing Research, 12, 301–333.
Jonassen, D. H. (1997). Instructional design models for well-structured and ill-structured problem-solving
learning outcomes. Educational Technology, Research and Development, 45, 65–91.
Jonassen, D. H. (1999). Designing constructivist learning environments. In C. M. Reigeluth (Ed.),
Instructional-design theories and models. A new paradigm of instructional theory (Vol. II, pp. 215–239).
Mahwah, NJ: Lawrence Erlbaum.
Kinzie, M. B., & Berdel, R. L. (1990). Design and use of hypermedia systems. Educational Technology,
Research and Development, 38(3), 61–68.
Lajoie, S. P. (Ed.). (2000). Computers as cognitive tools. No more walls (Vol. II). Mahwah, NJ: Lawrence
Erlbaum.
Lajoie, S. P., & Derry, S. J. (Eds.). (1993). Computers as cognitive tools. Hillsdale, NJ: Lawrence Erlbaum.
Land, S. M. (2000). Cognitive requirements for learning with open-learning environments. Educational
Technology, Research and Development, 48(3), 61–78.
Large, A. (1996). Hypertext instructional programs and learner control: a research review. Education for
Information, 14, 95–108.
Lee, S., & Lee, Y. H. (1991). Effects of learner-control versus program-control strategies on computer-
aided learning of chemistry problems: for acquisition or review? Journal of Educational Psychology, 83,
491–498.
Lee, Y. B., & Lehman, J. D. (1993). Instructional cueing in hypermedia: a study with active and passive
learners. Journal of Educational Multimedia and Hypermedia, 2, 25–37.
Liu, M., & Reed, W. M. (1994). The relationship between the learning strategies and learning styles in a
hypermedia environment. Computers in Human Behavior, 10, 419–434.
Marek, P., Griggs, R. A., & Christopher, A. N. (1999). Pedagogical aids in textbooks: Do college students' perceptions justify their prevalence? Teaching of Psychology, 26(1), 11–19.
Martens, R. L., Valcke, M. M., & Portier, S. J. (1997). Interactive learning environments to support
independent learning: the impact of discernability of embedded support devices. Computers in
Education, 28, 185–197.
Milheim, W. D., & Martin, B. L. (1991). Theoretical bases for the use of learner control: three different
perspectives. Journal of Computer-Based Instruction, 18, 99–105.
Morrison, G. R., Ross, S. M., & Baldwin, W. (1992). Learner control of context and instructional support
in learning elementary school mathematics. Educational Technology, Research and Development, 40(1),
5–13.
Oliver, K. M., & Hannafin, M. J. (2000). Student management of web-based hypermedia resources during
open-ended problem solving. The Journal of Educational Research, 94, 75–92.
Pedersen, S., & Liu, M. (2002). The effects of modeling expert cognitive strategies during problem-based
learning. Journal of Educational Computing Research, 26, 353–380.
Relan, A. (1995). Promoting better choices: effects of strategy training on achievement and choice behavior
in learning controlled computer-based instruction. Journal of Educational Computing Research, 13,
129–149.
Renkl, A. (2002). Worked-out examples: instructional explanations support learning by self-explanations.
Learning and Instruction, 12, 529–556.
Salomon, G. (1988). AI in reverse: computer tools that turn cognitive. Journal of Educational Computing
Research, 4, 123–139.
Schnackenberg, H. L., & Sullivan, H. J. (2000). Learner control over full and lean computer-based
instruction under differing ability levels. Educational Technology, Research and Development, 48(2),
19–35.
Spiro, R. J., Feltovich, P. J., Jacobson, M. J., & Coulson, R. L. (1991). Knowledge representation, content
specification and the development of skill in situation-specific knowledge assembly: some constructivist
issues as they relate to cognitive flexibility. Educational Technology, 31(9), 22–25.
Viau, R., & Larivee, J. (1993). Learning tools with hypertext: an experiment. Computers & Education, 20,
11–16.
Williams, M. D. (1996). Learner-control and instructional technology. In D. H. Jonassen (Ed.), Handbook
of research for educational communications and technology (pp. 957–983). New York: Macmillan
Library.
Winne, P. H. (1985). Steps towards cognitive achievements. Elementary School Journal, 85,
673–693.
Geraldine Clarebout is research assistant at the Center for Instructional Psychology and Technology at the
University of Leuven.
Jan Elen is professor at the Center for Instructional Psychology and Technology at the University of
Leuven.