
Monitoring Online Training Behaviors: Awareness of Electronic Surveillance Hinders E-Learners


Monitoring Online Training Behaviors: Awareness of Electronic Surveillance Hinders E-Learners¹

Lori Foster Thompson²

North Carolina State University

Jeffrey D. Sebastianelli

CASTLE Worldwide, Inc.

Nicholas P. Murray

East Carolina University

Web-based training programs commonly capture data reflecting e-learners’ activities, yet little is known about the effects of this practice. Social facilitation theory suggests that it may adversely affect people by heightening distraction and arousal. This experiment examined the issue by asking volunteers to complete a Web-based training program designed to teach online search skills. Half of participants were told their training activities would be tracked; the others received no information about monitoring. Results supported the hypothesized effects on satisfaction, performance, and mental workload (measured via heart rate variability). Explicit awareness of monitoring appeared to tax e-learners mentally during training, thereby hindering performance on a later skills test. Additionally, e-learners reported less satisfaction with the training when monitoring was made salient.

Technology has altered many aspects of the modern-day workplace, and training is one area that has changed dramatically in recent years. Today, a great number of e-learning opportunities reside on the Internet and organizational intranets. Meanwhile, the contemporary work world has witnessed an increasing reliance on electronic performance monitoring, which is defined as the use of computer and communication technologies to gather information about work performance (Aiello & Douthitt, 2001).

The fusion of these two trends has led to the design of Web-based training systems that capture data reflecting employees’ e-learning activities. To the best of our knowledge, no past research has examined the effects of this practice on people who are trying to acquire new knowledge and skills online.

¹The authors thank John G. Cope and Karl L. Wuensch for their insightful comments and suggestions concerning this research. An earlier version of this paper was presented at the 20th annual conference of the Society for Industrial and Organizational Psychology, Los Angeles, CA, April 2005. Portions of this study were carried out while the first two authors were affiliated with East Carolina University.

²Correspondence concerning this article should be addressed to Lori Foster Thompson, Department of Psychology, North Carolina State University, Campus Box 7650, Raleigh, NC 27695-7650. E-mail: [email protected]


Journal of Applied Social Psychology, 2009, 39, 9, pp. 2191–2212. © 2009 Copyright the Authors. Journal compilation © 2009 Wiley Periodicals, Inc.


The present study, therefore, is designed to investigate whether the awareness that training activities are electronically monitored will affect e-learners, both physiologically and psychologically.

Electronic Monitoring of E-Learners

To some degree, organizations have always monitored trainees. Sign-in rosters, knowledge tests, and other measures help employers track workers’ developmental efforts. To this end, one might argue that the electronic monitoring of e-learners is simply a straightforward adaptation of practices that have been around for quite some time. We disagree with this assertion for three reasons.

First, classroom trainees typically know what is being measured and when. Online surveillance is less obtrusive, and e-learners are not always privy to the status of monitoring activities (Alge, Ballinger, & Green, 2004; Stanton & Barnes-Farrell, 1996; Wells, Moorman, & Werner, 2007). Second, the amount of information collected in the classroom pales in comparison to the range and volume of data that can be gathered online. According to Alge, Ballinger, Tangirala, and Oakley (2006), today’s workers “face increasingly invasive information collection and dissemination demands from their organizations” (p. 221). The accelerated development of a variety of inexpensive electronic devices (e.g., computer networks, wireless technologies) enables the collection of data (e.g., keystroke recording, Internet monitoring) that would be difficult, if not impossible, to obtain through physical monitoring (McNall & Roch, 2007; Stanton & Barnes-Farrell, 1996). Thus, employers who choose to monitor training activities potentially have more information at their disposal when the monitoring occurs online, rather than in person. Third, whereas classroom data-collection activities are usually discrete events, online monitoring can be constant and ongoing (Wells et al., 2007). For instance, some electronic monitoring systems are designed to provide continuous and real-time views of employees’ onscreen activities (Alge et al., 2004).

Some specific examples of the types of data that can be captured by an e-learning package include login times and dates; number and types of pre- and post-tests attempted, along with the number of items answered correctly during each attempt; which training modules the learners have tried; dates and times in which the modules were attempted; how long the trainees worked on each module; number of practice exercises attempted/completed; duration of each practice exercise; and whether or not the employees followed through and finished various training units. Clearly, there are many practical uses for these types of data. Henderson, Mahar, Saliba, Deane, and Napier (1998) maintained that monitoring is essential for performance and productivity in the modern office. It can be used to provide timely work performance feedback in training situations. Moreover, trainees’ online data can facilitate employee development and needs analysis by helping organizations understand who tends to need and use which types of learning opportunities.

Management often argues that the monitoring of employees also encourages proper time delegation, courteous demeanor in interpersonal environments, and increased productivity (Oz, Glass, & Behling, 1999). During training, it may promote accountability, ensuring that employees feel responsible for completing necessary learning activities, even in the absence of an in-person instructor and peers. Research by Brown (2001) has suggested that employees may move through computer-based training rather quickly, skipping critical practice exercises and reducing their knowledge gain as a result. Monitoring e-learners and holding them responsible for their training activities may help curtail this problem.

In short, the reasons for tracking Web-based training activities are compelling; therefore, e-learning software continues to incorporate tracking features. The degree to which employers actually use these data is presently unknown and may matter less than the degree to which employees suspect that their training data are being tracked. Research in other domains (e.g., employee surveys) has suggested that employees hold concerns about data-tracking technologies and are skeptical about the privacy afforded to them online, even when this skepticism is unjustified (Thompson, Surface, Martin, & Sanders, 2003). Thus, even trainees whose employers do not put their software’s tracking capabilities to use may suspect that they are being monitored.

Social Facilitation Theory and the Effects of Electronic Monitoring

To date, there is an unfortunate dearth of research empirically examining how learners react to the perception that their training behaviors are being tracked and accounted for. Moving beyond the training literature, social facilitation theory offers insights into the potential effects of perceived surveillance on e-learners. Social facilitation theory has provided a framework for much of the research in the area of performance monitoring (Aiello & Douthitt, 2001). This theory suggests that the presence of others enhances the performance of simple tasks and worsens the performance of complex tasks (Geen, 1989). For example, a 1973 study by Hunt and Hillery showed that the presence of others reduced the errors produced by individuals learning an easy maze and increased the errors produced by those learning a difficult maze.


Zajonc (1980) has linked this effect to increases in alertness and arousal that result from the uncertainty of another’s behavior. Sanders, Baron, and Moore (1978) indicated that the presence of others serves to distract a performer (or, in the present case, a trainee). Arousal occurs as a result of an overloaded cognitive system, which stems from an inherent conflict between paying attention to others and paying attention to the task at hand (Baron, 1986; Myers, 2005). Consistent with this view, classic work by Pessin and Husband (1933) showed that the presence of a passive other decreases the efficiency with which individuals can memorize nonsense syllables. More recent work by Huguet, Galvaing, Monteil, and Dumas (1999) demonstrated that the presence of others inhibits automatic verbal processing.

Self-presentation has also been implicated in the social facilitation effect. In the presence of others, people may be especially self-attentive in an attempt to conform to norms of behavior (Carver & Scheier, 1981) and present themselves favorably (Bond, 1982). As a result, extra mental resources are required to complete a task when others are aware of one’s performance. Evaluation apprehension is also believed to partially explain the social facilitation effect, though Zajonc (1980) argued that this is not the only reason why the presence of others produces performance effects.

The social facilitation effect occurs even when others are unfamiliar and difficult to see (Guerin & Innes, 1982; Myers, 2005). In fact, others who cannot be seen are thought to produce even greater physiological effects on a performer than those who are visible (Bond & Titus, 1983). The social facilitation framework, therefore, extends beyond in-person monitoring to the domain of computer performance monitoring, wherein electronic surveillance serves the role of “invisible others.”

Research supports this line of reasoning, demonstrating that electronic monitoring diminishes performance unless the task at hand is a simple one. For example, a study by Douthitt and Aiello (2001) showed that monitoring impaired the performance of people working on a complex task involving timed screen images requiring arithmetic calculations. Other research has demonstrated that the presence of computer monitoring decreases the performance of people working on difficult anagrams and has the reverse effect on those tasked with easy anagrams (Davidson & Henderson, 2000). Meanwhile, explicit knowledge that performance is not being monitored has been shown to enhance the performance of people asked to solve 60 five-letter anagrams in 10 min (Aiello & Svec, 1993).

In sum, the literature has theoretically linked electronic monitoring to arousal, affect, and performance. The implications for e-learners are clear. Our first hypothesis is based on the contention that monitoring taxes e-learners’ cognitive systems by dividing their attention between the training material and the awareness that their data will be examined by outside others.


Testing this initial prediction requires a meaningful assessment of cognitive load. Mental workload has been defined as the mental or perceptual cost incurred by a person to achieve a given performance level (Hart & Staveland, 1988), or simply the effort invested in task performance (Braarud, 2001). In a training context, Paas (1992) conceptualized mental effort as the capacity learners allocate to instructional demands.

Time on task is a commonly used, yet contaminated, measure of learner effort (Fisher & Ford, 1998). Unfortunately, however, the self-report alternative to time on task also has problems. Self-report measures of mental workload can be tainted by purposeful response distortion. Moreover, learners who are not particularly self-reflective may have trouble accurately reporting on the application of their own mental resources (Fisher & Ford, 1998).

Rowe, Sibert, and Irwin (1998) highlighted a number of advantages of employing physiological measures to determine mental effort. They suggested that heart rate variability may be used to indicate the point at which a person’s mental capacities to process stimuli are being exceeded. Heart rate variability indexes circumvent many of the concerns associated with self-report measures of mental effort because they do not ask learners to self-reflect.

Physiologically, heart rate variability is a measure of cardiac autonomic function that reflects both sympathetic and parasympathetic nervous system activity, including balances and imbalances (De Vito, Galloway, Nimmo, Maas, & McMurray, 2002; Mussalo et al., 2001). It is determined by mediated beat-to-beat variability and reflects this continuous oscillation around its mean value, thus providing noninvasive data about control of heart rate in real-life conditions (Routledge, Chowdhary, & Townend, 2002). Most heart rate variability measures produce several different indexes of physiological functioning. The index of interest in this study is the very low frequency (VLF) score. VLF, which is found to increase as mental workload increases, reflects sympathetic activity (Metelka, Weinbergova, Opavsky, Salinger, & Ostransky, 1999).
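
Although the ear-clip monitor used in this study reported VLF scores directly, the general computation is worth sketching for readers unfamiliar with frequency-domain heart rate variability indexes. The following is a rough, hypothetical illustration (not the device’s algorithm): it estimates VLF power from a series of beat-to-beat (RR) intervals, using band limits (about 0.003–0.04 Hz) that follow common HRV conventions rather than the authors’ settings.

```python
# Illustrative sketch only: a conventional VLF power estimate from RR intervals.
# The study's monitor computed VLF internally; the band limits and all names
# below are assumptions, not the authors' specification.
import numpy as np
from scipy.interpolate import interp1d
from scipy.signal import welch

def vlf_power(rr_ms, fs=4.0, band=(0.003, 0.04)):
    """Estimate VLF spectral power (ms^2) from RR intervals given in milliseconds."""
    t = np.cumsum(rr_ms) / 1000.0                  # beat times in seconds
    t -= t[0]
    grid = np.arange(0.0, t[-1], 1.0 / fs)         # uniform grid for an FFT-based PSD
    rr_even = interp1d(t, rr_ms, kind="cubic")(grid)
    rr_even -= rr_even.mean()                      # remove the DC component
    freqs, psd = welch(rr_even, fs=fs, nperseg=min(len(rr_even), 1024))
    in_band = (freqs >= band[0]) & (freqs < band[1])
    return np.trapz(psd[in_band], freqs[in_band])  # integrate the PSD over the band

# Hypothetical ~25-min recording: ~850-ms beats plus a slow, VLF-range oscillation
rng = np.random.default_rng(0)
rr = 850 + 30 * np.sin(2 * np.pi * 0.01 * np.arange(1800)) + rng.normal(0, 15, 1800)
print(round(vlf_power(rr), 1))
```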

Although heart rate variability has not yet found its way into mainstream training research, numerous studies outside of the industrial–organizational psychology domain have used it to assess mental workload (e.g., De Vito et al., 2002; Kallio et al., 2000; McMillan, 2002; Mussalo et al., 2001; Routledge et al., 2002). Heart rate variability has been utilized in both laboratory and field settings and has been found to be sensitive to manipulations in task complexity (e.g., Rowe et al., 1998). For example, Aasman, Mulder, and Mulder (1987) examined participants who were asked to press one of two buttons to indicate the presence or absence of a stimulus. Heart rate variability levels were significantly altered when participants were asked to think about other things (e.g., keep a running mental count of memory set items) while performing this button-pressing task. Meanwhile, Croizet et al. (2004) used a measure of heart rate variability to assess the disruptive, heightened mental load that hinders performance in the presence of stereotype threat: a phenomenon said to involve apprehension about being evaluated based on a negative stereotype (Myers, 2005). In short, the literature supports the use of heart rate variability as an index of mental workload in general, and the disruptive mental workload stemming from evaluation apprehension in particular. The present study operationalizes cognitive load accordingly.

Hypothesis 1. Heightened perceptions of electronic monitoring will increase e-learners’ mental workload (i.e., VLF scores).

Notably, monitoring Web-based trainees is expected to produce effects that extend beyond physiological arousal. Social facilitation theory maintains that the presence of others hinders the performance of difficult tasks (Geen, 1989). Much training is presumed to be a challenging activity because, more often than not, the skill or body of knowledge being taught has not yet been mastered (hence, the need for training). Therefore, we predict the following:

Hypothesis 2. Heightened perceptions of electronic monitoring will reduce performance on a post-training skills test.

The social facilitation literature and the preceding hypotheses imply a mediated model, wherein the performance-reducing effects of monitoring occur as a result of learning decrements stemming from a cognitive system that is overloaded during training. This overload should be reflected in heightened VLF scores. It arises from the conflict between attending to others and attending to the training material. Our third prediction tests this model:

Hypothesis 3. Mental workload (VLF scores) will mediate the relationship between heightened perceptions of monitoring and performance on a post-training examination.

The preceding predictions suggest that e-learners obtain less return on their mental investments when they are aware that their training data are being captured online. This may adversely affect their feelings about the Web-based training program. Computer monitoring has been shown to have negative affective consequences (Davidson & Henderson, 2000). E-learners who are monitored, therefore, are expected to react less favorably when asked to express their attitudes toward an online training program.


Hypothesis 4. Heightened perceptions of electronic monitoring will reduce satisfaction with an online training program.

Method

Participants

The participants in the present study were 58 volunteers (27 females, 31 males) from a range of courses at a large southeastern university. The sample was 88% Caucasian, 7% African American, and 5% belonged to other ethnic groups. Participants’ mean age was 21.7 years (SD = 4.8). In terms of Internet exposure, participants varied quite a bit and reported spending an average of 5.28 hr searching for information online each week (SD = 5.19).

Design and Procedure

The independent variable, perceived computer monitoring, had two levels: heightened perceptions (where participants were told that their training activities would be tracked) and control perceptions (where participants were given no information about monitoring). The 58 trainees were randomly assigned to one of the two conditions. The dependent variables were mental workload, performance on a post-training skills test, and satisfaction with the online training program.

Data collection occurred in a laboratory equipped with a desk, a bookshelf, the experimenter’s laptop computer, and a university-registered Intel™ Pentium®-class computer on which Microsoft Internet Explorer was installed. Volunteers participated one at a time.

After arriving at the laboratory, each participant was given an informed consent form. A lightweight heart rate variability monitor, called the Biocom pulse wave sensor M-2001, was then comfortably affixed to the trainee’s left ear. This device was connected to the experimenter’s laptop computer. The experimenter asked participants to report their age, and age data were entered into the computer to ensure an accurate heart rate variability reading.

Next, participants in both conditions were introduced to a pre-task questionnaire that gathered demographic and other data. Trainees were then asked to use their university-issued usernames and personal passwords to log on to “Blackboard,” the university’s Web-based instructional medium. Afterward, they were familiarized with the Web-based training program, which was identical for both the monitored and control groups.

The Web-based training program was designed to teach participants how to locate information (e.g., newspaper articles) on the university library’s Website. It gave trainees the opportunity to read instructions on how to conduct searches and to view screenshots exemplifying proper searches. Practice exercises were included to facilitate learning. The training program was flexible in that it allowed participants to determine the ordering of the modules, how long they wished to spend on each module, and the number of practice exercises they wished to attempt.

Prior to training, participants in the present study were asked to rate their familiarity with the library’s Website on a 5-point scale ranging from 1 (no experience) to 5 (a lot of experience). The average rating was 2.25 (SD = 0.97), and only 6 individuals (10% of the sample) rated their experience levels as above average. Indeed, a needs analysis conducted prior to this study highlighted the necessity of this type of program for the population from which our sample was drawn. Data from a previous study (King, 2003) confirmed that the content was indeed challenging and demonstrated that the program significantly increased students’ online library search skills.

After receiving an introduction to the Web-based training program, those who were assigned to the heightened perceptions condition were informed that their performance in the training program would be tracked. The experimenter stated:

Be aware that your performance on the training program as well as the practice exercises is being closely monitored, tracked, and recorded. Both [name of faculty member] and I will carefully review this information from the other room while you are working.

They were then shown a fictitious report, which illustrated the type of generic data the monitoring program supposedly captured. The experimenter then stated, “This is a generic example of a printout that [name of faculty member] and I receive regarding the type of data the monitoring program captures.” Participants were shown an additional on-screen fictitious report, which illustrated data supposedly captured from a different participant. The experimenter then stated:

Hang on just a second, let me close the report from the person who went before you, then I’ll explain the type of data that are being monitored. Okay. As you can probably see, we are able to track not only how much time you’re spending on the training modules, but also how many practice exercises you get through, as well as your overall success in the practice exercises. These graphs outline your overall success on the training modules. This graph illustrates the amount of time elapsed in each training module. This graph illustrates the practice exercises that have been completed.


It should be noted that these training data were not actually tracked, because of limitations of the software being used. However, these statements were expected to convince those in the experimental condition that their training data were indeed being collected.

The experimenter then initiated the heart rate variability reading, left the laboratory, and returned in 25 min. Upon returning to the laboratory, the experimenter removed the heart rate variability monitor from the participant’s ear and administered the post-task questionnaire and skills test. Afterward, participants were thanked, debriefed, and dismissed. Each session lasted approximately 75 min.

Measured Variables

Manipulation check. We included four items as a manipulation check (see the Appendix). Participants rated their agreement with the items on a 5-point scale ranging from 1 (strongly disagree) to 5 (strongly agree). A sample item is “The experimenter is able to check the computer to verify how many practice exercises I completed.” Responses to these items (α = .60) were averaged. High scores represented assured beliefs that training activities were being monitored.
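
The α = .60 reported here is Cronbach’s alpha, computed from the item-level responses. As a point of reference, a minimal sketch of the computation follows; the 58 × 4 response matrix is simulated, not the study’s data.

```python
# Hedged sketch: Cronbach's alpha for a k-item scale. Only the formula matches
# the reported index; the response matrix below is hypothetical.
import numpy as np

def cronbach_alpha(items):
    """items: (n_respondents, k_items) array of scale responses."""
    k = items.shape[1]
    item_var_sum = items.var(axis=0, ddof=1).sum()  # sum of the item variances
    total_var = items.sum(axis=1).var(ddof=1)       # variance of the total scores
    return (k / (k - 1)) * (1 - item_var_sum / total_var)

rng = np.random.default_rng(1)
latent = rng.normal(3, 1, (58, 1))                  # shared component across items
items = np.clip(np.rint(latent + rng.normal(0, 1, (58, 4))), 1, 5)
print(round(cronbach_alpha(items), 2))
```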

Mental workload. The physiological measure of mental workload was taken via the heart rate variability monitor attached to each participant’s ear. VLF scores were examined. These scores ranged from 9 (implying high heart rate variability and low mental workload) to 3,026 (implying low heart rate variability and high mental workload).

Satisfaction. The post-task measure included three items (α = .60) that asked e-learners to rate their satisfaction with the training program (i.e., whether they were satisfied with, enjoyed, and would take part in another training program like this; see the Appendix) on a 5-point scale ranging from 1 (strongly disagree) to 5 (strongly agree). Responses to these items were averaged, and high scores represented favorable reactions.

Alliger, Tannenbaum, Bennett, Traver, and Shotland (1997) proposed a multi-level framework for researchers and practitioners concerned with assessing training outcomes. The satisfaction items in this study can best be characterized as reactions as affect. As this is a component of the training criterion framework proposed by Alliger et al., the inclusion of this outcome variable helps to facilitate an examination of the practical implications of the experimental manipulation.

Post-training skills test performance. Our next measure was designed to assess behavior/skill demonstration, which is another important element of Alliger et al.’s (1997) training criterion framework. The skills test included four questions directly related to the training content (see the Appendix). Two questions assessed e-learners’ ability to determine which journals the library carried in hard copy, and two additional questions measured their ability to locate newspaper articles from the library’s Website. For example, one item asked participants to find a newspaper article online and write down a specific fact contained in the first paragraph of the article. Each question had a verifiable answer. Answers were assigned a score of either 0 (incorrect) or 1 (correct). These scores were then summed.

Self-reported tension. For exploratory purposes, the Profile of Mood States (POMS) scale (Shacham, 1983) was included on the post-task questionnaire as well. Of particular interest was the six-item tension subscale (α = .70), which asked participants to indicate their present mood states on a 5-point scale ranging from 0 (not at all) to 4 (extremely). Sample items are “tense,” “uneasy,” and “anxious.” Responses to the six items were summed.

Results

Preliminary analyses were conducted to determine if there were any pre-existing differences between the two conditions in terms of participant age and sex. The results indicated that age did not vary significantly between the two conditions, t(56) = 0.84, p = .41. Similarly, sex was not confounded with the experimental manipulation, χ²(1, N = 58) = 0.07, p = .79.

Next, the measured variables were checked for normality. With one exception, the skewness and kurtosis values were close enough to 0 to justify use in analyses assuming normally distributed data. The positively skewed VLF scores were the exception. This was addressed via a logarithmic transformation, which reduced the skewness appropriately. The transformed VLF data were used in all subsequent analyses.
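
A minimal sketch of this screening step follows, on simulated VLF values. The base-10 logarithm is an assumption on our part, though it is consistent with the “VLF log” means of roughly 2.3 to 2.7 reported in Table 2 for raw scores spanning 9 to 3,026.

```python
# Hedged sketch of the normality check and log transform described above.
# The VLF values are simulated; the base-10 log is an assumption.
import numpy as np
from scipy.stats import skew, kurtosis

rng = np.random.default_rng(2)
vlf = rng.lognormal(mean=5.5, sigma=1.0, size=58)   # positively skewed raw scores

print("raw skew:", round(skew(vlf), 2), "kurtosis:", round(kurtosis(vlf), 2))
vlf_log = np.log10(vlf)                             # log transform pulls in the tail
print("log skew:", round(skew(vlf_log), 2))
```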

Table 1 shows the correlations among the study variables of interest. Table 2 shows the results of the one-way MANCOVA and follow-up univariate ANCOVAs that were computed to examine the effects of our manipulation on perceptions of monitoring (i.e., manipulation check), mental workload, performance, and satisfaction. The dependent measures were adjusted for three covariates: Internet exposure, library experience, and experience with the instructional medium (i.e., Blackboard).

The three covariates were measured via the questionnaire administered prior to training. In that questionnaire, trainees were asked to report the amount of time they spent searching for information on the Internet each week (i.e., Internet exposure). They were also asked to rate their familiarity with the training platform (i.e., Blackboard experience) and the library Website on which the training content was based (i.e., library experience).


Newer Internet users are less comfortable and are more likely to encounter stress-inducing problems online (Eastin & LaRose, 2000). It was expected that people who use the Internet a great deal would naturally react to the training program more favorably than would those with less exposure to the Internet. Likewise, experience with the training platform and the library’s Website was expected to influence our dependent measures. Thus, Internet exposure, library experience, and Blackboard experience were included as covariates because of their logical linkages with our dependent measures. Adjusting for these scores enabled a more powerful look at the effects of our manipulation by minimizing error variance (Tabachnick & Fidell, 2001).
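
The article does not report which software performed these analyses. As a rough illustration, one of the follow-up univariate ANCOVAs could be specified as below, using pandas and statsmodels; the data frame, column names, and simulated values are hypothetical, with group sizes, means, and standard deviations borrowed from Table 2.

```python
# Hedged sketch: a follow-up univariate ANCOVA (condition effect on one DV,
# adjusting for the three covariates). All column names are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

def ancova(df, dv):
    model = smf.ols(
        f"{dv} ~ C(condition) + internet_hours + library_exp + blackboard_exp",
        data=df,
    ).fit()
    return sm.stats.anova_lm(model, typ=3)          # Type III sums of squares

# Simulated stand-in data (n = 28 per condition, as in Table 2)
rng = np.random.default_rng(3)
df = pd.DataFrame({
    "condition": np.repeat(["control", "monitored"], 28),
    "vlf_log": np.r_[rng.normal(2.31, 0.39, 28), rng.normal(2.71, 0.29, 28)],
    "internet_hours": rng.normal(5.3, 5.2, 56).clip(0),
    "library_exp": rng.integers(1, 6, 56),
    "blackboard_exp": rng.integers(1, 6, 56),
})
print(ancova(df, "vlf_log"))
```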

As shown in Table 2, the manipulation-check data included in the MANCOVA indicated that trainees in the heightened awareness (i.e., treatment) condition were more convinced of the presence of electronic monitoring than were those in the control condition. This difference was statistically significant, thereby confirming that the manipulation operated as intended.

Hypothesis 1 predicted that heightened awareness of electronic monitoring would increase e-learners’ mental workload (i.e., VLF scores). As Table 2 indicates, the average VLF scores produced by trainees in the heightened awareness condition significantly exceeded those produced by trainees in the control condition. Therefore, the results support Hypothesis 1.

Table 1

Correlations Among Study Variables

Variable                                           1       2       3       4
1. Monitoring condition (control vs.
   heightened perceptions)                         —
2. Mental workload (heart rate
   variability, VLF log)                          .50**    —
3. Post-training skills test performance         -.29*   -.33*     —
4. Satisfaction                                  -.27*   -.04     .24      —
5. Self-reported tension                         -.15     .06     .17     .11

Note. VLF = very low frequency. The correlations shown here were computed after controlling for pre-training Internet exposure, library experience, and experience with the training platform (i.e., Blackboard). Monitoring condition was dummy coded as 1 for the control condition (in which participants were given no information about monitoring) and 2 for the heightened awareness condition (in which participants were told that their training activities would be tracked).
*p < .05 (two-tailed). **p < .01 (two-tailed).


Hypothesis 2 anticipated a reduction in skill attainment as a result of electronic monitoring. As shown in Table 2, trainees in the control condition significantly outperformed those in the heightened awareness condition. Thus, Hypothesis 2 was also supported.

Hypothesis 3 linked the first two predictions, proposing that mental workload mediates the relationship between heightened perceptions of monitoring and post-training performance. As outlined by Baron and Kenny (1986), four conditions must be present to demonstrate full mediation. For the model tested in this study, these requirements were (a) heightened perceptions of monitoring must predict post-training performance; (b) heightened perceptions of monitoring must predict mental workload; (c) mental workload must predict post-training performance; and (d) the effect of heightened perceptions of monitoring, controlling for mental workload, must be 0.

To maintain consistency with prior analyses, all correlations examined in Hypothesis 3 were computed after controlling for the three pre-training covariates described earlier.

Table 2

Differences Between E-learners in the Monitored and Control Conditions

                                          Control (N = 28ᵃ)   Monitored (N = 28ᵃ)
                                            M       SD          M       SD      F(1, 51)ᵇ      p      ηp²
Manipulation check: Perceived presence
  of monitoring                            3.21    0.71        3.68    0.66        6.63      .013    .115
Mental workload (heart rate
  variability, VLF log)                    2.31    0.39        2.71    0.29       16.99     <.001    .250
Post-training skills test performance      0.63    0.29        0.51    0.30        4.76      .034    .085
Satisfaction                               4.01    0.68        3.76    0.63        4.08      .049    .074

ᵃOne case was dropped per condition because of missing data on a covariate. ᵇA one-way MANCOVA (controlling for pre-training Internet exposure, library experience, and experience with the instructional medium, Blackboard) was conducted, along with follow-up univariate ANCOVAs to examine the manipulation’s effects on perceived monitoring (i.e., manipulation check), mental workload, performance, and satisfaction.


The partial correlations computed to test requirements (a), (b), and (d) involved the dichotomous condition variable, which was dummy coded (1 = control condition; 2 = heightened awareness condition). Accordingly, these partial correlations are equivalent to ANCOVAs. As shown in Table 1, the results reveal significant correlations between heightened perceptions of monitoring and post-training performance (r = -.29, p = .034); heightened perceptions of monitoring and mental workload (r = .50, p < .001); and mental workload and post-training performance (r = -.33, p = .017), thereby satisfying the first three mediation requirements.

Next, we examined requirement (d): the effect of the manipulation after controlling for mental workload. A nonsignificant relationship was found between heightened perceptions of monitoring and post-training performance when mental workload was extracted (r = -.16, p = .26). Thus, it appears that the effect of heightened perceptions of monitoring on post-training performance was explained by the fact that the perception of being tracked distracted trainees, increased their mental workload levels, decreased the amount they learned, and, therefore, impaired their performance on the post-training skills test.
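
Because each of these Baron and Kenny steps reduces to a partial correlation controlling for the covariates, the procedure can be sketched via residualization. The helper below is a hypothetical reconstruction run on simulated data that merely mimics the direction of the reported effects; none of the variable names come from the study’s materials.

```python
# Hedged sketch: partial correlation of x and y controlling for other variables,
# computed from OLS residuals; mirrors the mediation steps described above.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

def partial_r(df, x, y, controls):
    rhs = " + ".join(controls)
    rx = smf.ols(f"{x} ~ {rhs}", data=df).fit().resid   # x with controls removed
    ry = smf.ols(f"{y} ~ {rhs}", data=df).fit().resid   # y with controls removed
    return np.corrcoef(rx, ry)[0, 1]

# Simulated data echoing the paper's coding (1 = control, 2 = monitored)
rng = np.random.default_rng(4)
n = 56
df = pd.DataFrame({
    "condition": np.repeat([1, 2], n // 2),
    "internet_hours": rng.normal(5.3, 5.2, n).clip(0),
    "library_exp": rng.integers(1, 6, n),
    "blackboard_exp": rng.integers(1, 6, n),
})
df["vlf_log"] = 1.9 + 0.4 * df["condition"] + rng.normal(0, 0.3, n)
df["skill_test"] = 0.9 - 0.15 * df["vlf_log"] + rng.normal(0, 0.25, n)

covs = ["internet_hours", "library_exp", "blackboard_exp"]
print(partial_r(df, "condition", "skill_test", covs))                # step (a)
print(partial_r(df, "condition", "skill_test", covs + ["vlf_log"]))  # step (d)
```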

Our final hypothesis predicted that an awareness of electronic monitoring would reduce satisfaction with the online training program. Table 2 shows the average scores produced by two groups of e-learners who were asked to rate their satisfaction with the training program’s value. As can be seen in Table 2, the data support Hypothesis 4, indicating that the awareness of electronic monitoring adversely affected trainees’ reactions to the program.

Finally, self-reported tension data were examined for exploratory purposes. Post hoc analyses tested whether increased mental workload (VLF) levels experienced during training predicted tension reported after training. The results show no significant relationship between VLF and post-training tension scores (r = .08, p = .575).

Subsequent exploration reveals that this nonsignificant relationship did not characterize both of the study conditions. When examining the data produced by the monitored participants only, VLF significantly predicted post-training tension (r = .56, p < .001). Conversely, VLF did not predict post-training tension among trainees in the control group (r = -.06, p = .753). A moderated regression analysis was conducted to test this interaction. Variables were first mean-centered, per recommendations by Howell (2002). The regression analysis reveals an R² value of .17, which was statistically significant, F(3, 54) = 3.60, p = .019. The interaction term was significant (b = .34), t(54) = 2.62, p = .011, suggesting that the heart rate variability experienced during training predicted post-training tension reported by participants in the monitored group, but not the control group.
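
A minimal sketch of a mean-centered moderated regression of this form follows, again on simulated data with hypothetical variable names; the coefficient of interest is the condition × VLF interaction term.

```python
# Hedged sketch: mean-centered moderated regression testing whether condition
# moderates the VLF-tension link. Data are simulated, not the study's.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(5)
n = 58
cond = np.repeat([0.0, 1.0], n // 2)                 # 0 = control, 1 = monitored
vlf = 2.3 + 0.4 * cond + rng.normal(0, 0.35, n)
tension = 4 + 3.0 * cond * (vlf - vlf.mean()) + rng.normal(0, 2, n)

df = pd.DataFrame({"cond": cond, "vlf": vlf, "tension": tension})
df["cond_c"] = df["cond"] - df["cond"].mean()        # mean-center, per Howell (2002)
df["vlf_c"] = df["vlf"] - df["vlf"].mean()
model = smf.ols("tension ~ cond_c * vlf_c", data=df).fit()  # '*' adds the interaction
print(model.summary().tables[1])                     # inspect the cond_c:vlf_c row
```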


Discussion

The present study used a theory-driven framework to examine how e-learners react to the awareness of online monitoring, thereby contributing new knowledge to the training literature. More generally, this research also helps to expand what is known about social facilitation and computer surveillance by testing the changes in physiological arousal that result from Web-based monitoring and demonstrating the performance consequences of this physiological mediator.

The findings reveal that heightened awareness of monitoring hampered performance on a post-training skills test, presumably because this awareness hindered learning during training. Learning was obstructed as a result of increases in mental workload (i.e., heightened sympathetic nervous system activity/VLF). In the end, e-learners were less satisfied with the training program when monitoring was made salient, perhaps because they received fewer returns on their mental investments.

Practical Implications

The implications of these findings for applied practice must be considered. Clearly, the online evaluation of learner progress offers opportunities to improve instructional effectiveness at the individual, unit, and organizational levels. For example, module completion times could be used to gauge the degree of difficulty typically associated with various training segments. Armed with this knowledge, organizations could ensure adequate amounts of “release time” for trainees working on different topics. Moreover, automatically linking training data with demographic data and other information from the organization’s human resource information system may facilitate needs analysis by enabling the organization to identify better where knowledge deficits are likely to exist. Grouping training data (e.g., scores on post-tests or overall module completion status) at the unit and organization levels could provide useful proficiency snapshots as well.

The question is not whether the data collected online can inform training design and delivery; rather, the question is whether the organizational benefits of collecting these data outweigh the costs to e-learners. If the drawbacks of online monitoring overshadow the advantages, employers should reconsider capturing trainees’ learning data electronically. At the same time, researchers should search for ways to minimize or reverse the negative consequences of online surveillance to allow organizations to capitalize on the possibilities afforded by contemporary training technologies.

This study begins to shed light on the aforementioned cost–benefit ratio by examining the effects of perceived monitoring on trainees. The fact that participants’ post-training test scores were significantly degraded by the perception of monitoring is noteworthy. Additionally, practitioners should not overlook the detrimental effect of monitoring on e-learners’ mental workload levels. Coupled with the performance reduction, this suggests a reduction in learning efficiency.

Finally, the adverse satisfaction effect must also be taken into account. This finding is particularly important in light of Alliger et al.’s (1997) assertion that affective reactions to training can influence important variables, such as later training attendance, word-of-mouth advertising, and subsequent training funding.

Another point of practical interest can be gleaned from Table 2. Although the monitored participants were especially inclined to believe that their work was being tracked, those in the control condition were far from convinced that their work was unmonitored. On a 5-point scale, where higher scores represent stronger beliefs in the presence of monitoring, the control trainees averaged above the uncertain midpoint. This is noteworthy for two reasons. First, it suggests that the effects uncovered in this study may underestimate the impact of perceived monitoring on trainees. Perhaps e-learners who are convinced that their work is unmonitored would have exhibited even less physiological arousal, more learning, and more positive reactions than the control participants in our program.

Second, this finding suggests that people may be naturally skeptical about the degree of privacy afforded by a Web-based learning opportunity. To some degree, asking participants to rate their agreement with items such as “I feel certain the experimenters are able to check the computer to verify how many practice exercises I completed” probably primed this skepticism. However, with online surveillance techniques increasing in popularity (Alge et al., 2004), this uncertainty may be more and more commonplace in the days to come. Thus, employers who opt to avoid monitoring e-learners in order to circumvent the negative side effects uncovered in this study may need to go to extra lengths to convince trainees that their data are not being tracked. As suggested by the present study, simply saying nothing about the presence or absence of monitoring does not guarantee feelings of privacy. This issue is paramount because it is the perception of monitoring (regardless of the reality) that produces the effects shown in this study.

Limitations and Future Research

This study’s results and conclusions should be interpreted in the context of several notable limitations. The average age of our sample was 22 years. We do not know if these results characterize older individuals, who might differ from younger people in relevant ways. For example, older individuals may feel relatively uncomfortable with technologically mediated instruction overall (Brown, 2001) and could, therefore, react to the monitoring of Web-based training activities differently than do their younger counterparts. Moreover, it is unclear whether these findings generalize to trainees in the workplace. Field research examining e-learners in organizational settings would help to address these external validity issues.

It is also important to acknowledge that our measures were brief, in some cases producing reliability indexes of .60. This is lower than desired. Improving the internal consistency of scales used to assess the outcomes of electronic monitoring should be a priority during follow-up research examining the issues uncovered in the present study.

To some degree, all participants in this study were monitored, as a result of the presence of the ear clip measuring heart rate variability. Future work comparing the effects of perceived monitoring on the performance and attitudes of Web-based trainees who are not connected to a heart rate variability monitor would be informative. By its very design, such research will not be able to look at group differences in heart rate variability; therefore, alternative measures of mental workload are needed to re-examine the present findings in the absence of heart rate variability equipment. Follow-up work comparing trainees who are and are not connected to a heart rate variability monitor would shed light on any effects the heart rate variability monitor used in this study may have had.

Related to the preceding limitation, two points that support the validity of this research may be worth mentioning. First, the design of this study held heart rate variability monitoring constant across the two conditions. Therefore, the effects demonstrated in this research were clearly a function of increased awareness of monitoring induced by the experimental manipulation. Because heart rate variability monitoring was not confounded with our experimental manipulation, the group differences uncovered in this study do not reflect the combined effects of heart rate variability monitoring and perceived computer monitoring; rather, they simply reflect the effects of perceived computer monitoring on trainees.

Second, the presence of the heart rate variability clip was not expected to unduly influence the perceived monitoring scores shown in Table 2 because this composite consisted of pointed questions about the trainer’s ability to check the computer for verification of practice exercises and the like. Answers to such questions are not likely to be affected greatly by the presence of a heart rate variability monitor.

On a different note, the exploratory analyses in this study reveal that the mental workload or VLF scores produced during training predicted later tension levels experienced by monitored trainees, but not control trainees.


This finding warrants further consideration. Within the monitored condition, the link between VLF and tension was presumably mediated by increases in arousal, distraction, and perhaps evaluation apprehension. Monitored trainees’ heightened VLF levels were thus assumed to reflect the extra mental resources required because of self-presentation concerns and the conflict between simultaneously attending to others and the training material. This may help to explain the association between VLF and the post-training tension experienced by participants in the monitored condition. Conversely, variation in the mental workload (i.e., VLF scores) experienced by trainees in the control group may have occurred for other reasons that were less likely to lead to tension. Future research pinpointing the reason for this interaction would be informative.

While the health implications of the findings from this research are unknown, it is worth noting that the VLF component of heart rate variability reflects sympathetic activity, which is associated with the development and maintenance of hypertension (McCraty, Atkinson, & Tomasino, 2007) and chronic heart failure (Andreas, Bingeli, Mohaesi, Luscher, & Noll, 2003). Accordingly, heart rate variability is viewed as a prognostic measure in people with hypertension, coronary heart disease, and heart failure; and it has predictive value for morbidity and mortality among healthy adults as well (De Vito et al., 2002; Mussalo et al., 2001; Stein & Kleiger, 1999). Though training is not an ongoing event that would be expected to increase VLF on an ongoing basis, the perception of monitoring during training may be part of a more generalized belief regarding privacy in workplace computing. People who maintain ongoing beliefs that they are being monitored on their computers at work may suffer in the long run. Clearly, more research investigating the long-term health consequences of monitoring is imperative.

Finally, research examining moderators of the effects uncovered in this study would be worthwhile. For example, employees’ perceptions of organizational justice (e.g., justifications for monitoring and knowledge of performance from monitoring) can influence their views of the fairness of electronic performance monitoring (Stanton, 2000). Applying this framework to the online training domain, it is possible that e-learners will react less negatively to online monitoring when they perceive it as fair.

The salience of monitoring is another possible moderator worth considering, because of the invisible nature of monitoring and the potential for organizations to increase or decrease the number of cues indicating that training behavior is being observed.³ Salience was discussed in a review of social facilitation by Aiello and Douthitt (2001), but the effects were not entirely straightforward.

³We thank an anonymous reviewer for suggesting this idea.


On the one hand, arousal can be heightened when the observer is not visible, as noted earlier (Guerin & Innes, 1982). This finding seems quite applicable to electronic monitoring. However, as Stanton and Barnes-Farrell (1996) found, electronically monitored study participants for whom monitoring was more salient felt less control over their work. In the case of online training, future research should examine whether controlling the salience of monitoring is an effective strategy for influencing training processes and outcomes of interest.

Online training packages can be designed to include tools for tracking employees’ progress through various modules. There is a distinct need to understand better how perceptions of this form of surveillance affect workers as they are trying to acquire new knowledge and skills. Findings from the present study provide initial evidence that an awareness of monitoring serves to tax e-learners mentally in ways that do not contribute to their performance and satisfaction. Thus, organizations should proceed with caution when considering the implementation of electronic monitoring during Web-based training. Moreover, employers may benefit from strategies that assure workers of the absence of monitoring when the surveillance tools included in e-learning packages are not being utilized.

References

Aasman, J., Mulder, G., & Mulder, L. J. M. (1987). Operator effort and the measurement of heart-rate variability. Human Factors, 29, 161–170.

Aiello, J. R., & Douthitt, E. A. (2001). Social facilitation from Triplett to electronic performance monitoring. Group Dynamics: Theory, Research, and Practice, 5, 163–180.

Aiello, J. R., & Svec, C. M. (1993). Computer monitoring of work performance: Extending the social facilitation framework to electronic presence. Journal of Applied Social Psychology, 23, 537–548.

Alge, B. J., Ballinger, G. A., & Green, S. G. (2004). Remote control: Predictors of electronic monitoring intensity and secrecy. Personnel Psychology, 57, 377–410.

Alge, B. J., Ballinger, G. A., Tangirala, S., & Oakley, J. L. (2006). Information privacy in organizations: Empowering creative and extrarole performance. Journal of Applied Psychology, 91, 221–232.

Alliger, G. M., Tannenbaum, S. I., Bennett, W., Jr., Traver, H., & Shotland, A. (1997). A meta-analysis of the relations among training criteria. Personnel Psychology, 50, 341–358.

Andreas, S., Bingeli, C., Mohaesi, P., Luscher, T. F., & Noll, G. (2003). Nasal oxygen and muscle sympathetic nerve activity in heart failure. Chest, 123, 366–371.

Baron, R. W. (1986). Distraction-conflict theory: Progress and problems. In L. Berkowitz (Ed.), Advances in experimental social psychology (Vol. 19, pp. 1–39). Orlando, FL: Academic Press.

Baron, R., & Kenny, D. (1986). The moderator–mediator variable distinction in social psychological research: Conceptual, strategic, and statistical considerations. Journal of Personality and Social Psychology, 51, 1173–1182.

Bond, C. F., Jr. (1982). Social facilitation: A self-presentational view. Journal of Personality and Social Psychology, 42, 1042–1050.

Bond, C. F., Jr., & Titus, L. J. (1983). Social facilitation: A meta-analysis of 241 studies. Psychological Bulletin, 94, 265–292.

Braarud, P. O. (2001). Subjective task complexity and subjective workload: Criterion validity for complex team tasks. International Journal of Cognitive Ergonomics, 5, 261–273.

Brown, K. G. (2001). Using computers to deliver training: Which employees learn and why? Personnel Psychology, 54, 271–296.

Carver, C. S., & Scheier, M. F. (1981). Attention and self-regulation: A control-theory approach to human behavior. New York: Springer-Verlag.

Croizet, J., Després, G., Gauzins, M., Huguet, P., Leyens, J., & Méot, A. (2004). Stereotype threat undermines intellectual performance by triggering a disruptive mental load. Personality and Social Psychology Bulletin, 30, 721–731.

Davidson, R., & Henderson, R. (2000). Electronic performance monitoring: A laboratory investigation of the influence of monitoring and difficulty on task performance, mood state, and self-reported stress levels. Journal of Applied Social Psychology, 30, 906–920.

De Vito, G., Galloway, S. D. R., Nimmo, M. A., Maas, P., & McMurray, J. J. V. (2002). Effects of central sympathetic inhibition on heart rate variability during steady-state exercise in healthy humans. Clinical Physiology and Functional Imaging, 22, 32–38.

Douthitt, E. A., & Aiello, J. R. (2001). The role of participation and control in the effects of computer monitoring on fairness perceptions, task satisfaction, and performance. Journal of Applied Psychology, 86, 867–874.

Eastin, M. S., & LaRose, R. (2000). Internet self-efficacy and the psychology of the digital divide. Journal of Computer-Mediated Communication, 6.

Fisher, S. L., & Ford, J. K. (1998). Differential effects of learner effort and goal orientation on two learning outcomes. Personnel Psychology, 51, 397–419.

Geen, R. G. (1989). Alternative conceptions of social facilitation. In P. B. Paulus (Ed.), Psychology of group influence (pp. 15–51). Hillsdale, NJ: Lawrence Erlbaum.

Guerin, B., & Innes, J. M. (1982). Social facilitation and social monitoring: A new look at Zajonc’s mere presence hypothesis. British Journal of Social Psychology, 21, 7–18.

Hart, S. G., & Staveland, L. E. (1988). Development of NASA-TLX (Task Load Index): Results of empirical and theoretical research. In P. A. Hancock & N. Meshkati (Eds.), Human mental workload (pp. 139–183). New York: Elsevier Science Publishers.

Henderson, R., Mahar, D., Saliba, A., Deane, F., & Napier, R. (1998). Electronic monitoring systems: An examination of physiological activity and task performance within a simulated keystroke security and electronic performance monitoring system. International Journal of Human-Computer Studies, 48, 143–157.

Howell, D. C. (2002). Statistical methods for psychology (5th ed.). Pacific Grove, CA: Duxbury/Thomson Learning.

Huguet, P., Galvaing, M. P., Monteil, J. M., & Dumas, F. (1999). Social presence effects in the Stroop task: Further evidence for an attentional view of social facilitation. Journal of Personality and Social Psychology, 77, 1011–1025.

Hunt, P. J., & Hillery, J. M. (1973). Social facilitation in a coaction setting: An examination of the effects over learning trials. Journal of Experimental Social Psychology, 9, 563–571.

Kallio, M., Haapaniemi, T., Turkka, J., Suominen, K., Tolonen, U., Sotaniemi, K., et al. (2000). Heart rate variability in patients with untreated Parkinson’s disease. European Journal of Neurology, 7, 667–672.

King, J. L. (2003). The role of goal setting and Internet self-efficacy during Web-based training. Unpublished master’s thesis, East Carolina University, Greenville, NC.

McCraty, R., Atkinson, M., & Tomasino, D. (2007). Summary of impact of a workplace stress reduction program on blood pressure and emotional health in hypertensive employees. Retrieved December 20, 2007, from www.heartmath.org/research/science-of-the-heart/soh_54.html

McMillan, D. E. (2002). Interpreting heart rate variability sleep/wake patterns in cardiac patients. Journal of Cardiovascular Nursing, 17, 69–81.

McNall, L. A., & Roch, S. G. (2007). Effects of electronic monitoring types on perceptions of procedural justice, interpersonal justice, and privacy. Journal of Applied Social Psychology, 37, 658–682.

Metelka, R., Weinbergova, O., Opavsky, J., Salinger, J., & Ostransky, J. (1999). Short-term heart rate variability changes after exercise training in subjects following myocardial infarction. Acta Universitatis Palackianae Olomucensis Facultatis Medicae, 142, 79–82.

Mussalo, H., Vanninen, E., Ikaheimo, R., Laitinen, T., Laakso, M., Lansimies, E., et al. (2001). Heart rate variability and its determinants in patients with severe or mild essential hypertension. Clinical Physiology, 21, 594–604.

Myers, D. G. (2005). Social psychology (8th ed.). Boston: McGraw-Hill.

Oz, E., Glass, R., & Behling, R. (1999). Electronic workplace monitoring: What employees think. Omega: The International Journal of Management Science, 27, 167–177.

Paas, F. G. W. C. (1992). Training strategies for attaining transfer of problem-solving skill in statistics: A cognitive-load approach. Journal of Educational Psychology, 84, 429–434.

Pessin, J., & Husband, R. W. (1933). Effects of social stimulation on human maze learning. Journal of Abnormal and Social Psychology, 28, 148–154.

Routledge, H. C., Chowdhary, S., & Townend, J. N. (2002). Heart rate variability: A therapeutic target? Journal of Clinical Pharmacy and Therapeutics, 27, 85–92.

Rowe, D. W., Sibert, J., & Irwin, D. (1998). Heart rate variability: Indicator of user state as an aid to human–computer interaction. Proceedings of the CHI 1998 Conference on Human Factors in Computing Systems, 480–487.

Sanders, G. S., Baron, R. S., & Moore, D. L. (1978). Distraction and social comparison as mediators of social facilitation effects. Journal of Experimental Social Psychology, 14, 291–303.

Shacham, S. (1983). A shortened version of the Profile of Mood States. Journal of Personality Assessment, 47, 305–306.

Stanton, J. M. (2000). Traditional and electronic monitoring from an organizational justice perspective. Journal of Business and Psychology, 15, 129–147.

Stanton, J. M., & Barnes-Farrell, J. L. (1996). Effects of electronic performance monitoring on personal control, task satisfaction, and task performance. Journal of Applied Psychology, 81, 738–745.

Stein, P. K., & Kleiger, R. E. (1999). Insights from the study of heart rate variability. Annual Review of Medicine, 50, 249–261.

Tabachnick, B. G., & Fidell, L. S. (2001). Using multivariate statistics (4th ed.). Boston: Allyn & Bacon.

Thompson, L. F., Surface, E. A., Martin, D. L., & Sanders, M. G. (2003). From paper to pixels: Moving personnel surveys to the Web. Personnel Psychology, 56, 197–227.

Wells, D. L., Moorman, R. H., & Werner, J. M. (2007). The impact of the perceived purpose of electronic performance monitoring on an array of attitudinal variables. Human Resource Development Quarterly, 18, 121–138.

Zajonc, R. B. (1980). Compresence. In P. B. Paulus (Ed.), Psychology of group influence (pp. 35–60). Hillsdale, NJ: Lawrence Erlbaum.


Appendix

Questionnaire and Test Items

(Questionnaire and test items that are not published elsewhere in the literature are presented here.)

Manipulation Check Itemsᵃ

1. I am certain the computer was recording my progress through the Web-based training program.
2. I feel certain the experimenters are able to check the computer to verify how many practice exercises I completed.
3. I am sure the experimenters have a record of how much time it took me to complete each training module.
4. While I was working, at least one experimenter was observing my progress through the training program.

Satisfaction Itemsᵃ

1. I am satisfied with the Web-based Joyner Library training program.
2. Overall, I enjoyed this Web-based training.
3. If given the opportunity, I would take part in another Web-based training program.

Post-Training Skills Assessmentᵇ

1. Erica Goode published an article for the New York Times on October 1, 2002, titled “Deflating self-esteem’s role in society’s ills.” Find the full-text article online. According to the article, what are two examples of social ills that low self-esteem is to blame for? ______________
2. The Toronto Star published an article on December 20, 2000, titled “Politics Not Meant for the Simple Folk.” Find the full-text article online. According to the article, on what date was the Toronto Star’s “politics-free edition” supposed to occur? ______________
3. Is a hard copy of the journal Journal of Experimental Education, Volume 33, available in Joyner Library? ______________
4. Is a hard copy of the journal American Journal of Dance Therapy, Volume 8, available in Joyner Library? ______________

ᵃRated on a 5-point scale ranging from 1 (strongly disagree) to 5 (strongly agree). ᵇFill-in-the-blank questions.
