

Computers in Human Behavior 24 (2008) 930–939
www.elsevier.com/locate/comphumbeh
doi:10.1016/j.chb.2007.02.012

Online study tools: College student preference versus impact on achievement

Genevieve Marie Johnson

Department of Psychology, Grant MacEwan College, 10700-104 Avenue, Edmonton, Canada T5J 4S2

Available online 3 April 2007

Abstract

Forty-eight college students participated in an ABAB analysis; the A condition was online study groups and the B condition was online practice tests. Students prepared for two in-class examinations under the A condition and two in-class examinations under the B condition. Based on Bloom's taxonomy, all examinations contained items that assessed student mastery of course content in terms of: (1) knowledge, (2) comprehension, (3) application, and (4) synthesis. Ten questionnaire items established that participating students preferred online practice tests over online study groups. Such preference, however, was not significantly related to any measure of academic achievement. While small sample size renders generalization of findings tenuous, the results of the investigation suggest that various online study tools may have differential effectiveness for knowledge, comprehension, application, and synthesis instructional objectives. Although student preference is an important consideration, instructors should select online study tools on the basis of established learning benefits.
© 2007 Elsevier Ltd. All rights reserved.

Keywords: Online study tools; Online study; Online study groups; Online quizzes

1. Introduction

Instructional applications of computer technology frequently augment college student learning and are believed to support academic achievement (Byers, 1999; Crook, 2001; Grabe & Sigler, 2001; Hutchins, 2003; Johnson, 2006a; Smith et al., 2000). Practice tests are particularly popular forms of online support, apparently helping students evaluate their learning and focus study effort accordingly (Fritz, 2003; Haberyan, 2003; Itoh & Hannon, 2002; Jensen, Johnson, & Johnson, 2002; Jensen, Moore, & Hatch, 2002; Killedar, 2002). Derouza and Fleming (2003) reported that students who took online practice tests academically outperformed students who took the same tests in pencil-and-paper format. Not surprisingly, given convenience and perceived benefits, online or otherwise automated practice tests characteristically accompany introductory undergraduate textbooks (for example, refer to McGraw-Hill Higher Education, 2004).

In addition to practice tests, online study groups are increasingly popular forms of support for college student learning (Johnson, Howell, & Code, 2005; Miller & Lu, 2003; Shale, 2002; Tait & Mills, 2003). Crook (2002) suggested that "new technology may become a lever on what is otherwise a failure by students to take advantage of collaborative opportunities" (p. 66) and reported that when students were assigned to an online study group, "71% said that it was helpful or very helpful" (p. 75). Johnson and Johnson (2005) compared the effectiveness of two study strategies in cooperative online groups: reciprocal peer questions and mnemonic devices. While there was no significant difference in academic achievement between students in the two online study groups, "students in the reciprocal peer questioning group reported higher levels of satisfaction with the virtual study experience" (p. 2025).

Although online learning tools such as practice tests and study groups are commonly available to college students, the academic benefits of such support are not clearly established (Brothen & Wambach, 2001; Herring, 1999; Johnson, 2006a; McConnell, 2005; Perlman, 2003). Bol and Hacker (2001) reported that graduate students who used practice tests as study strategies scored significantly lower on the midterm examination in a research methods course than students without access to practice tests. Johnson (2006b) found that the highest achieving students were the most critical of cooperative online study groups. Such incongruence between research and practice may be the result of simplistic research designs that fail to capture the complexity of instructional phenomena. Instructional objectives, for example, may influence student demonstration of learning. A fundamental consideration in evaluating online study tools is the extent to which effectiveness is mediated by instructional objectives. That is, the effectiveness of an online study tool may depend upon the nature of course learning objectives and the manner in which student mastery of such objectives is evaluated.

2. Instructional objectives and evaluation of student learning

Bloom's taxonomy of instructional objectives has guided educators for almost 50 years and is considered among the most significant educational contributions of the 20th century (Anderson & Sosniak, 1994). The six originally proposed (Bloom, Engelhart, Furst, Hill, & Krathwohl, 1956) cognitive instructional objectives include: knowledge (i.e., remembering or recognizing), comprehension (i.e., understanding), application (i.e., using a general concept to solve a specific problem), analysis (i.e., understanding the components of a larger process or concept), synthesis (i.e., combining ideas and information), and evaluation (i.e., judging value or quality). The taxonomy provides a "complexity hierarchy that orders cognitive processes from simply remembering to higher order critical and creative thinking" (Noble, 2004, p. 194) and is commonly used by instructors to ensure that learning objectives include a variety of cognitive skills.


Because multiple choice items are easily scored and can assess a range of content, college student learning is commonly measured with this test format (Williams & Clark, 2004). While it is often assumed that multiple choice items only measure factual knowledge, if the questions require application, analysis, or synthesis, higher-level thinking skills are assessed (Gronlund & Cameron, 2003; Johnson & Johnson, 2005; Popham, 2002). At the college level, test item banks and instructor-generated test items are often guided by Bloom's taxonomy of instructional objectives. For example, Renaud (2003) organized test item bank questions in terms of: knowledge questions that require students to recognize or recall facts, terms, and rules; comprehension questions that require understanding of classifications, principles, and methods; application questions that require students to employ concepts, methods, and principles in unique problem-solving situations; and synthesis questions that require students to consider more than one piece of information in arriving at the correct response.

The effectiveness of online study tools in relation to instructional objectives has not been investigated. The current study sought to determine the impact of online practice tests and study groups on student achievement as measured by factual, comprehension, application, and synthesis multiple choice test items. Is one form of online support more effective for higher versus lower levels of cognitive instructional objectives? Is one form of online support more effective in facilitating long-term retention of course content as measured by examination review items? Do students express preference for one online study tool, and is such preference related to academic achievement? The answers to such questions may assist instructors in selecting online study tools that maximize student learning.

3. Methods and procedures

3.1. Research participants

Students in two sections (40 students per section) of an educational psychology course were required to use two online study tools in preparation for four proctored in-class examinations. At the end of the academic term, students completed a questionnaire and gave permission for their course marks to be used for research purposes. Due to student withdrawal from the course as well as absenteeism on the day that permission was sought, 48 students participated in the study. Students ranged in age from 18 to 33 years (mean 21.3 years). Approximately 77% of the sample was female, which is characteristic of the student population in the participating college. Students reported an average of 32 college credits completed (range 0–120).

3.2. Online learning support

In preparation for the first and third in-class examinations, students made postings in online study groups using the WebCT discussion tool. In preparation for the second and fourth in-class examinations, students completed online practice tests using the WebCT quiz tool.

3.2.1. Online study groups

Students were randomly assigned to WebCT online study groups consisting of eight members. The course outline stated that study group postings were not restricted to, but could include: study notes, chapter summaries, practice test items, questions for reflection, definitions, key terms, and mnemonic devices. Online study group membership did not change throughout the academic term, although student withdrawal from the course altered group composition in some cases. Study groups opened two weeks prior to the examination and closed the day of the examination. Student postings were individually marked in terms of number, quality, and variety of study strategies. Online study group postings that supported learning for the first examination contributed 5% to the final course grade; postings that supported learning for the third examination contributed 10% to the final course grade. The mean student grade for postings associated with the first examination was 82.5% (range 0–100%); the mean student grade for postings associated with the third examination was 80.5% (range 0–100%). Within the context of the educational psychology course, the online study groups served two purposes: (1) they contributed 15% to the final course grade and (2) they helped students prepare for two in-class examinations.

3.2.2. Online practice tests

The online practice tests contained true–false and fill-in-the-blank items that corresponded to the content assessed on the second examination and approximately two-thirds of the content assessed on the fourth examination (i.e., the fourth examination was cumulative but the online practice tests did not support review of previously tested material). All practice test questions were imported to WebCT from the test item bank associated with the course textbook (Renaud, 2003). Online practice tests became available two weeks prior to the examination and were unavailable following the examination. Students had two attempts at each practice test, with only the highest mark contributing to the final course grade. Four online practice tests (i.e., two true–false and two fill-in-the-blank) supported learning for the second examination and contributed 5% to the final course grade. Ten online practice tests (i.e., five true–false and five fill-in-the-blank) supported learning for new content assessed on the fourth examination and contributed 10% to the final course grade. Students completed the online practice tests without supervision. The mean student grade for practice tests associated with the second examination was 91.5% (range 0–100%) and the mean student grade for practice tests associated with the fourth examination was 80.5% (range 0–100%). Within the context of the educational psychology course, the online practice tests served two purposes: (1) they contributed 15% to the final course grade and (2) they helped students prepare for two in-class examinations.

Fig. 1. ABAB research design: the effect of online study groups versus online practice tests on examination performance.

Fig. 1 provides a graphic representation of the pattern of online learning support and in-class examinations. The research design is an ABAB analysis; the A condition is online study groups and the B condition is online practice tests. The effect of A and B on in-class examination performance is measured.
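For readers who want a concrete handle on the design, the sketch below encodes the ABAB schedule as a small Python mapping and a helper that groups examinations by condition. The data structure and function are illustrative scaffolding only; they are not part of the original study and the names are hypothetical.

```python
# Illustrative encoding of the ABAB design described above (not from the study itself):
# A = online study groups (before exams 1 and 3), B = online practice tests (before exams 2 and 4).
# grade_weight is the share of the final course grade earned through that study tool.
STUDY_SCHEDULE = {
    "exam_1": {"condition": "A", "tool": "online study groups", "grade_weight": 0.05},
    "exam_2": {"condition": "B", "tool": "online practice tests", "grade_weight": 0.05},
    "exam_3": {"condition": "A", "tool": "online study groups", "grade_weight": 0.10},
    "exam_4": {"condition": "B", "tool": "online practice tests", "grade_weight": 0.10},
}


def exams_for_condition(condition: str) -> list[str]:
    """Return the examinations whose preparation was supported under the given condition."""
    return [exam for exam, info in STUDY_SCHEDULE.items() if info["condition"] == condition]


if __name__ == "__main__":
    print("A (study groups):", exams_for_condition("A"))    # ['exam_1', 'exam_3']
    print("B (practice tests):", exams_for_condition("B"))  # ['exam_2', 'exam_4']
```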

3.3. Measures

To address the research questions, two variables were measured. First, student preference or subjective evaluation of the online study tools was determined with a brief questionnaire. Second, student academic achievement was measured via in-class examinations.

3.3.1. Student evaluation of the online study tools

Ten questionnaire items requiring yes–no responses assessed student evaluation of the online study tools. Five items focused on student perception of online study groups; five items focused on student perception of online practice tests. Yes–no responses allowed for two-group comparisons (i.e., students who agreed compared to students who disagreed with the questionnaire item). Table 1 presents the percentage of students who responded in the affirmative and negative to questionnaire items that assessed student personal interpretation of the two online study tools.

3.3.2. Student academic achievement

Student achievement was measured with the objective test items on three midterm examinations and one final examination. The midterm examinations were not cumulative, assessing student mastery of a relatively limited amount of course material. The final examination was cumulative, assessing mastery of all course content.

Table 1
Percentage of students responding in the affirmative and negative to questionnaire items that evaluated student interpretation of the online study tools

Questionnaire item                                                        Yes (%)   No (%)

Interpretation of online study groups
My virtual study groups helped me do well in the exams.                    43.5      56.3
I prefer face-to-face study groups rather than online study groups.        58.3      41.7
The members of my online study group made good postings.                   75.0      22.9
The members of my online study group benefited from my postings.           81.3      16.7
The virtual study group helped me more than the online tests.              37.5      62.5

Interpretation of online practice tests
Online practice tests helped me do well in the exams.                      77.1      22.9
I prefer paper tests rather than online tests.                             45.8      54.2
Online practice tests help students learn.                                 83.3      14.6
Online tests make me nervous.                                              33.3      66.7
Online tests helped me more than the virtual study group.                  60.4      39.6

Note: Items were presented in random order in the student questionnaire. Cases of missing data are reflected in percentages.


Each midterm examination contained 24 multiple choice items and the final examination contained 80 multiple choice items (36 items assessed mastery of previously tested material and 44 items assessed mastery of course material covered subsequent to the third midterm examination). While the midterm and final examinations included case study analyses that contributed to examination marks, due to the subjective nature of marking these items, they were not included in any metric of student achievement.

All multiple choice examination items were keyed to question type according to Bloom's Taxonomy of Educational Objectives, Cognitive Domain (Bloom et al., 1956). As specified in the introduction to the test item bank provided by the textbook publisher (Renaud, 2003): knowledge questions required students to recognize or recall facts, terms, and rules; comprehension questions required understanding of classifications, principles, and methods; application questions required students to employ concepts, methods, and principles in unique problem-solving situations; synthesis questions required consideration of more than one piece of information to arrive at the correct answer. Multiple choice items were further divided into questions that assessed recent versus remote learning. Recent learning was measured with test items that assessed mastery of new or previously untested material; remote learning was measured with final examination items that assessed continued mastery of course material presented early in the term (i.e., review items associated with content previously assessed on the first and second midterm examinations).

Correct responses for test items associated with each cognitive instructional objective (i.e., knowledge, comprehension, application, and synthesis) under each learning condition (i.e., recent or remote) were summed across examinations. Each of the four cognitive instructional objectives assessed under recent learning conditions was measured with 29 multiple choice test items (i.e., six from each of three midterm examinations and 11 from previously untested material assessed on the final examination). Each of the four cognitive instructional objectives assessed under remote learning conditions was measured with five multiple choice test items (i.e., three final examination items that assessed continued mastery of course material assessed on the first examination and two final examination items that assessed continued mastery of course material assessed on the second examination). Table 2 presents a description of these eight measures of academic achievement for the group of participating college students.

Table 2
Descriptive statistics for measures of student academic achievement

Type of test item      Total items    Mean    Range    SD

Recent or new learning
Knowledge                   29        21.9    11–27    3.27
Comprehension               29        20.0     8–24    3.02
Application                 29        18.8    10–26    3.87
Synthesis                   29        13.7     8–18    2.62

Remote or review of past learning
Knowledge                    5         2.8     1–5     1.12
Comprehension                5         2.7     0–5     1.18
Application                  5         4.4     3–5     1.30
Synthesis                    5         1.9     0–4     0.95
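As a rough illustration of how the eight achievement measures in Table 2 could be assembled from item-level results, the sketch below sums correct responses by instructional objective and learning condition. The item counts echo the description above, but the record format and function name are hypothetical rather than taken from the study's actual data handling.

```python
from collections import defaultdict

# Hypothetical item-level record for one student: each examination item is tagged with
# its Bloom objective ("knowledge", "comprehension", "application", "synthesis"),
# its learning condition ("recent" or "remote"), and whether it was answered correctly.
ItemResult = dict  # {"objective": str, "condition": str, "correct": bool}


def achievement_scores(items: list[ItemResult]) -> dict[tuple[str, str], int]:
    """Sum correct responses for each (objective, condition) pair across examinations.

    With the item counts reported above, each objective has a maximum of 29 under
    "recent" (six items on each of three midterms plus 11 new items on the final) and
    5 under "remote" (3 + 2 review items on the final examination).
    """
    totals: dict[tuple[str, str], int] = defaultdict(int)
    for item in items:
        if item["correct"]:
            totals[(item["objective"], item["condition"])] += 1
    return dict(totals)
```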


3.4. Data analysis

Independent-sample t-tests compared the academic achievement of students who agreed and disagreed with each of the 10 questionnaire items that determined student evaluation of the online study tools. Paired-sample t-test analysis established differences in academic achievement when student learning was supported by each of the two online study tools. That is, the percentage of examination items answered correctly under the A condition was compared to the percentage of examination items answered correctly under the B condition.

Affirmative responses to the five questionnaire items that determined student evaluation of the online study groups, reverse scored in one case, were summed to provide a 0–5 rating of endorsement of online study groups. Affirmative responses to the five questionnaire items that determined student evaluation of online practice tests, reverse scored in two cases, were summed to provide a 0–5 rating of endorsement of the online practice tests. Scores of zero reflected no endorsement of the online study tool; scores of five reflected maximum endorsement of the online study tool. The mean student endorsement rating for online study groups was 2.8 (range = 0–5, standard deviation = 1.32); the mean student endorsement rating for online practice tests was 3.5 (range = 0–5, standard deviation = 1.51). Correlational analysis determined the extent of the relationship between study tool endorsement and student achievement.
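A minimal sketch of the analyses described in this subsection, assuming hypothetical response and achievement arrays (the data, variable names, and which item is reverse scored are illustrative only): reverse-scored items are flipped before summing endorsement ratings, and SciPy supplies the independent-sample, paired-sample, and correlational tests.

```python
import numpy as np
from scipy import stats

# Hypothetical yes/no responses (1 = yes, 0 = no) to five questionnaire items;
# reverse-scored items are flipped before summing, yielding a 0-5 endorsement rating.
def endorsement_rating(responses: np.ndarray, reverse_items: list[int]) -> np.ndarray:
    scored = responses.copy()
    scored[:, reverse_items] = 1 - scored[:, reverse_items]  # flip reverse-scored items
    return scored.sum(axis=1)

# Illustrative arrays: 48 students x 5 items, plus achievement percentages per condition.
rng = np.random.default_rng(0)
group_items = rng.integers(0, 2, size=(48, 5))
achievement_A = rng.uniform(40, 90, size=48)   # % correct under the study-group condition
achievement_B = rng.uniform(40, 90, size=48)   # % correct under the practice-test condition

ratings = endorsement_rating(group_items, reverse_items=[1])

# Independent-sample t-test: students who agreed vs. disagreed with one questionnaire item.
agree = achievement_A[group_items[:, 0] == 1]
disagree = achievement_A[group_items[:, 0] == 0]
t_ind, p_ind = stats.ttest_ind(agree, disagree)

# Paired-sample t-test: achievement under condition A vs. condition B.
t_pair, p_pair = stats.ttest_rel(achievement_A, achievement_B)

# Correlation between endorsement ratings and achievement.
r, p_corr = stats.pearsonr(ratings, achievement_A)

print(f"independent t={t_ind:.2f} (p={p_ind:.3f}); paired t={t_pair:.2f} (p={p_pair:.3f}); r={r:.2f}")
```

With the study's actual questionnaire responses and examination scores in place of the random arrays, calls of this kind would produce the endorsement ratings and test statistics reported in the Results section.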

4. Results

Differences in student subjective evaluation of the two online study tools were not significantly related to any measure of academic achievement. That is, the average achievement of students who agreed with the questionnaire items was not significantly different from the average achievement of students who disagreed with the questionnaire items. Although students clearly favored online practice tests over online study groups as reflected in endorsement ratings (t = 3.31, df = 46, p = .003), correlational analysis revealed no significant relationship between extent of endorsement of a specific study tool and any measure of academic achievement. Apparently, student subjective evaluation and interpretation of study tool effectiveness is unrelated to actual test performance.

Table 3
Achievement differences in recent learning supported by the two online study tools

Instructional objective                   Mean    df      t       p

Knowledge        Study groups    77.3%
                 Practice tests  74.0%
Comprehension    Study groups    58.9%
                 Practice tests  59.8%
Application      Study groups    60.5%     46    2.21    .032
                 Practice tests  55.5%
Synthesis        Study groups    45.7%     46   −3.00    .004
                 Practice tests  51.9%


Paired-sample t-test analysis revealed that the two online study tools were not associated with significant differences in overall student achievement. That is, average student achievement on the first and third examinations was not significantly different from average student achievement on the second and final examinations, although online study groups supported learning in the first case while online practice tests supported learning in the second case. However, when paired-sample t-tests compared average student achievement in terms of instructional objectives, numerous significant differences emerged. Table 3 presents the significant differences in recent learning supported by the two online study tools. Student mastery of knowledge and comprehension instructional objectives, the lowest levels of Bloom's taxonomy, was not affected by the type of online study tool that supported learning. Student mastery of application instructional objectives, however, was greater when learning was supported by online study groups as opposed to online practice tests. Student mastery of synthesis instructional objectives, the highest level of Bloom's taxonomy assessed, was greater when learning was supported by online practice tests as opposed to online study groups.

Table 4 presents the significant differences in remote learning supported by the two online study tools. Student mastery of synthesis instructional objectives was not affected by the type of online study tool that previously supported learning. Student mastery of knowledge and application instructional objectives, however, was greater when previous learning was supported by online practice tests. Student mastery of comprehension instructional objectives was greater when previous learning was supported by online study groups.

5. Discussion and implications for practice

While small sample size (n = 48) renders generalization of findings tenuous, the results of the current investigation suggest that various online study tools may have differential effectiveness for student mastery of instructional objectives under recent and remote learning conditions. Online study groups, as opposed to online practice tests, supported student recent learning of application instructional objectives as measured by test items requiring the use of concepts, methods, and principles in unique problem-solving situations. Online practice tests, as opposed to online study groups, supported student recent learning of synthesis instructional objectives as measured by test items requiring consideration of multiple facts and concepts.

Table 4
Achievement differences in remote learning supported by the two online study tools

Instructional objective                   Mean    df      t       p

Knowledge        Study groups    53.9%     47   −2.64    .011
                 Practice tests  65.6%
Comprehension    Study groups    60.1%     47    4.69    .000
                 Practice tests  36.5%
Application      Study groups    72.9%     47   −2.83    .007
                 Practice tests  84.4%
Synthesis        Study groups    35.1%
                 Practice tests  35.4%


Online study groups, as opposed to online practice tests, supported student remote learning of comprehension instructional objectives as measured by test items requiring understanding of classifications, principles, and methods. Online practice tests, as opposed to online study groups, supported student remote learning of factual and application instructional objectives. While such findings may be idiosyncratic to the sample of participating college students, at the very least it seems reasonable to conclude that the effectiveness of online study tools is related to instructional objectives and that patterns of tool effectiveness may vary as a function of the time lapse between instruction and assessment of student learning.

While the benefits of practice tests have not been clearly established (Brothen & Wambach, 2001; Herring, 1999; Perlman, 2003), the results of the current investigation suggest that online practice tests support student learning. Of eight comparisons to determine differences in average student achievement, five reached significance; three favored online practice tests as providing more support for student learning than online study groups. Indeed, the majority of students who participated in the study expressed preference for online practice tests rather than online study groups. That is, more than 60% of the students responded in the negative to the questionnaire item, The virtual study group helped me more than the online practice tests; more than 60% of the students responded in the affirmative to the questionnaire item, Online tests helped me more than the virtual study group. Nonetheless, in some cases online study groups were more effective than online practice tests in supporting student learning as measured by examination performance. It may be that students are less familiar with online study groups than with practice tests. Such differences in familiarity may account for the rather neutral (mean of 2.8 on a scale of 0–5) student interpretation of the value of online study groups.

College student satisfaction is often perceived as a particularly important aspect of course evaluation, and so it should be. The ultimate goal of instruction, however, is demonstrated student learning. The current investigation found no significant relationship between student self-reported preference for an online study tool (i.e., study groups versus practice tests) and any measure of academic achievement. Such findings suggest that students may not be accurate judges of the tools that facilitate their learning and examination performance. Instructors should make instructional tools available to students on the basis of established learning benefits. Students might be informed of the finding that subjective interpretation of tool effectiveness is less accurate than research establishing tool effectiveness relative to instructional objectives.

References

Anderson, L. W., & Sosniak, L. A. (1994). Bloom's taxonomy: a forty-year retrospective. Ninety-third yearbook of the National Society for the Study of Education, Part II. Chicago: University of Chicago Press.
Bloom, B. S., Engelhart, M. D., Furst, E. J., Hill, W. H., & Krathwohl, D. R. (1956). Taxonomy of educational objectives: handbook I. Cognitive domain. New York: David McKay.
Bol, L., & Hacker, D. J. (2001). A comparison of the effects of practice tests and traditional review on performance and calibration. Journal of Experimental Education, 69, 133–151.
Brothen, T., & Wambach, C. (2001). Effective student use of computerized quizzes. Teaching of Psychology, 28, 292–294.
Byers, J. A. (1999). Interactive learning using expert system quizzes on the Internet. Educational Media International, 36, 191–194.
Crook, C. (2001). The campus experience of networked learning. In C. Steeples & C. Jones (Eds.), Networked learning (pp. 293–308). London: Springer-Verlag.
Crook, C. (2002). Deferring to resources: collaborations around traditional vs. computer-based notes. Journal of Computer Assisted Learning, 18, 64–76.
Derouza, E., & Fleming, M. (2003). A comparison of in-class quizzes vs. online quizzes on student exam performance. Journal of Computing in Higher Education, 14, 121–134.
Fritz, K. M. (2003). Using Blackboard 5 to deliver both traditional and multimedia quizzes on-line for foreign language classes. ERIC Document Reproduction No. ED482584.
Grabe, M., & Sigler, E. (2001). Studying online: evaluation of an online study environment. Computers and Education, 38, 375–383.
Gronlund, N. E., & Cameron, I. J. (2003). Assessment of student achievement. Toronto, ON: Pearson Education Canada.
Haberyan, K. A. (2003). Do weekly quizzes improve student performance on general biology exams? The American Biology Teacher, 65, 110–114.
Herring, W. (1999). Use of practice tests in the prediction of GED test scores. Journal of Correctional Education, 50, 6–8.
Hutchins, H. M. (2003). Instructional immediacy and the seven principles: strategies for facilitating online courses. Online Journal of Distance Learning Administration, 6. Retrieved August 12, 200. Available from http://www.westga.edu/~distance/ojdla/fall63/hutchins63.html.
Itoh, R., & Hannon, C. (2002). The effect of online quizzes on learning Japanese. CALICO Journal, 19, 551–561.
Jensen, M., Johnson, D. W., & Johnson, R. T. (2002). Impact of positive interdependence during electronic quizzes on discourse and achievement. Journal of Educational Research, 95, 161–167.
Jensen, M., Moore, R., & Hatch, J. (2002). Electronic cooperative quizzes. American Biology Teacher, 64, 169–174.
Johnson, G. M. (2006a). Perception of classroom climate, use of WebCT, and academic achievement. Journal of Computing in Higher Education, 17, 25–46.
Johnson, G. M. (2006b). Student psycho-educational functioning and satisfaction with online study groups. Educational Psychology, 26, 677–688.
Johnson, G. M., Howell, A. J., & Code, J. R. (2005). Online discussion and college student learning: toward a model of influence. Technology, Pedagogy and Education, 14, 61–75.
Johnson, G. M., & Johnson, J. A. (2005). Online study groups: comparison of two strategies. In Proceedings of the World conference on educational multimedia, hypermedia, and telecommunications (pp. 2025–2030). Norfolk, VA: Association for the Advancement of Computing in Education.
Killedar, M. (2002). Online self-tests: a powerful tool for self-study. Indian Journal of Open Learning, 11, 135–146.
McConnell, D. (2005). Examining the dynamics of networked e-learning groups and communities. Studies in Higher Education, 30, 25–42.
McGraw-Hill Higher Education (2004). Educational Psychology Online Learning Center. Retrieved October 15, 2005. Available from http://highered.mcgraw-hill.com/sites/0070909695/student_view0/index.html.
Miller, M. T., & Lu, M. Y. (2003). Serving non-traditional students in e-learning environments: building successful communities in the virtual campus. Education Media International, 40, 163–169.
Noble, T. (2004). Integrating the revised Bloom's taxonomy with multiple intelligences: a planning tool for curriculum differentiation. Teachers College Record, 106, 193–211.
Perlman, C. L. (2003). Practice tests and study guides: do they help? are they ethical? what is ethical test preparation practice? ERIC Document Reproduction No. ED480062.
Popham, W. J. (2002). Classroom assessment: what teachers need to know. Boston: Allyn and Bacon.
Renaud, R. (2003). Test item file for educational psychology. Toronto, ON: Pearson Education Canada.
Shale, D. (2002). The hybridisation of higher education in Canada. International Review of Research in Open and Distance Learning, 2. Retrieved August 2, 2005. Available from http://www.irrodl.org/content/v2.2/shale.html.
Smith, J. L., Brooks, P. J., Moore, A. B., Ozburn, W., Marquess, J., & Horner, E. (2000). Course management software and other technologies to support collaborative learning in nontraditional Pharm. D. courses. Interactive Multimedia Electronic Journal of Computer-Enhanced Learning, 2. Retrieved August 5, 2005. Available from http://imej.wfu.edu/articles/2000/1/05/index.asp.
Tait, A., & Mills, R. (Eds.). (2003). Rethinking learner support in distance education: change and continuity in an international context. London: RoutledgeFalmer.
Williams, R. L., & Clark, L. (2004). College students' ratings of student effort, student ability and teacher input as correlates of student performance on multiple-choice exams. Educational Research, 46, 229–239.