
Main article

Student perceptions about computerized testing in introductory managerial accounting

Barbara Apostolou ᵃ, Michael A. Blue ᵇ, Ronald J. Daigle ᶜ

ᵃ Division of Accounting, College of Business and Economics, West Virginia University, P.O. Box 6025, Morgantown, WV 26506-6025, United States
ᵇ Department of Finance, E.J. Ourso College of Business, Louisiana State University, Baton Rouge, LA 70803-6304, United States
ᶜ Department of Accounting, College of Business Administration, Sam Houston State University, Huntsville, TX 77341-2056, United States


Abstract

This study reports on the implementation of computerized testing in an introductory managerial accounting course. Students were surveyed about their perceptions of computerized testing after taking two major computerized exams. Results show that students perceived both negative and positive aspects about computerized testing, and overall perceptions tended to be more negative than positive. Clear differences in student perceptions existed when analyzing results by instructor, indicating that individual instructors can manage student perceptions about computerized testing. Suggestions for addressing negative student perceptions are provided for accounting educators who are considering the use of computerized testing in introductory courses.

© 2010 Elsevier Ltd. All rights reserved.

1. Introduction

The purpose of this study is to gain insight into student perceptions of computerized testing in an introductory managerial accounting course. Certain benefits to computerized testing have been noted in the literature, yet studies of how students perceive a computerized-testing environment are scarce. Those perceptions can be used to facilitate transition from the traditional paper and pencil exam setting to computerized settings. The shift of major professional exams to computerized settings provides additional motivation for studying student perceptions (e.g., the CMA, CPA, and CIA exams were computerized in 1997, 2004, and 2008, respectively).

Research suggests that computerized tests can reduce testing time (Bunderson, Inouye, & Olsen, 1989; McBride, 1985; Sereci, 2003; Wise & Plake, 1989). Computerized tests can also provide greater immediacy of grading and reporting exam results (Bugbee, 1992; McBride, 1985; Sereci, 2003). Improved test security is another potential benefit to computerized testing (Grist, Rudner, & Wise, 1989; Sereci, 2003), as is flexible scheduling (Bugbee, 1992; Hambleton, Zaal, & Pieters, 1991; Sereci, 2003).

Butler (2003) studied computerized testing in an introductory psychology course and identified other advantages to the format over traditional paper and pencil tests: (a) reduced paper and copying costs; (b) elimination of the need for proctors; and (c) out-of-class exams to accommodate large courses. Certain disadvantages were also identified: (a) testing at a computer screen without the ability to underline or make notations; (b) stress from looking at a computer screen; and (c) anxiety of changing from the traditional paper and pencil test setting. Despite the disadvantages, students had more positive perceptions about computerized tests than paper and pencil tests.

Few studies have focused on computerized testing in accounting and business courses, as shown by the absence of references in the accounting education literature reviews covering published research since 1991 (Apostolou, Watson, Hassell, & Webber, 2001; Rebele et al., 1998a; Rebele et al., 1998b; Watson, Apostolou, Hassell, & Webber, 2003; Watson, Apostolou, Hassell, & Webber, 2007).³ One related study is deLange, Suwardy, and Mavondo (2003), who surveyed on-campus students in an introductory accounting course about the use of WebCT.

The most relevant study to date is Peterson and Reider (2002), who surveyed successful and unsuccessful candidates of the Institute of Management Accountants’ (IMA) computerized Certified Financial Manager (CFM) exam. Despite some negative perceptions (e.g., computer screen fatigue, elimination of partial credit), CFM candidates in general (whether successful or unsuccessful) perceived the computerized exam setting as an overall positive experience. This result is shown by the finding that 64% of successful candidates and 62% of unsuccessful candidates perceived that a candidate’s computerized exam score accurately reflects the candidate’s knowledge.

Introductory managerial accounting courses tend to have large enrollments because a majority of business majors and minors are served. Those teaching such courses may have an interest in computerized testing because of the perceived benefits to be gained. Accounting educators teaching online courses may have an interest because the online environment imposes computerized testing. Some accounting educators may be interested in not only gaining testing efficiencies, but in giving students some insight into the environment of computerized professional exams.

The potential interest in computerized testing by accounting educators requires the study of how computerized testing can impact accounting courses, including student perceptions. Understanding student perceptions is important because negative perceptions can be a detriment to a course. As Peterson and Reider (2002) provide insights for the IMA and other professional organizations that use or may adopt computerized testing, this study seeks to provide insights for accounting educators interested in using computerized testing in introductory accounting courses. This study adapts Peterson and Reider’s (2002) survey instrument to gather and report student perceptions about computerized testing in an introductory managerial accounting course taught by multiple instructors.⁴

Students in this study reported significant positive and negative perceptions about computerized testing. Overall perceptions tend to be more negative. Student perceptions differed by instructor, a factor not considered in prior research. This finding suggests that the instructor can play an important role in managing perceptions of the test environment. Accounting educators considering the use of computerized testing in introductory courses should be cognizant of these findings and the corresponding suggestions provided for managing negative perceptions about computerized testing.

³ Boyce (1999) analyzes how computers can expand teaching and learning in accounting curricula.
⁴ This study does not focus on perceptions associated with smaller-scale technologies such as WebCT, online ancillary quizzes, or Blackboard.


The remainder of the paper describes the research method, presents the results, and provides suggestions for managing students’ perceptions about computerized testing. Limitations of the current study and suggestions for future research are provided in the final section.

2. Research method

2.1. Description of computerized-testing environment under study

Introductory managerial accounting is a coordinated course taught in multiple sections by experienced instructors at Louisiana State University. Computerized tests are offered in a three-day “exam window.” Students sign up for a two-hour time slot at a campus testing center with 150 computer terminals dedicated to exams. Test center administration, scheduling, proctoring, and other test processing functions are managed by the University’s Office of Assessment and Evaluation.

Each student’s exam is randomly generated from a test universe, which is created by the course coordinator from materials provided by the textbook publisher. The software that generates the questions is Questionmark™ Perception™ (2009). The software can generate multiple-choice questions, true/false questions, questions with drop-down menus, and problems or questions requiring fill-in-the-blank responses. Each exam is randomly generated so that the order of questions and values in problems differs for each student. The order of choices in multiple-choice questions also may be randomly assigned. Results are available for viewing by the course coordinator and instructor as soon as the student submits the exam electronically.
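The paper does not describe the internal logic of Questionmark™ Perception™, so the following Python fragment is only a minimal sketch of the kind of per-student randomization described above; the bank contents and the names QUESTION_BANK and make_exam are hypothetical.

    import random

    # Hypothetical stand-in for the "test universe" that the course coordinator
    # builds from publisher-provided materials.
    QUESTION_BANK = [
        {"stem": "Fixed costs are ${f} and the unit contribution margin is ${m}. "
                 "Compute the break-even point in units.",
         "params": {"f": [10000, 12000, 15000], "m": [20, 25]},
         "choices": ["A", "B", "C", "D"]},  # real items would carry answer text
        # ... one entry per question in the test universe
    ]

    def make_exam(bank, n_questions, seed=None):
        """Build one student's exam: a random subset and ordering of questions,
        random parameter values for computational problems, and a shuffled
        answer-choice order."""
        rng = random.Random(seed)
        exam = []
        for q in rng.sample(bank, n_questions):   # random subset, random order
            values = {k: rng.choice(v) for k, v in q.get("params", {}).items()}
            choices = list(q["choices"])
            rng.shuffle(choices)                  # choice order may also vary
            exam.append({"stem": q["stem"].format(**values), "choices": choices})
        return exam

Seeding the generator with, say, a student identifier would make each student’s draw distinct but reproducible.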

2.2. Survey data collection

Student perceptions about computerized tests were collected using a survey instrument adapted from that developed by Peterson and Reider (2002).⁵ The survey (see Appendix) was given after completion of two mid-term exams, and consisted of 18 Likert-scale questions (scale of 1–5), two yes/no questions, and one open-ended question that gave students the opportunity to elaborate on perceptions. The survey was both voluntary and anonymous, with participants signing a separate consent form. Prior to giving the survey, each instructor read a brief, standard script to ensure consistency across all sections. The survey took approximately 20 min to complete during class, and included all students present for class on the designated day (n = 223).

3. Results

Table 1 summarizes key demographics of students initially enrolled in the course. The data were collected during the first class meeting of the semester.⁶ Students reported previous computerized test experience by a 3–1 margin. Biology, information systems, music, and physics were the most cited courses in which the students had previously experienced centralized computerized testing. The most common classification was junior, followed by senior and sophomore. Approximately 55% of the responding students were male, 40% were female, and the remainder gave no response. Approximately 92% reported English as their first language.

Approximately half of the enrolled students were in business major specialties, with the most common being general business administration (21.84%), marketing (13.65%), and management (7.94%). The rest of the enrolled students were business minors from 23 non-business majors. Construction management (14.64%) and general studies (6.70%) were the most common non-business majors.

⁵ An exemption from Institutional Review Board oversight for using students as subjects was received for this study.
⁶ In a separate survey administered at the beginning of the semester, students provided demographic data. The students who participated in the study do not represent all of the students who completed the demographic survey. However, presentation of the demographic information is included to provide a general description of those initially enrolled in the managerial accounting course.

Table 1
Self-reported demographic data of students surveyed (n = 396).

Computer testing experience: Yes 76.01%; No 23.48%; No response 0.51%.
Course(s) providing previous computer testing experience: Biology 31.28%; Information systems 18.94%; Music 6.17%; Physics 5.96%; Other 15.53%; No response 22.12%.
Majorᵃ: Business 50.62%; Non-business 45.66%; No response 3.72%.
Current overall GPA: Mean 3.06; Standard deviation 0.45; No response 5.81%.
ACT score: Mean 24.93; Standard deviation 3.35; No response 27.53%.
Classification: Freshman 0.00%; Sophomore 18.94%; Junior 44.44%; Senior 30.81%; Graduate 2.02%; No response 3.79%.
Age: Mean 21.15; Standard deviation 2.96; No response 5.30%.
Gender: Male 55.30%; Female 39.65%; No response 5.05%.
First language is English: Yes 91.67%; No 2.27%; No response 6.06%.

ᵃ Seven students had dual majors. Those with at least one major in business are considered business majors.


3.1. Perceptions about computerized testing

Table 2 provides mean responses to the 18 Likert-scale and two yes/no questions, as well as the rescaled difference of each question’s mean response from its respective neutral score.⁷ For discussion purposes, the questions are described to correspond to the scheme of the survey instrument (i.e., Q1, Q2, …, Q14). Rescaled differences are given to help emphasize how negative/positive a particular perception is about computerized testing.

⁷ The scale for Q13 is in the opposite direction of all other perception questions. For all perception questions but Q13, the lower the response, the more negative the perception. Therefore, responses to Q13 were reverse coded when performing the analysis described.

Table 2
Mean student responses and differences of means from respective neutral score.

Each line gives the question, a brief descriptionᵃ with the explanation for a rating of 5ᵇ in parentheses, the mean response (standard deviation) (n = 223), the difference of the mean from the neutral score,ᶜ and the t-statistic for comparison of the mean to the neutral score.

Q1. Difficulty of computerized exams compared to paper-based exams (5 = Much easier): mean 2.32 (1.02); difference −0.68; t = −9.82***
Q2. Scope of material that can be tested (5 = Strongly expands): mean 2.60 (0.85); difference −0.40; t = −7.11***
Q3. Perceived quality of grade earned (5 = Strongly improves): mean 2.33 (0.93); difference −0.67; t = −10.74***
Q4a. Flexibility of scheduling and taking exams (5 = Very positive): mean 4.74 (0.58); difference 1.74; t = 44.99***
Q4b. Prompt feedback of results (5 = Very positive): mean 3.66 (1.26); difference 0.66; t = 7.82***
Q4c. Ability to make educated guesses of answers because of objective format (5 = Very positive): mean 3.27 (1.00); difference 0.27; t = 4.02***
Q4d. Elimination of essay questions or long-form problems (5 = Very positive): mean 3.73 (1.20); difference 0.73; t = 9.00***
Q4e. Elimination of judgment in grading (5 = Very positive): mean 2.12 (1.15); difference −0.88; t = −11.37***
Q4f. Elimination of partial credit in grading (5 = Very positive): mean 1.45 (0.69); difference −1.55; t = −33.35***
Q4g. Required knowledge about computers when taking a computerized exam (5 = Very positive): mean 3.03 (0.77); difference 0.03; t = 0.52
Q4h. Elimination of in-class return and review of exam (5 = Very positive): mean 1.89 (1.02); difference −1.11; t = −16.24***
Q5. Impact on student’s stress and anxiety (5 = Strongly reduces): mean 2.42 (0.89); difference −0.58; t = −9.75***
Q6. Impact on opportunity to cheat (5 = Much more difficult): mean 3.89 (1.01); difference 0.89; t = 13.08***
Q7. Advantage of taking exam later than earlier in test window provided (5 = Have an advantage): mean 3.53 (0.84); difference 0.53; t = 9.44***
Q8. Impact of looking at computer screen on exam performance (5 = Very positive effect): mean 2.25 (0.78); difference −0.75; t = −14.40***
Q9. Ability to make notes on exam on exam performance (5 = Very positive effect): mean 1.93 (0.80); difference −1.07; t = −19.95***
Q10. Ability to preview exam and budget time on exam performance (5 = Very positive effect): mean 2.56 (1.03); difference −0.44; t = −6.38***
Q11. Ability to scan through/review unanswered questions on exam performance (5 = Very positive effect): mean 2.49 (0.83); difference −0.51; t = −9.24***
Q12. Whether other forms of tests should also be given (1 = Yes; 2 = No): mean 1.73 (0.45); difference 0.23; t = 7.50***
Q13. Accuracy of computerized exams measuring student learning (1 = Yes; 2 = No): mean 1.50 (0.50); difference 0.00; t = 0.13

ᵃ See Appendix for complete question descriptions.
ᵇ Q1–Q11 are answered on a Likert-scale of 1–5, with a neutral score of 3. Q12 and Q13 are coded 1 for ‘Yes’ and 2 for ‘No’, with a neutral score of 1.5.
ᶜ The standard deviation of each difference is the same as that shown with the respective mean.
* p-value < 0.05. ** p-value < 0.01. *** p-value < 0.0001.


Perceptions to 11 questions are significantly negative, perceptions to seven questions are significantly positive, and perceptions to two questions are neutral. All significant means except those to three questions (Q2, Q4c, and Q10) are greater than half a point from the respective neutral score. All significant means are therefore meaningful to discuss for gaining insights to student perceptions about computerized testing in an introductory accounting course.
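The authors do not publish their analysis code; for readers who want to reproduce this style of per-question test on their own data, a minimal Python sketch follows, assuming responses sit in a CSV file with one row per student and one column per item (the file and column names are hypothetical).

    import pandas as pd
    from scipy import stats

    df = pd.read_csv("survey_responses.csv")  # hypothetical file

    # Neutral score: 3 for the 1-5 Likert items (Q1-Q11), 1.5 for yes/no items.
    LIKERT = ["Q1", "Q2", "Q3", "Q4a", "Q4b", "Q4c", "Q4d", "Q4e", "Q4f",
              "Q4g", "Q4h", "Q5", "Q6", "Q7", "Q8", "Q9", "Q10", "Q11"]
    NEUTRAL = {**{q: 3.0 for q in LIKERT}, "Q12": 1.5, "Q13": 1.5}

    for item, neutral in NEUTRAL.items():
        x = df[item].dropna()
        if item == "Q13":
            x = 3.0 - x              # reverse code (1 <-> 2), per footnote 7
        diff = x - neutral           # rescaled difference from the neutral score
        t, p = stats.ttest_1samp(diff, 0.0)
        print(f"{item}: mean difference {diff.mean():+.2f}, t = {t:.2f}, p = {p:.4g}")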

With respect to negative perceptions, students perceived that computerized tests: (a) are more difficult to complete (Q1); (b) limit the scope of test material (Q2); (c) weaken the perceived quality of one’s grade earned (Q3); and (d) create greater stress and anxiety (Q5). Students negatively perceived that computerized tests eliminate: (a) judgment in grading because of the objective test format (Q4e); (b) partial credit (Q4f); and (c) in-class return and review (Q4h). Students negatively perceived: (a) looking at a computer screen for extended time periods (Q8); (b) the inability to make notes on the exam (Q9); (c) the inability to quickly preview the entire exam and budget one’s time (Q10); and (d) the inability to scan and review “marked” or “unanswered” questions (Q11). Perceptions about the elimination of partial credit (Q4f), inability to make notes (Q9), and loss of in-class return of exams (Q4h) were the most negative, as shown by the rescaled differences in Table 2.

With respect to positive perceptions, students perceived: (a) greater scheduling and exam-taking flexibility (Q4a); (b) more prompt feedback than with paper-based tests (Q4b); (c) an ability to make educated guesses because of the objective test format (Q4c); and (d) elimination of essay questions or long-form problems (Q4d). Students also positively perceived that those taking an exam later in the test window have an advantage over those taking the exam earlier (Q7), which may be due to having more time to study, as well as learning information about the exam from those who took it earlier. Even with this perception, students perceived more difficulty with finding opportunities to cheat (Q6), a finding that should please educators.

Q12 required a yes/no response, with 1 for ‘yes’ and 2 for ‘no’. The mean of 1.73 indicates that approximately 73% of students did not favor separate tests, which would include handwritten essay or long-form problems. This perception is consistent with that favoring the elimination of essay questions or long-form problems (Q4d). Of all positive perceptions, students were most positive about scheduling and exam-taking flexibility (Q4a), as shown by the rescaled differences in Table 2.
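The 73% figure is simple arithmetic from the coding: with ‘yes’ coded 1, ‘no’ coded 2, and the two proportions summing to one,

    \bar{x} = 1 \cdot p_{\mathrm{yes}} + 2 \cdot p_{\mathrm{no}} = 1 + p_{\mathrm{no}}
    \quad\Longrightarrow\quad
    p_{\mathrm{no}} = \bar{x} - 1 = 1.73 - 1 = 0.73.

The same conversion applied to Q13’s mean of 1.50, discussed next, yields the even split reported there.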

Students reported two neutral perceptions: (1) the need for possessing computer knowledge for taking computerized exams (Q4g), and (2) whether computerized exams accurately measure learning (Q13). While just one question, Q13 could be deemed among the most important in the survey because a perception that exams do not accurately measure learning could have an undermining impact on a course. This question required a yes/no response. The mean of 1.50 indicates that students were equally split on whether computerized exams accurately measure learning. While statistically neutral, the mean response can be viewed in essence as a negative perception because it is not positive.

Students also were asked to answer an open-ended question (Q14) for elaborating on their perceptions. A summary count by response type is shown in Table 3. Approximately 60% of students (135/223) answered this question, and 67% of those who answered the question gave a negative response, with the most common types of responses categorized as follows: (a) the desire for paper and pencil to make notes and hand computations; (b) concern about the loss of partial credit due to the objective question format; (c) stress; (d) having to look at a computer screen; and (e) instructor absence from the testing facility.

Table 3
Summary of open-ended responses elaborating on perceptions about computerized tests.ᵃ
Cells show the number (percentage) of students for each instructor.

Nature of response   Instructor #1   Instructor #2   Instructor #3   Total
Positive             17 (37.0)       10 (18.5)        3 (8.6)        30 (22.2)
Negative             28 (60.9)       35 (64.8)       28 (80.0)       91 (67.4)
Neutral               1 (2.1)         9 (16.7)        4 (11.4)       14 (10.4)
Total responses      46 (100.0)      54 (100.0)      35 (100.0)     135 (100.0)
No response          19              64               5              88
Total students       65             118              40             223

ᵃ See Q14 in the Appendix for complete question description.


Representative comments include “Can’t show your work on the computer. What if I pick the wrong answer but have the work right???” and “What if I have a question during the exam? I can’t ask my teacher.” Roughly 22% of those who answered the question gave a positive response, primarily about exam-taking flexibility (e.g., “Can schedule exam when I want” or “We took them online in biology and it was okay.”)

3.2. Instructor effect

Further analyses were performed to determine if student perceptions differed by instructor. No prior study has considered differences across instructors. There were no a priori expectations in this study for an instructor effect because of the experience and record of positive student evaluations of each of the three instructors teaching the managerial accounting course. Analyses, however, show a distinct pattern of differences among student perceptions about computerized testing by instructor.

Table 4 shows the rescaled difference of each mean from the respective neutral score by instructor and the sum of the differences of means by instructor. Significant differences between two or more instructors are noted for 14 of the 20 questions (p-value < 0.05), as well as between all the sums of the differences of means by instructor (p-value < 0.01). Comparison of each sum to that of a neutral score for all 20 questions shows that students of Instructor #1 had a neutral perception (−0.35, p-value of 0.743), students of Instructor #2 had a slight negative perception (−3.82, p-value < 0.0001), and students of Instructor #3 had a more negative perception (−8.45, p-value < 0.0001). While the sum of differences for students of Instructor #1 is approximately zero and the sum for students of Instructor #2 is slightly less than zero, the sum for students of Instructor #3 is almost half a point below a neutral score per question (−8.45/20). A distinct difference exists between student perceptions by instructor, thereby indicating that the instructor may influence student perceptions about computerized testing.
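A sketch of this instructor-level analysis follows, under the same hypothetical data layout as the earlier fragment. The paper reports the p-values but does not name the cross-instructor test, so the one-way ANOVA below is a plausible stand-in rather than the authors’ actual procedure.

    import pandas as pd
    from scipy import stats

    df = pd.read_csv("survey_responses.csv")  # hypothetical file

    # Per-student sum of the 20 rescaled differences; the item columns are assumed
    # to already hold responses minus the neutral score, with Q13 reverse coded.
    item_cols = [c for c in df.columns if c.startswith("Q")]
    df["sum_diff"] = df[item_cols].sum(axis=1)

    # Does each instructor's group differ from a neutral overall perception?
    for name, grp in df.groupby("instructor"):
        t, p = stats.ttest_1samp(grp["sum_diff"], 0.0)
        print(f"Instructor {name}: mean sum {grp['sum_diff'].mean():+.2f}, p = {p:.4g}")

    # Do the three instructors differ from one another?
    groups = [grp["sum_diff"].to_numpy() for _, grp in df.groupby("instructor")]
    f_stat, p_anova = stats.f_oneway(*groups)
    print(f"One-way ANOVA across instructors: F = {f_stat:.2f}, p = {p_anova:.4g}")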

4. Managing student perceptions about computerized testing

In response to this study’s findings, the three instructors sought to better manage student perceptions about computerized testing in the introductory managerial accounting course. The adjusted behaviors are provided in this section to serve as suggestions to accounting educators for managing student perceptions about computerized testing in introductory courses.

The three instructors recognize that potential student perceptions should first be evaluated before using computerized testing, which helps anticipate issues that may arise with its use. A proactive stance is especially important when a course is taught by multiple instructors. Formal discussion among instructors makes each uniformly aware of how student perceptions can impact the class environment, which should result in the adoption of a consistent approach to managing perceptions about computerized testing in all sections of the course.

The instructors discuss the computerized-testing environment with students in class at the beginning of the semester, before the first exam, and as concerns arise during the semester. The benefits of computerized testing are emphasized to students at the beginning of the course. If students raise concerns, instructors focus on how those concerns are addressed. Efforts are made to ensure equal opportunity to get preferred times within the window because some students perceived that the test window offered grade advantages to late exam takers.

In response to concerns about partial credit, two actions were taken. The instructors incorporate blended exams that are substantially computerized with some traditional problems to be answered by hand. Partial credit can be earned on these problems. Instructors also use some computational multiple-choice questions that have incorrect choices that are partially correct with some point values to be earned.
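As one concrete illustration of the second action, a partial-credit answer key for a single computational multiple-choice item might look like the following; the weights and labels are invented, since the paper does not specify how point values were assigned.

    # Hypothetical partial-credit key for one computational multiple-choice item.
    CHOICE_POINTS = {
        "a": 10,  # fully correct answer
        "b": 4,   # correct method, one computational slip
        "c": 2,   # correct intermediate value only
        "d": 0,   # unrelated distractor
    }

    def score_choice(selected: str) -> int:
        """Points earned for the selected choice; unrecognized input earns zero."""
        return CHOICE_POINTS.get(selected.lower(), 0)

    assert score_choice("B") == 4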

Elimination of the in-class return of exams is one negative perception that can be challenging to minimize, especially if the final exam is comprehensive. Exams cannot be returned or reviewed in class because each student’s exam is unique. Students may review exams in their instructor’s office. However, a formalized system was established in which a graduate student assistant holds blocks of office hours following each exam dedicated to individualized exam reviews. As another way to ease concerns about computerized testing, instructors use class time to review a sample exam to demonstrate the different formats that can appear on the exams.

Table 4
Differences of means from respective neutral score by instructor.

Each line gives the question, a brief descriptionᵃ with the explanation for a rating of 5ᵇ in parentheses, and the difference of the mean from the neutral score (standard deviation) for Instructor #1 (n = 65), Instructor #2 (n = 118), and Instructor #3 (n = 40).ᶜ

Q1. Difficulty of computerized exams compared to paper-based exams (5 = Much easier): #1 −0.26 (0.94); #2 −0.78 (1.00); #3 −1.08 (0.97)
Q2. Scope of material that can be tested (5 = Strongly expands): #1 −0.19 (0.88); #2 −0.50 (0.82); #3 −0.50 (0.82)
Q3. Perceived quality of grade earned (5 = Strongly improves): #1 −0.43 (1.04); #2 −0.67 (0.82); #3 −1.10 (0.90)
Q4a. Flexibility of scheduling and taking exams (5 = Very positive): #1 1.72 (0.48); #2 1.83 (0.49); #3 1.52 (0.85)
Q4b. Prompt feedback of results (5 = Very positive): #1 0.92 (1.08); #2 0.65 (1.28); #3 0.24 (1.39)
Q4c. Ability to make educated guesses of answers because of objective format (5 = Very positive): #1 0.40 (0.91); #2 0.22 (1.05); #3 0.20 (1.00)
Q4d. Elimination of essay questions or long-form problems (5 = Very positive): #1 1.02 (1.10); #2 0.69 (1.20); #3 0.35 (1.29)
Q4e. Elimination of judgment in grading (5 = Very positive): #1 −0.65 (1.16); #2 −0.89 (1.12); #3 −1.25 (1.17)
Q4f. Elimination of partial credit in grading (5 = Very positive): #1 −1.28 (0.89); #2 −1.59 (0.60); #3 −1.90 (0.30)
Q4g. Required knowledge about computers when taking a computerized exam (5 = Very positive): #1 0.04 (0.64); #2 0.05 (0.89); #3 −0.05 (0.55)
Q4h. Elimination of in-class return and review of exam (5 = Very positive): #1 −0.78 (1.21); #2 −1.16 (0.91); #3 −1.50 (0.85)
Q5. Impact on student’s stress and anxiety (5 = Strongly reduces): #1 −0.38 (0.86); #2 −0.61 (0.85); #3 −0.80 (0.99)
Q6. Impact on opportunity to cheat (5 = Much more difficult): #1 1.00 (0.92); #2 0.90 (1.01); #3 0.68 (1.16)
Q7. Advantage of taking exam later than earlier in test window provided (5 = Have an advantage): #1 0.49 (0.73); #2 0.53 (0.87); #3 0.58 (0.90)
Q8. Impact of looking at computer screen on exam performance (5 = Very positive effect): #1 −0.58 (0.79); #2 −0.79 (0.76); #3 −0.90 (0.78)
Q9. Ability to make notes on exam on exam performance (5 = Very positive effect): #1 −0.97 (0.73); #2 −1.08 (0.80); #3 −1.20 (0.91)
Q10. Ability to preview exam and budget time on exam performance (5 = Very positive effect): #1 −0.29 (0.98); #2 −0.39 (1.00); #3 −0.82 (1.11)
Q11. Ability to scan through/review unanswered questions on exam performance (5 = Very positive effect): #1 −0.51 (0.81); #2 −0.44 (0.85); #3 −0.75 (0.74)
Q12. Whether other forms of tests should also be given (1 = Yes; 2 = No): #1 0.30 (0.40); #2 0.20 (0.46); #3 0.17 (0.48)
Q13. Accuracy of computerized exams measuring student learning (1 = Yes; 2 = No): #1 0.08 (0.50); #2 0.00 (0.50); #3 −0.15 (0.48)
Sum (standard deviation) of the differences of meansᵈ: #1 −0.35 (8.01); #2 −3.82 (7.83); #3 −8.45 (7.22)

ᵃ See the Appendix for complete question descriptions.
ᵇ Q1–Q11 are answered on a Likert-scale of 1–5, with a neutral score of 3. Q12 and Q13 are coded 1 for ‘Yes’ and 2 for ‘No’, with a neutral score of 1.5.
ᶜ In the original table, bolded and italicized differences denote that one or more differences significantly differ from one or more other differences across instructors for a particular question at p < 0.05.
ᵈ In the original table, bolded and italicized sums of the differences denote that one or more sums significantly differ from one or more other sums across all instructors at p < 0.01.

5. Limitations and further suggested research

One limitation of this study is that results may not be generalizable because the study was conducted at a single university. The results are limited to measuring student perceptions after giving two computerized exams in one semester. Future research should compare perceptions before and after the exams to offer insight about how perceptions change with the experience.

Students in this study were either pursuing a major or minor in business. A study about computerized testing in a course taken exclusively by accounting majors may be of more interest to educators desiring to give students some insight into the environment of computerized professional exams. The results could be directly compared to the results of Peterson and Reider (2002) regarding the perceptions of candidates of professional accounting-related exams.

If computerized testing is implemented in multiple courses in an accounting curriculum, a longitudinal study of student perceptions may provide useful insights. A study could help determine if particular perceptions, such as computerized testing accurately measuring a student’s learning, become more positive with increased experience in computerized test settings. Another longitudinal study could measure student perceptions within a specific course using computerized testing over multiple semesters. Insights could be gained as to whether student perceptions improved with instructor experience and computerized test environment modifications.

Some analyses in this study combined data from 20 survey questions. These analyses assumed that responses can equally offset each other. Some aspects of computerized testing may be more important to students than others. Future studies can focus on determining whether students weigh or rank the importance of perceptions differently, which can help educators when designing and using computerized tests.

This study was exploratory in nature. Studies that test formal hypotheses based on theory are essential for explaining how and why computerized testing impacts accounting courses. The potential interest in computerized testing by educators calls for study of how computerized testing can impact accounting course delivery, including student perceptions of the test environment.


Appendix. Survey instrument

The Department of Accounting is implementing computerized testing this semester in Introductory Managerial Accounting. We are interested in your personal views related to computerized testing now that you have had some experience with it. Please circle the number that corresponds to the best answer to each question.


References

Apostolou, B. A., Watson, S. F., Hassell, J. M., & Webber, S. A. (2001). Accounting education literature review (1997–1999). Journal of Accounting Education, 19(1), 1–61.

Boyce, G. (1999). Computer-assisted teaching and learning in accounting: Pedagogy or product? Journal of Accounting Education, 17(2–3), 191–220.

Bugbee, A. C. (1992). Examination on demand: Findings in ten years of testing by computer 1982–1991. Edina, MN: TRO Learning.

Bunderson, C. V., Inouye, D. K., & Olsen, J. B. (1989). The four generations of computerized educational measurement. In R. L. Linn (Ed.), Educational measurement (3rd ed., pp. 367–407). New York: The Macmillan Company.

Butler, D. L. (2003). The impact of computer-based testing on student attitudes and behavior. The Technology Source Archives at the University of North Carolina, January/February. <http://technologysource.org/article/impact_of_computerbased_testing_on_student_attitudes_and_behavior/> Accessed 26.01.2010.

deLange, P., Suwardy, T., & Mavondo, F. (2003). Integrating a virtual learning environment into an introductory accounting course: Determinants of student motivation. Accounting Education, 12(1), 1–14.

Grist, S., Rudner, L., & Wise, L. (1989). Computer adaptive tests. ERIC Clearinghouse on Tests, Measurement, and Evaluation. Washington, DC: American Institute for Research.

Hambleton, R. K., Zaal, J. N., & Pieters, J. M. (1991). Computerized adaptive tests: Theory, applications, and standards. In R. K. Hambleton & J. N. Zaal (Eds.), Advances in educational and psychological testing (pp. 341–366). Boston: Kluwer.

McBride, J. R. (1985). Computerized adaptive testing. Educational Leadership, 43(2), 25–28.

Peterson, B. K., & Reider, B. P. (2002). Perceptions of computer-based testing: A focus on the CFM examination. Journal of Accounting Education, 20(4), 265–284.

Questionmark™ Perception™. (2009). <http://www.questionmark.com/us/home.htm> Accessed 26.01.2010.

Rebele, J. E., Apostolou, B. A., Buckless, F. A., Hassell, J. M., Paquette, L. R., & Stout, D. E. (1998a). Accounting education literature review (1991–1997), part I: Curriculum and instructional approaches. Journal of Accounting Education, 16(1), 1–51.

Rebele, J. E., Apostolou, B. A., Buckless, F. A., Hassell, J. M., Paquette, L. R., & Stout, D. E. (1998b). Accounting education literature review (1991–1997), part II: Students, educational technology, assessment, and faculty issues. Journal of Accounting Education, 16(2), 179–245.

Sereci, S. G. (2003). Computerized adaptive testing: An introduction. In J. E. Wall & G. R. Walz (Eds.), Measuring up: Assessment issues for teachers, counselors, and administrators (pp. 685–694). Greensboro: CAPS Press.

Watson, S. F., Apostolou, B. A., Hassell, J. M., & Webber, S. A. (2003). Accounting education literature review (2000–2002). Journal of Accounting Education, 21(4), 267–327.

Watson, S. F., Apostolou, B. A., Hassell, J. M., & Webber, S. A. (2007). Accounting education literature review (2003–2005). Journal of Accounting Education, 25(1), 1–58.

Wise, S. L., & Plake, B. S. (1989). Research on the effects of administering tests via computers. Educational Measurement: Issues and Practice, 8, 5–10.