
The development of an instrument to measure student attitudes toward televised courses



American Journal of Distance Education. Publication details, including instructions for authors and subscription information: http://www.tandfonline.com/loi/hajd20

The development of an instrument to measure student attitudes toward televised courses
Paul M. Biner, Associate Professor of Psychological Science, Department of Psychological Science, Ball State University, Muncie, IN 47306
Published online: 24 Sep 2009.

To cite this article: Paul M. Biner (1993) The development of an instrument to measure student attitudes toward televised courses, American Journal of Distance Education, 7:1, 62-73, DOI: 10.1080/08923649309526811

To link to this article: http://dx.doi.org/10.1080/08923649309526811

THE AMERICAN JOURNAL OF DISTANCE EDUCATION, Vol. 7, No. 1, 1993

The Development of an Instrument to Measure Student Attitudes Toward Televised Courses

Paul M. Biner

Abstract

This article describes a method for developing a customized, empirically-based attitudinal assessment instrument. Issues relating to the effective administration of the instrument and to faculty resistance are discussed. The author suggests that the structured assessment of student attitudes toward distance delivery made possible by such an instrument is an important initial step in the overall evaluation process.

With the continuing growth of university-level televised courses, researchers have focused their attention on the issue of program evaluation (Conners 1981; Keegan 1986; Zigerell 1991). The primary motivations for this attention are to justify the often high technological costs of implementing and maintaining these programs (Keegan 1986) and to demonstrate that educational delivery via television is not pedagogically deficient (Weingand 1984). This emphasis on program evaluation has produced hundreds of studies evaluating the academic achievement of students enrolled in televised classes (see Eiserman and Williams 1987).

Important to the present discussion is Kirkpatrick's (1967) contention that an assessment of participant reactions to a program should precede any assessment of learning outcomes. Positive student reactions to televised classes, of course, cannot be construed as a guarantee that learning has taken place. On the other hand, negative reactions can both undermine support for the program and detrimentally affect learning. Thus, a systematic evaluation effort should start with the assessment of student attitudes and opinions. Once these assessments have been analyzed, changes can be made to rectify the facets of the program that produce negative reactions. It is at this point that the evaluation effort should begin to focus on student learning and achievement.

A handful of researchers have concentrated exclusively on the assessment of student attitudes as a first step in the evaluation of university telecourses (e.g., Abel and Creswell 1983; Barker 1987; Barron 1987; Harrison et al. 1991). With few exceptions, however (e.g., Harrison et al. 1991), these researchers have developed their evaluation instruments on an ad hoc, non-empirical basis. Explanations of how or on what basis the survey questions are generated are either vague or nonexistent. The reader is left to surmise that the questions are either based on the author's personal perception of relevance or chosen from an often unreferenced prior instrument.

The goal of the present research was to offer telecourse researchers a practical, yet psychometrically sound, method of constructing an attitudinal assessment instrument that would accommodate their institutional information needs. With this aim in mind, a series of four investigations was conducted to develop and test a systematic method of constructing a customized, empirically-based instrument to evaluate the attitudes of students in telecourses. The investigations are reported in the present paper as a series of steps. In Step 1, the primary concern was to generate an unrestricted list of possible items (i.e., factors that potentially could affect student attitudes regarding telecourses). The goal of Step 2 was to identify the major dimensions underlying groupings of specific items. The results of this step would ultimately dictate the sections and section headings of the instrument. In Step 3, a content validity analysis was performed on each of the items to determine which should be included in the final version of the instrument. Finally, Step 4 involved writing as well as pretesting the instrument using students currently enrolled in a telecourse. Figure 1 depicts the four-step process.

Figure 1. The Four-Step Process:
Step 1: Generating Items Related to Course Satisfaction
Step 2: Defining Dimensions Underlying Items
Step 3: Selecting Content Valid Items
Step 4: Writing and Pretesting the Instrument

Step 1. Generating Items Related to Course Satisfaction

To ensure a comprehensive, relevant sample of attitudinal questions in any telecourse evaluation instrument, it is necessary to identify factors most closely related to course satisfaction in the minds of telecourse students, instructors, staff, and administrators. Thus, the primary goal of this first step was to empirically generate an unrestricted list of factors that potentially could affect the perceived quality of a telecourse. With this aim, a large-scale survey was conducted.

Method. Fifty individuals were surveyed in this phase of the project. The sample of subjects consisted of a stratified random subsample of thirty graduate and undergraduate students who were currently enrolled in university-sponsored telecourses (stratification was based on the three university schools offering the courses: ten subjects from each school), nine professors who were currently teaching the televised classes, four instructional designers (all of whom had worked on the telecourse designs), three full-time distance education coordinators, and four tele-education and educational technologies administrators (executive director, director, production manager, and assistant dean of tele-education services). The sample was intentionally drawn with diversity in mind to ensure that data would be attained from representatives in each of the major units of the university's distance education program.

All subjects were sent a brief one-question survey along with a short cover letter describing the nature of the project. On the survey, subjects were given the following instructions: "List as many factors as you can think of that you personally believe could potentially affect the quality of a televised course in any way. Try to be as specific as possible."

To ensure a reasonably high response rate among the students, each was telephoned prior to the mailing of their questionnaire. Although all of the students selected in the sample verbally agreed to return the questionnaire, only 63% ultimately did so. Of the faculty, staff, and administrators surveyed, 75% responded. The overall response rate for the survey was 68%.
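
A quick arithmetic check shows the subgroup rates are consistent with the overall figure. The sketch below infers the returned counts from the reported percentages (the article reports only the rates):

```python
# Check that the reported subgroup response rates imply the 68% overall rate.
students_surveyed = 30   # stratified student subsample
others_surveyed = 20     # 9 faculty + 4 designers + 3 coordinators + 4 administrators

students_returned = round(0.63 * students_surveyed)  # ~19 student surveys
others_returned = round(0.75 * others_surveyed)      # 15 faculty/staff/administrator surveys

overall_rate = (students_returned + others_returned) / (students_surveyed + others_surveyed)
print(f"{overall_rate:.0%}")  # -> 68%, matching the reported overall response rate
```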

Results and Discussion. An analysis of the questionnaire responses yielded seventy-one relatively independent factors. Some examples of these factors are listed in Table 1. As can be seen from the examples in the table, the factors varied widely in terms of their nature and degree of specificity. Despite their diversity, however, all were relatively clear in meaning and, therefore, required little in the way of interpretation by the investigator.


Table 1. Examples of Reported Factors, Factor Groupings by Dimension, and Accompanying Content Validity Ratios (CVR)

Dimension 1. Instruction/Instructor Aspects
- The in-person or telephone accessibility of the instructor outside of class (CVR = .45)
- The extent to which the instructor made the site students feel that they were part of the class and "belonged" (CVR = .82)
- The degree to which the preprepared (computer-generated) graphics helped you gain a better understanding of the course material (CVR = .45)

Dimension 2. Technological Aspects
- The quality of the television picture (CVR = .45)
- The clarity of the tele-response system audio (CVR = 1.00)
- The adequacy of the screen size of the television set that receives the class broadcast (CVR = .09)

Dimension 3. Course Management/Coordination Aspects
- The present means of material exchange between you and the course instructor (CVR = .45)
- The promptness of class material delivery to the site (CVR = .64)
- Class enrollment and registration procedures (CVR = .22)

Step 2. Defining Dimensions Underlying Items

A second investigation was performed to group the seventy-one factors identified in Step 1 into overall major dimensions. From a practical standpoint, the identification of such groupings or dimensions allows the investigator to format an evaluation questionnaire in which related questions are clustered together. Because research in this area is scarce, the identification of dimensions related to the quality of a telecourse can be viewed as valuable in and of itself (Gibson 1987). In a recent review of the existing literature addressing this issue, Harrison et al. (1991) report that the most consistently mentioned dimensions are instruction (i.e., aspects relating to the instructor, styles of instruction, and instructional materials), logistics (i.e., aspects pertaining to the quality of the technical equipment, site support, and the instructional environment), and management (i.e., aspects regarding program-student communication, registration and progress record policy, and resource use).

Method. Seven content-matter experts were chosen to serve as subjects in this phase of the research. The subjects were selected primarily on the basis of the extent and diversity of their experience with tele-education. These individuals had an average of 5.28 years of experience in either tele-education instruction, production, or administration.

The dimensions of interest were determined using a card-sorting technique described in a recent study by Biner et al. (in press).

Results and Discussion. In general, subjects were able to classify the seventy-one factor cards into three categories or dimensions with relative ease. Each of the seventy-one factors was assigned to a dimension. Examples of factors categorized in each of the three dimensions can be found in Table 1.

To assess the reliability of the item classification data, a mean agreement percentage for those items assigned to a given dimension was calculated. As an index of the overall reliability of the factor categorization, the percentage of agreement across raters and dimensions was calculated. This analysis showed 81% agreement.
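
The article does not spell out how this agreement index was computed. The sketch below shows one standard approach consistent with the description (assign each factor to its modal dimension, then average rater agreement with that assignment); the sort data are hypothetical:

```python
from collections import Counter

# Hypothetical card-sort data: sorts[rater][factor] = dimension chosen.
# Illustrative only; the actual expert sorts are not reported in the article.
sorts = [
    {"picture quality": "Technological", "instructor access": "Instruction/Instructor"},
    {"picture quality": "Technological", "instructor access": "Instruction/Instructor"},
    {"picture quality": "Technological", "instructor access": "Management/Coordination"},
]

agreements = []
for factor in sorts[0]:
    labels = [rater_sort[factor] for rater_sort in sorts]
    modal_label, modal_count = Counter(labels).most_common(1)[0]
    # The factor is assigned to its modal dimension; record the share of
    # raters who agreed with that assignment.
    agreements.append(modal_count / len(labels))

print(f"{sum(agreements) / len(agreements):.0%}")  # mean agreement across factors
```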

Although each of the subjects generated slightly different descriptive labels for the dimensions, three general themes did emerge. Respectively, they were 1) Instruction/Instructor aspects, 2) Technological aspects, and 3) Management/Coordination aspects. It will be recalled that the labels of the major effectiveness dimensions reported by Harrison et al. (1991) in their literature review were Instruction, Logistics, and Management. While these labels differ moderately from those of the present study, the dimensions themselves do not. That is, the meaning and content of the dimensions identified in the prior research closely parallel those found here.


Step 3. Selecting Content Valid Items

Gable (1986) recently noted that "content validation should receive the highest priority during the process of instrument development" (p. 72). Content validity is defined as the extent to which a set of items on a test is a relevant, representative sample of the full domain of content (Anastasi 1968; Cronbach 1984; Jewell and Siegall 1990). That is, an evaluation instrument with good content validity should include only items that assess relevant and essential attitudes and behaviors. With regard to the present project, all of the seventy-one factors can be considered relevant (at least to some degree) simply by virtue of the fact that all were reported by subjects in the initial survey. This, however, does not necessarily mean that all are essential and therefore should be represented by questions in the final version of the evaluation instrument. The purpose of this third step was to identify the factors deemed essential by a group of content-matter experts, using a technique adapted from one developed by Lawshe (1975) for the assessment of content validity.

Method. Eleven individuals were selected as participants in the present investigation on the basis of the extent and diversity of their prior experience with tele-education. Four of the eleven subjects had participated in the prior investigation.

Following Lawshe's (1975) procedure, subjects were sent the list of seventy-one factors and asked to rate each on a three-point scale (1="It is not necessary that a question be asked of the students on this topic"; 2="It is useful, but not essential, that a question be asked of the students on this topic"; and 3="It is essential that a question be asked of the students on this topic"). All of the eleven subjects returned their ratings within a five-day period.

Results and Discussion. From the scale ratings, a Content Validity Ratio (CVR) was calculated for each of the seventy-one items. The CVR formula, as outlined by Lawshe (1975), is as follows:

CVR = (Ne - N/2) / (N/2)

where Ne = the number of subjects rating an item as "essential" and N = the total number of subjects.

Similar to the correlation coefficient, the CVR ranges from +1.00 (where all judges rate an item as essential) through .00 (where 50% of judges rate an item as essential) to -1.00 (where none of the judges rates an item as essential). The CVRs for a few of the seventy-one items are listed in Table 1.
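
For example, with the eleven judges used here, an item rated essential by eight of them yields CVR = (8 - 5.5) / 5.5 ≈ .45. A minimal sketch of the computation (the judge counts below are chosen to reproduce several Table 1 values, not taken from the article):

```python
def content_validity_ratio(n_essential: int, n_judges: int) -> float:
    """Lawshe (1975) CVR: scales the share of 'essential' ratings to [-1, +1]."""
    half = n_judges / 2
    return (n_essential - half) / half

# With the study's panel of eleven judges:
for n_essential in (11, 9, 8, 6):
    print(n_essential, round(content_validity_ratio(n_essential, 11), 2))
# -> 1.0, 0.64, 0.45, 0.09: consistent with several CVRs listed in Table 1
```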


Step 4. Writing and Pretesting the Instrument

To accommodate the wide variability of assessment needs among the different university colleges that would be using the questionnaire, it was decided that all of the items with CVRs greater than zero would be included in the final instrument. To avoid wording biases, the thirty-four factors that were retained were kept, for the most part, in their original form, and each was accompanied by a five-point scale ranging from "Very Poor" to "Very Good" on which subjects were to rate them. Items were then clustered into three general sections on the instrument, each representing one of the dimensions identified in Step 2, and they were randomly ordered within each section. In addition, several general (e.g., overall satisfaction with the class) and demographic questions were included, as well as an open-ended question designed to solicit further opinions. A list of the questionnaire items can be found in Appendix 1.
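
The selection and layout rules just described are mechanical enough to sketch. The item records below are hypothetical (the full seventy-one-item list is not reproduced here):

```python
import random

# Hypothetical (factor, dimension, CVR) records; illustrative only.
items = [
    ("Quality of the television picture", "Technological", 0.45),
    ("Adequacy of the television screen size", "Technological", 0.09),
    ("Class enrollment and registration procedures", "Management/Coordination", 0.22),
    ("Instructor accessibility outside of class", "Instruction/Instructor", 0.45),
    ("Color of the classroom walls", "Technological", -0.27),  # dropped: CVR <= 0
]

# Retain items with CVR > 0, cluster them by dimension (one questionnaire
# section per dimension), and randomly order items within each section.
sections: dict[str, list[str]] = {}
for factor, dimension, cvr in items:
    if cvr > 0:
        sections.setdefault(dimension, []).append(factor)
for dimension, factors in sections.items():
    random.shuffle(factors)
    print(dimension, factors)
```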

Method. Ninety-eight graduate and undergraduate students enrolled in four classes representing three colleges/schools at Ball State University (i.e., College of Sciences and Humanities, College of Business, and School of Nursing) participated as subjects. The classes were broadcast to a total of thirty-four individual sites via microwave and satellite transmission (one-way video and two-way audio) from a large campus broadcast classroom in which on-campus students took the classes.

The questionnaire was administered during class time in the last week of the semester, during the first 15-20 minutes of class, in the presence of the investigator. After greeting the class, the instructor introduced a short videotape and then left the broadcast classroom for the entirety of the administration period. In the video, which lasted approximately nine minutes, the investigator explained the purpose of the evaluation as well as a code that was to be placed on the instrument. The code signified the class, semester, year, and site. Otherwise, the respondents were anonymous. After the tape, questionnaires were handed to students in the broadcast classroom. All students were then asked to begin filling out the questionnaire. Ultimately, 77% of the site student surveys and 86% of the on-campus student surveys were returned for analysis.

Results and Discussion. The pretesting of the questionnaire provided useful and diverse information regarding our program. By way of example, the data collected here allowed us to 1) assess satisfaction both overall and with specific aspects of the courses, 2) compare site vs. campus student satisfaction with the various facets of the courses, 3) identify, through correlational analyses, facets of the courses that were most predictive of overall course satisfaction, and 4) pinpoint areas in the program that were producing negative reactions.
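
The correlational screen mentioned in point 3 is not detailed in the article; one plausible minimal version correlates each item with the overall-satisfaction item, using hypothetical 1-5 ratings:

```python
from statistics import correlation  # Python 3.10+

# Hypothetical 1-5 ratings; one list entry per respondent. Illustrative only.
overall = [4, 5, 2, 3, 4, 5]
item_ratings = {
    "Television picture quality":      [4, 4, 2, 3, 4, 5],
    "Promptness of material delivery": [3, 5, 1, 3, 3, 4],
    "Registration procedures":         [2, 3, 4, 2, 3, 3],
}

# Items with the largest positive r are the strongest single predictors of
# overall course satisfaction in this simple screen.
for name, ratings in item_ratings.items():
    print(f"{name}: r = {correlation(overall, ratings):.2f}")
```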

Considerations and Conclusions

In retrospect, several issues regarding the instrument development process deserve discussion. All of these issues relate specifically to Step 4; the procedures of Steps 1, 2, and 3 were basically free of problems. The first issue pertains to the timing of the administration of the final questionnaire. To be effective, the instrument must be administered in a class session toward the end of the semester. Equally important is that the administration not take place in a session during which any other instrument is being administered (e.g., a departmental promotion and tenure assessment) nor during the session in which the final exam will take place. The results of the attitude survey could be strongly affected or biased in either case. Second, it must be remembered that class time during the last few sessions is a precious commodity, particularly for instructors who are behind in their material. Thus, it is of utmost importance that the administration of the survey be scheduled with the instructor well in advance of the actual date of administration. A two-month lead time would probably be sufficient to avoid conflicts. A third and related issue concerns faculty resistance. It may be recalled that the questionnaire included a section devoted solely to the assessment of the instructor and instruction. Although none of the instructors whose classes were surveyed in the present research was adamantly opposed to the survey, virtually all were reticent about the evaluation to one degree or another. In a number of instances, this reluctance was minimized by 1) involving the faculty member in one or more of the first three steps of the process, 2) explicitly assuring the faculty member that only a limited number of individuals (e.g., program directors) would be privy to the results, and 3) informing each member that the data for his/her course would be collapsed with data from other courses in the final analyses. Because faculty opposition could potentially undermine the process of data collection, other evaluation researchers are urged to proceed in a similar fashion.

The present series of investigations, taken together, provides a potentially useful and practical methodology for constructing an empirically-based attitude evaluation instrument for televised courses. The data gathered with the instrument provided a multi-faceted picture of student attitudes toward the program at the author's institution. It is hoped that the results of this project will encourage future telecourse evaluation researchers to begin structured assessments of student reactions as the initial step in their overall evaluation efforts.

References

Abel, J. D., and K. W. Creswell. 1983. A study of student attitudes concerning instructional TV. Educational and Industrial Television (Oct):72-79.

Anastasi, A. 1968. Psychological Testing. London: Macmillan.

Barker, B. O. 1987. Interactive instructional television via satellite: A first year evaluation. Journal of Rural and Small Schools 2(1):18-23.

Barron, D. D. 1987. Faculty and student perceptions of education using television. Journal of Education for Library and Information Science 27(4):257-71.

Biner, P. M., D. L. Butler, T. E. Lovegrove, and R. L. Burns. In press. Window substitutes in the workplace. Environment and Behavior.

Conners, B. 1981. Assessment in the distance-education situation. In Distance Teaching for Higher and Adult Education, eds. A. Kaye and G. Rumble, 162-74. London: Croom Helm.

Cronbach, L. J. 1984. Essentials of Psychological Testing. Cambridge, MA: Harper and Row.

Eiserman, W. D., and D. D. Williams. 1987. Statewide Evaluation Report on Productivity Project Studies Related to Improved Use of Technology to Extend Educational Programs. Sub-Report Two: Distance Education in Elementary and Secondary Schools. A Review of the Literature. Logan, UT: Wasatch Institute for Research and Evaluation. ERIC Document Reproduction Service ED 291 350.

Gable, R. K. 1986. Instrument Development in the Affective Domain. Boston: Kluwer-Nijhoff.

Gibson, T., ed. 1987. Evaluation of teaching/learning at a distance. Proceedings of the 3rd Annual Conference on Teaching at a Distance. Madison, WI: University of Wisconsin-Madison.

Harrison, P. J., F. Saba, B. J. Seeman, G. Molise, R. Behm, and D. W. Williams. 1991. Dimensions of effectiveness: Assessing the organizational, instructional and technological aspects of distance education programs. Paper presented at the Annual Conference of the Association for Educational Communications and Technology, February 13-16, Orlando, Florida.


Jewell, L. N., and M. Siegall. 1990. Contemporary Industrial/Organizational Psychology. St. Paul, MN: West.

Keegan, D. 1986. The Foundations of Distance Education. London: Croom Helm.

Kirkpatrick, D. L. 1967. Evaluation of training. In Training and Development Handbook, eds. R. L. Craig and L. R. Bittel. New York: McGraw-Hill.

Lawshe, C. H. 1975. A quantitative approach to content validity. Personnel Psychology 28(4):563-75.

Weingand, D. E. 1984. Telecommunications delivery of education: A comparison with the traditional classroom. Journal of Education for Library and Information Sciences 74(1):3-12.

Zigerell, J. 1991. The Uses of Television in American Higher Education. New York: Praeger.

Appendix 1. Telecourse Evaluation Questionnaire Items

The following items made up the questionnaire. Each item required a response on a scale as follows: very poor=1, poor=2, average=3, good=4, very good=5.

Instruction/Instructor Characteristics (For Site and Campus Students)

1. The clarity with which the class assignments were communicated
2. Your reaction to the typical amount of time the preprepared graphics (e.g., graphs, tables, pictures, outlines, notes, etc.) were left on the screen to be copied down
3. The degree to which the preprepared (computer-generated) graphics helped you gain a better understanding of the course material
4. The production quality of the preprepared graphics used for the class
5. The timeliness with which papers, tests, and written assignments were graded and returned
6. The degree to which the types of instructional techniques that were used to teach the class (e.g., lectures, demonstrations, group discussions, case studies, etc.) helped you gain a better understanding of the class material
7. The extent to which the room in which the class was held was free of distractions (e.g., noise from adjacent rooms, people coming in and out, other students talking with each other, etc.)


8. The extent to which the instructor made the site students feel that they were part of the class and "belonged"

9. The instructor's communication skills
10. The instructor's organization and preparation for class
11. The instructor's general level of enthusiasm
12. The instructor's teaching ability
13. The extent to which the instructor encouraged class participation
14. The in-person/telephone accessibility of the instructor outside of class
15. The instructor's professional behavior
16. Overall, this instructor was

Technological Characteristics (For Site Students Only)

17. The quality of the television picture
18. The quality of the television sound
19. The adequacy of the screen size of the television set that received the class broadcasts
20. The clarity of the tele-response system audio
21. The brevity of the talkback delays when communicating with the instructor over the tele-response system
22. The promptness with which the instructor recognizes and answers student calls over the tele-response system
23. The degree of confidence you have that classes will not be temporarily interrupted or cancelled due to technical problems or inclement weather

Course Management and Coordination (For Site Students Only)

24. Your reaction to the present means of material exchange between you and the course instructor
25. The accessibility of science labs (answer only if laboratory work was required for your class)
26. Your ability to access a library when, and if, needed
27. Your ability to access a computer when, and if, needed
28. The general conscientiousness of the site coordinator (e.g., in delivering materials, unlocking room doors, tuning in broadcasts)
29. The accessibility of the site coordinator
30. The degree to which the site class or someone at the site was able to operate the television and tele-response system on the first day (or night) of class


31. The promptness with which class materials were delivered/sent to either you or the site
32. The promptness with which a back-up tape of a class session was delivered in the event of a broadcast failure or a poor broadcast
33. Your ability to access departmental program personnel when needed
34. Class enrollment and registration procedures

General and Demographic Information (For Site and Campus Students)

35. Overall, the course was
36. Compared to conventional classroom courses (i.e., classes that are not televised), this course was: 1=Much worse ... 5=Much better
37. The workload required by this course was: 1=Too light, 2=Moderately light, 3=Just right, 4=Rigorous, 5=Too great
38. Would you enroll in another televised course? 1=No, 2=Yes
39. Would you still have been able to take this course if it had not been offered on TV? 1=No, 2=Yes
40. Including this course, how many televised classes have you taken to date? 1=1-2, 2=2-3, 3=4-5, 4=6-7, 5=8 or more
41. Year in school: 1=Freshman, 2=Sophomore, 3=Junior, 4=Senior, 5=Graduate
42. Sex: 1=Female, 2=Male
