
Computers in Human Behavior 26 (2010) 1369–1376


Construction of an instrument to measure student information and communication technology skills, experience and attitudes to e-learning

Ann Wilkinson *, Julia Roberts, Alison E. While
King's College London, Florence Nightingale School of Nursing and Midwifery, James Clerk Maxwell Building, 57 Waterloo Road, London SE1 8WA, United Kingdom

Article info

Article history: Available online 20 May 2010

Keywords: Instrument development; Scale validation; Test–retest; Nurse education; ICT skills; Attitudes

doi:10.1016/j.chb.2010.04.010

* Corresponding author. Tel.: +44 20 7848 3708; fax: +44 20 7848 3506. E-mail address: [email protected] (A. Wilkinson).

Abstract

Over the past 20 years self-report measures of healthcare students' information and communication technology skills have been developed with limited validation. Furthermore, measures of student experience of e-learning emerged but were not repeatedly used with diverse populations. A psychometric approach with five phases was used to develop and test a new self-report measure of skills and experience with information and communication technology and attitudes to computers in education. Phase 1: Literature review and identification of key items. Phase 2: Development and refinement of items with expert panel (n = 16) and students (n = 3) to establish face and content validity. Phase 3: Pilot testing of draft instrument with graduate pre-registration nursing students (n = 60) to assess administration procedures and acceptability of the instrument. Phase 4: Test–retest with further sample of graduate pre-registration nursing students (n = 70) tested stability and internal consistency. Phase 5: Main study with pre-registration nursing students (n = 458), further testing of internal consistency. The instrument proved to have moderate test–retest stability and the sub-scales had acceptable internal consistency. When used with a larger, more diverse population the psychometric properties were more variable. Further work is needed to refine the instrument with specific reference to possible cultural and linguistic response patterns and technological advances.

© 2010 Elsevier Ltd. All rights reserved.

1. Introduction

In line with other developed countries, the United Kingdom (UK) has increased both student nurse numbers and their diversity and encouraged universities to give students access to flexible and online modes of learning (DH, 2000; UKCC, 1999). The consequences of this development for different groups of students are unknown, with a recent literature review (Wilkinson, While, & Roberts, 2009) indicating that there is an absence of large robust studies concerning student experience of computers and attitudes to e-learning in the health professions and in particular nursing. In addition, there are few accounts of the development and validation of instruments to measure experience, attitudes and anxiety in the context of e-learning in nursing education. Furthermore, the current evidence regarding students' experience of computers and attitudes to e-learning is mainly based upon small ill-defined evaluation studies which often included the measurement of student attitude as a secondary or incidental outcome. This paper reports the development and validation of an instrument to measure experience with information and communication technology (ICT) and attitudes to e-learning of nursing students as part of a study investigating pre-registration nursing students' experience with computers and the Internet.

2. Study aim

The development and validation of an instrument designed to measure nursing students' reported skill and experience with ICT; confidence with computers and the Internet; attitude to computers; and attitude to ICT for education.

2.1. Background to instrument development

Reviews have identified a continuing need for a reliable instrument to measure learners' attitudes to e-learning (Hobbs, 2002; Lewis, Davies, Jenkins, & Tait, 2001). A recent review of the psychometric properties of instruments (n = 49) used in healthcare settings regarding ICT skills, experience and attitudes to the use of ICT for education (Wilkinson et al., 2009) found general measures of students' ICT skills and attitudes and more recent developments of measures of attitudes to the use of ICT in education. Insufficient methodological detail was available to assess the validity of instruments, or instruments had become dated with technological developments. Only a small number of studies demonstrated a systematic approach to developing survey instruments (Duggan, Hess, Morgan, Sooyeon, & Wilson, 2001; Jayasuriya & Caputi, 1996) and these originated in Australia and America, respectively.


Perhaps as a consequence little is known concerning the ICT skills and attitudes to e-learning of healthcare students (Chumley-Jones, Dobbie, & Alford, 2002; Greenhalgh, 2001; Kreideweis, 2005; Lewis et al., 2001) and, indeed, of students of other disciplines in the UK (Sharpe, Benfield, Lessner, & DeCicco, 2005).

3. Method

The development and validation of the instrument had five phases (Fig. 1). Phase 1: Creation of item pool following a literature review and assessment of previous instruments. Phase 2: Reduction of items following review by expert panel and construction of draft scale. Phase 3: Pilot testing. Phase 4: Testing of refined instrument. Phase 5: Further tests of internal consistency with main sample.

3.1. Ethical considerations

The university ethics committee granted permission. Where participants were involved in the study they were provided with written information concerning the study and informed of their right to withdraw at all stages.

4. Data analysis

Statistical analysis was conducted using SPSS for Windows, versions 12–15 (SPSS, 2006).

5. Phase 1

5.1. Creation of item pool

Existing instruments or potential items for inclusion in the instrument were identified through an extensive literature review. Instruments measuring the use of computers in education date from early work in the 1980s (Allen, 1986). The primary focus of previous studies was: attitudes to computers (Kay, 1993; Selwyn, 1997); knowledge of computers (Parks, Damrosch, Heller, & Romano, 1986; Sinclair & Gardner, 1999); computer self-efficacy (Barbeite & Weiss, 2004; Compeau & Higgins, 1995); attitudes to computers in nursing practice (Jayasuriya & Caputi, 1996; Stronge & Brodt, 1985); and computer experience (Garland & Noyes, 2004). However, each one of these instruments addressed multiple constructs (Kay, 1993). A number of recent papers have described the validation and use of scales to measure students' attitudes to computers and the use of computers for education (Duggan et al., 2001; Steele, Johnson Palensky, Lynch, Lacy, & Duffy, 2002; Yu & Yang, 2006) but none of the existing scales were validated with healthcare students in the UK. Some were dated, such as the Stronge and Brodt (1985) Nurses' Attitudes Towards Computerisation instrument and Loyd and Gressard's (1984) Computer Attitude Scale. The majority of instruments were developed for a different context, for example: student teachers (Kay, 1993; Loyd & Gressard, 1984); the healthcare workplace (Jayasuriya & Caputi, 1996; Stronge & Brodt, 1985); 16–19 year old post-secondary students (Selwyn, 1997); psychology and economics students (Garland & Noyes, 2004, 2005); business professionals (Compeau & Higgins, 1995); or with a generic population of computer or Internet users from a wide range of occupations (Barbeite & Weiss, 2004; Maurer & Simonson, 1984). Additionally, some scales included terminology which is not current in the UK.

[Fig. 1. Phases of instrument development. Phase 1: literature review (Wilkinson et al., 2009) generating an item pool of 111 items. Phase 2: external review by a panel of experts (n = 16) and student raters (n = 3) for face and content validity, producing a first draft scale of 50 items plus 4 demographic items. Phase 3: pilot with graduate pre-registration nursing students (n = 60; 67% response rate) assessing face and content validity, item trimming, acceptability, coding and internal consistency (Cronbach's alpha), producing a second draft scale of 47 items plus 4 demographic items. Phase 4: test–retest with pre-registration nursing students (n = 70; 78% T1, 65% T2 response rate) assessing stability (Cohen's Kappa, adjusted proportion of agreement) and internal consistency. Phase 5: main study T1 with pre-registration nursing students (n = 458, 29% response rate) assessing internal consistency of the final scale.]

5.2. Phase 1: Findings

5.2.1. Face validity

No existing instrument in its entirety was fit for purpose, although comparison of previous instruments demonstrated some overlap in items. The potential items were created from scanning the literature on e-learning and extracting key questions and issues. The resulting 111 items were edited to ensure that they used contemporary English terms. Furthermore, only items related to current computer use were included; for example, items on programming computers were no longer relevant. An initial list of domains related to the research questions and the literature was produced.

6. Phase 2

6.1. Review by expert panel and construction of draft scale

Expert review was used to test face and content validity. The principal components route was not followed for two reasons, namely: the ICT skills and experience items involved respondents' self-reporting of cognitive skills levels and time spent on activities and were, therefore, less likely to be multi-dimensional. Secondly, a lengthy development process involving repeated revision, when conceptually the use of ICT in education is a rapidly changing field, was not likely to result in increased validity of the affective items.


Table 1
Panel of raters.

Profession of raters               Number in category    Number responding
Nurse researcher                   1                      1
Nurse educator                     3                      3
Midwife educator                   1                      1
Learning technologist              4                      3
Information specialist             1                      1
HEA subject centre director        2                      2
Education technology researcher    2                      1
Education researcher               2                      1
Total                              16                     13
Nursing students                   3                      2


A panel of expert raters was recruited (n = 16), with representatives from nursing and midwifery education, learning technologists, healthcare researchers, educational researchers and educational developers, of whom thirteen responded (Table 1). The experts were asked to rate the list of items and to place each item in a domain from a given list. The initial item pool and domain list was initially given to one expert to ensure the instructions were clear, before being amended in the light of feedback and released to the other experts. The suggested domains were reduced to seven.

In a separate exercise current students were recruited to review the items for linguistic clarity and omissions. Three students responded and two returned completed feedback documentation. One student made written comments and as a result one item was rejected and a new one added.

6.2. Phase 2: Findings

Following expert feedback, a coding frame was created in SPSS and the rankings of each expert added against the items. These were ranked and then a mean rating for each item calculated (range 2.00–8.92). All items with a mean rating over seven were included. This resulted in a list of 68 items. One duplicate item was identified and two further items were the reverse of each other. The duplicates were removed, leaving 66 items. Further adjustments to language were made as a result of free text comments made by the experts. A number of the items (n = 21) were similar or did not appear to match to a domain and, therefore, not all were included in the pilot instrument (Table 2).

Items were then matched to domains. A coding sheet was created with the domain each expert had selected for an item. Where nine or more agreed (>70%) the item was assigned to a domain. A draft survey instrument was developed with 47 items grouped in five domains: ICT skills (16 items); experience with computers (8 items breadth of experience, 9 items frequency of use for activities); access to computers (2 items); attitude to computers (5 items); attitude to computers in education (7 items). Three further items, which related to computer experience, were included which asked for information on: the length of time students had used computers (1 item); average time using computers and the Internet per day (1 item); and how comfortable they felt using computers (1 item). A final section contained a set of demographic items (4), namely: gender; age; prior educational experience; and ethnic origin (using the categories from the British Census (Bosveld, Connolly, & Rendall, 2006)). There were three opportunities for unstructured responses concerning experience with and attitudes to computers. Multiple item scales (Balnaves & Caputi, 2001) were utilised, specifically Likert scales, as they are designed to measure intensity of attitudes or feelings about a topic (Neuman, 2003).
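The aggregation rules described above (retain items whose mean expert rating exceeds seven; assign an item to a domain only where nine or more of the thirteen responding experts agree) can be expressed compactly. The following Python sketch illustrates the logic with invented items, ratings and domain votes; it is not the study's data or code, which used SPSS.

# Illustrative aggregation of expert ratings (Phase 2): invented data only.
import pandas as pd

# rows = candidate items, columns = experts, values = relevance on a 10-point scale
ratings = pd.DataFrame(
    {"expert_1": [9, 4, 8], "expert_2": [8, 3, 9], "expert_3": [10, 2, 7]},
    index=["I can send email", "I avoid using computers", "I use a computer where I live"],
)

mean_rating = ratings.mean(axis=1)
retained = mean_rating[mean_rating > 7].index.tolist()   # items with a mean rating over 7
print("Retained items:", retained)

# one domain label chosen by each expert for each item
domain_votes = pd.DataFrame(
    {"expert_1": ["skills", "attitude", "access"],
     "expert_2": ["skills", "other", "access"],
     "expert_3": ["skills", "attitude", "access"]},
    index=ratings.index,
)

for item, votes in domain_votes.iterrows():
    counts = votes.value_counts()
    if counts.iloc[0] / len(votes) > 0.7:                # >70% agreement rule
        print(f"{item!r} assigned to domain {counts.index[0]!r}")
    else:
        print(f"{item!r} left unassigned (poor agreement)")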

7. Phase 3

7.1. Pilot testing

The aim of the pilot was to test:

1. The recruitment of students, the data collection procedure, the instrument and transfer of data to SPSS.

2. The internal consistency of the sub-scales.

The new instrument, 50 items plus four demographic items and four unstructured questions, was tested online with a small population of graduate pre-registration nursing students (n = 60, response 67%) and amended in the light of responses and feedback. The literature review had demonstrated that there was no 'gold standard' instrument against which to measure concurrent validity or equivalence.

7.2. Phase 3: Findings

Item response was good with minimal missing data. However, data from the pilot testing indicated that there was some item redundancy in the ICT skills sub-scale and eight items were poor discriminators. The sub-scale was thus reduced from 16 to 11 items with an internal consistency of α = .91. This was the result of advice regarding the need to be as parsimonious as possible with items measuring a construct (Carmines & Zeller, 1979; Netemeyer, Bearden, & Sharma, 2003). Furthermore, some items were negatively worded to reduce response bias.
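The paper does not state the exact criterion used to flag the eight poor discriminators. One conventional pilot-stage check is the corrected item-total correlation, sketched below on random placeholder data; the .30 cut-off is a common rule of thumb, not the study's stated rule.

# Corrected item-total correlations as one possible redundancy/discrimination check.
# Random placeholder data; the .30 cut-off is a convention, not the paper's criterion.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
pilot = pd.DataFrame(rng.integers(1, 6, size=(60, 16)),            # 60 pilot respondents, 16 skill items
                     columns=[f"skill_{i:02d}" for i in range(1, 17)])

for col in pilot.columns:
    rest_total = pilot.drop(columns=col).sum(axis=1)                # scale total without the item itself
    r = pilot[col].corr(rest_total)
    flag = "  <- candidate for removal" if r < 0.30 else ""
    print(f"{col}: corrected item-total r = {r:+.2f}{flag}")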

The term ICT may be familiar to education technologists and researchers but is not widely used outside higher education. Following feedback from respondents in the pilot study concerning terminology, 'use of computers' and 'use of the Internet or web' were used for the main study. Respondents also identified confusion concerning the use of 'home' in some items and this was amended to 'where they lived'. These changes were applied to all sub-scales of the instrument. Linguistic changes were also made to the computers in education sub-scale. Items were also moved between sub-scales following pilot work; thus 'I like using a computer for learning because I can work at my own pace' and 'I have used computers to support my learning' were included in the attitude to computers for education sub-scale for analysis. One item was trimmed from the computer experience sub-scale and one added to the computer access sub-scale concerning access to the Internet where respondents lived. Following pilot testing two different response scales were adopted: strongly agree, agree, undecided, disagree, strongly disagree; and often, sometimes, rarely, never.
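For analysis, responses on scales like these are typically coded numerically, with negatively worded items reverse-scored so that higher scores consistently indicate the more positive response. A minimal sketch with invented responses and an assumed 1 to 5 coding (the paper does not specify its numeric coding):

# Coding Likert responses and reverse-scoring negatively worded items (invented data).
import pandas as pd

agreement = {"Strongly disagree": 1, "Disagree": 2, "Undecided": 3,
             "Agree": 4, "Strongly agree": 5}

responses = pd.DataFrame({
    "I am confident using computers": ["Agree", "Strongly agree", "Undecided"],
    "I get a sinking feeling when I think of trying to use a computer":
        ["Disagree", "Strongly disagree", "Agree"],
})

coded = responses.replace(agreement)

# reverse-score the negatively worded item (1 <-> 5, 2 <-> 4) before summing a sub-scale
negative_items = ["I get a sinking feeling when I think of trying to use a computer"]
coded[negative_items] = 6 - coded[negative_items]

print(coded.sum(axis=1))   # per-respondent sub-scale score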

8. Phase 4

8.1. Testing of refined instrument

The modified instrument was subsequently tested for reliability with a convenience sample of graduate students (n = 70) enrolled on a diploma in higher education in nursing during their first six weeks of study. The survey was conducted in paper format on two occasions, two weeks apart, to examine stability. While online data collection would have been preferred, the risk of low respondent recruitment if they had been asked to move from a classroom to a computer room to complete the survey, or alternatively asked to complete it in their own time, meant that a paper format was adopted. The response rate for T1 was 78% (n = 55) and for T2 65% (n = 46), with 46 respondents completing both T1 and T2 surveys with good data quality and few non-responses.


Table 2
Ranking of items.

Rank | Domain | Item | Remarks | Used for pilot
1 | 8 | I do not need help with using computers for learning | | Yes
2 | 5 | I have access to a computer whenever I want | | Yes
3 | 8 | I have used computers to support my learning | | Yes
4 | 8 | I find learning using a computer and the web exciting | Changed to: Interesting | Yes
5 | 8 | I like using a computer for learning because I can work at my own pace | | Yes
6 | 8 | I avoid using computers | Opposite to: 21 I use a computer | No
7 | 8 | I like to look things up using a computer | More specific items used | No
8 | 8 | I have a lot of self-confidence when it comes to working with computers | Changed to: I am confident using computers | Yes
9 | 8 | In the last six months how many hours per week – education | | Yes
10 | 8 | My computer skills are excellent | Changed to: 'very good'; respondents may not always say they are excellent | Yes
11 | 8 | Level of comfort with computers and Internet (scale: very comfortable – very uncomfortable) | | Yes
12 | 4 | I have been using computers and the Internet for: (interval list 'no time' – '5 or more') | | Yes
13 | 8 | Average hours per day spent on the Internet at home in the last 6 months | | Yes
14 | 8 | I am comfortable using computers | Same as: 11 and 43 | No
15 | 8 | In the last six months how many hours/week – shopping/gathering product information | | Yes
16 | 8 | In the last six months how many hours per week – entertainment | Changed to: Leisure | Yes
17 | 8 | In the last six months how many hours per week – work/business | | Yes
18 | 8 | In the last six months how many hours per week – communication with others (not email) | | Yes
19 | 8 | In the last six months how many hours per week – gathering information for personal need | | Yes
20 | 8 | In the last six months how many hours per week – other | | Yes
21 | 8 | I use a computer | | Yes
22 | 8 | I feel confident making selections from an on screen menu | Changed to: From popup menus | Yes
23 | 8 | I have used the Internet to look for information for example: Library catalogue/books/ | | Yes
24 | 8 | Computers are useful to me | Changed to: I use computers whenever I can | Yes
25 | 8 | I feel confident using the computer to write a letter or essay | | Yes
26 | 8 | I feel confident escaping or exiting from a program or software | Changed to: Closing down a software program | Yes
27 | 8 | In the last six months how many hours per week – wasting time | Changed to: Surfing | Yes
28 | 6 | Working with a computer would make me feel very nervous | Similar to 65 | No
29 | 8 | I feel confident calling up a data file to view on the monitor screen | Out of date language | No
30 | 7 | The Internet is an integral part of the education process | Similar to 37, complex language | No
31 | 8 | I communicate with people using email | | Yes
32 | 8 | I have never used a computer before | Poor discriminator | No
33 | 3 | I can locate a web site | | Yes
34 | 8 | I am overwhelmed when I try to use the Internet for my study | Replaced with 'I feel disadvantaged by my lack of experience with computers' – suggested by student rater | Yes
35 | 8 | I would recognise a Portable Document File (PDF) | | Yes
36 | 6 | Computers do not scare me at all | Similar to 65 | No
37 | 7 | Knowledge of the Internet is essential to surviving in university | | Yes
38 | 8 | Browsing the Internet confuses me | Not specific to educational use | No
39 | 8 | I use a computer at home | | Yes
40 | 3 | I can send email | | Yes
41 | 8 | I do not want to be the only one asking for help with computer skills | | Yes
42 | 7 | I wish I did not have to use the Internet for educational purposes | Related to positive items 4, 5, 53; item redundancy | No
43 | 6 | Computers make me feel uncomfortable | Similar to 11 and 14 | No
44 | 8 | I enjoy getting information from books and the Internet equally | Changed post pilot | Yes
45 | 3 | I can attach a file to an email | | Yes
46 | 6 | I do not feel threatened when others talk about computers | Similar to 65 | No
47 | 6 | Computers make me feel uneasy and confused | Rejected by student rater | No
48 | 8 | I feel anxious when learning computer terminology | Not relevant | No
49 | 8 | Access to the Internet for educational purposes is not important to me | Low discriminatory power | No
50 | 3 | I know how to print a document | | Yes
51 | 3 | I can use a word processing application | | Yes
52 | 6 | Computers make me feel uneasy and confused | Duplicate of 47 | No
53 | 7 | Using the Internet makes learning fun | | Yes
54 | 7 | I am indifferent about using the Internet for education | Opposite to 4 | No
55 | 8 | I have used computers at work/school | | Yes
56 | 4 | I know how to create a folder | | Yes
57 | 6 | I feel anxious about taking a course about the uses of computers | Not relevant, not part of curriculum | No
58 | 7 | The Internet is as informative as the lecturer | Changed post pilot | Yes
59 | 7 | I do not look forward to using the Internet for assessed work | | Yes
60 | 7 | Using the Internet is as convenient as using the library | | Yes
61 | 8 | I can search a database | | Yes
62 | 4 | I have used a website to find out about careers | | Yes
63 | 8 | Other students have much better computer skills than I do | Risk of social desirability bias; used 41 | No
64 | 8 | Other students seem to know much more about using the web than I do | Risk of social desirability bias and not related to education | No
65 | 6 | I get a sinking feeling when I think of trying to use a computer | | Yes
66 | 8 | I have used computers for entertainment | Changed to leisure | Yes
67 | 8 | I know how to use a mouse | | Yes
68 | 3 | I know how to save a document | | Yes

Domain key: 3 = IT skills, 4 = experience with computers, 5 = access to computers, 6 = attitude to computers, 7 = attitude to web and Internet, 8 = other or poor agreement.

Table 3
Comparison of respondents in test–retest with respondents in main study.

               Test–retest sample (n = 46)    Main study sample (n = 287)
Age ≤ 27       31 (57%)                       155 (54%)
Female         39 (85%)                       244 (85%)
White          36 (78%)                       148 (52%)
Graduate       46 (100%)                      35 (12%)


Of the 46 paired sets of data, more than half (57%) of the respondents were 27 years of age or under and the majority were White (78%) and female (85%); 85% had a degree and 15% had a post-graduate qualification (Table 3).

Table 4
Kappa values for items.

No. | Item | Cohen Kappa | Agreement (%) | Agreement within one category (%) | Note
1 | I know how to create a folder | .50 | 72 | 98 |
2 | I feel confident closing down a software program | .38 | 80 | 100 |
3 | I do not feel confident attaching a file to an email | .48 | 67 | 83 | Range −4 to +4
4 | I can use a word processing application | .62 | 85 | 100 |
5 | I know how to save a document to a folder | .83 | 87 | 100 |
6 | I would not recognise a PDF (Portable Document File) | .57 | 72 | 89 | Range −2 to +4
7 | I would have difficulty finding a web site | .43 | 74 | 96 |
8 | I feel confident making selections from popup menus | .43 | 63 | 89 | Range −3 to +2
9 | I would find it difficult to search a database of references to journal papers | .51 | 59 | 87 | Range −3 to +3
10 | I need help with using computers for learning | .44 | 61 | 94 |
11 | I feel confident using a computer to write a letter or essay | .55 | 83 | 96 | Range −4 to +4
12 | I have used computers at work/school | .59 | 87 | 100 |
13 | I feel disadvantaged by my lack of experience with computers | .68 | 80 | 96 |
14 | I communicate with people using email | .66 | 87 | 100 |
15 | I have used computers to look for information for example: Library catalogue/books | .38 | 76 | 98 |
16 | I have used computers to support my learning | .33 | 67 | 100 |
17 | I have used computers to find out about careers | .57 | 82 | 89 |
18 | I have used computers for leisure | .48 | 78 | 100 |
19 | I have been using computers for | .46 | 82 | 98 |
20 | In the last 2 weeks I have used the Internet on average per day for | .35 | 60 | 96 | Range −3 to +1
21 | My level of comfort with computers is | .63 | 73 | 93 | Range −1 to +4
22 | I rate my computer skills on a scale of 1–10 where 10 = excellent | .54 | 49 | 91 | Range −2 to +2
35 | I have access to a computer whenever I want | .45 | 72 | 98 |
36 | I use a computer where I live | .51 | 72 | 91 | Range −3 to +3
37 | I have access to the Internet where I live | .53 | 70 | 89 | Range −2 to +3
38 | I am confident using computers | .68 | 80 | 100 |
39 | I do not want to be the only one asking for help with computer skills | .31 | 48 | 89 | Range −3 to +1
40 | I get a sinking feeling when I think of trying to use a computer | .53 | 72 | 93 |
41 | I use computers whenever I can | .41 | 61 | 94 | Range −2 to +3
42 | Knowledge of computers is essential to surviving at university | .49 | 78 | 100 |
43 | Using computers makes learning fun | .35 | 54 | 87 | Range −2 to +3
44 | I like using a computer for learning because I can work at my own pace | .35 | 57 | 87 | Range −2 to +3
45 | I enjoy getting information from books and computers equally | .42 | 59 | 85 | Range −2 to +3
46 | Using web based learning can substitute for lecturer contact | .25 | 50 | 80 | Range −2 to +4
47 | I do not look forward to using computers for assessed work | .24 | 46 | 81 | Range −3 to +3
48 | Using computers is as convenient as using the library | .24 | 46 | 87 | Range −2 to +2
49 | I find learning using computers interesting | .45 | 63 | 93 | Range −2 to +2
50 | Web based learning can never substitute for face to face teaching | .45 | 50 | 96 | Range −4 to +1

8.2. Phase 4: Findings

Responses to each item at T1 and T2 data collection were analysed using Cohen's Kappa (κ) (adjusted proportion of agreement), which calculates the actual percentage of agreement and the percentage within one category (Table 4) and measures how much of the agreement between the two measurement points is not attributable to chance. A κ = 1 represents perfect agreement and κ = 0 indicates any agreement between items at T1 and T2 is only attributable to chance. A Kappa of .81 or above indicates almost perfect agreement, .61–.80 substantial, and .41–.60 moderate. An item with a Kappa of less than .4 should be treated as unreliable (Landis & Koch, 1977). Ten items had a Kappa less than .4; these items showed lower agreement between tests and seven had a wide range of response.
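A minimal sketch of this stability check in Python, using scikit-learn's cohen_kappa_score in place of SPSS; the paired responses are invented, and the within-one-category figure is assumed here to mean the proportion of pairs differing by at most one scale point.

# Item stability between T1 and T2 for the same respondents (invented 1-5 codes).
import numpy as np
from sklearn.metrics import cohen_kappa_score

t1 = np.array([5, 4, 4, 3, 5, 2, 4, 5, 3, 4])   # item responses at T1
t2 = np.array([5, 4, 3, 3, 5, 2, 5, 5, 3, 4])   # same respondents two weeks later

kappa = cohen_kappa_score(t1, t2)
exact_agreement = np.mean(t1 == t2) * 100
within_one = np.mean(np.abs(t1 - t2) <= 1) * 100

print(f"kappa = {kappa:.2f}, exact agreement = {exact_agreement:.0f}%, "
      f"within one category = {within_one:.0f}%")
# Landis and Koch (1977) bands: >= .81 almost perfect, .61-.80 substantial,
# .41-.60 moderate, < .40 treated as unreliable.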


Table 5
Cronbach's alpha for sub-scales test–retest.

Sub-scale | Number of items | α T1 (n = 55) | α T2 (n = 46) | α if item removed
IT skills | 10 | .81 | .79 | Delete negatively phrased items: T1 item 9 α = .813; T2 item 3 α = .793 or item 9 α = .801
IT experience | 7 | .87 | .76 | No improvement
IT use | 9 | .67 | .84 | T1 delete item 23 or 24 α = .69; T2 no improvement
IT access | 3 | .77 | .88 | No improvement
Attitude to computers | 4 | .72 | .70 | T1 delete negatively phrased item 39 α = .73 or item 41 α = .75; T2 delete item 39 α = .75
Attitude to computers in education | 10 | .78 | .77 | T1 no improvement; T2 delete negatively phrased item 50 α = .82


The majority (n = 22) of Kappa values suggested moderate agreement and six items showed substantial agreement between the tests. In a single case Kappa was highly consistent (κ = .83) between tests. There was little evidence of positive or negative bias in the distribution of changes between T1 and T2. The items with the lowest agreement between tests were those relating to using computers for education.

Cronbach's alpha coefficients for each sub-scale at T1 and T2 are presented in Table 5. The alpha coefficient for all the sub-scales was greater than .70, which is regarded as adequate (Netemeyer et al., 2003), although others have advocated .80 while acknowledging that this was difficult to achieve (Carmines & Zeller, 1979). In order to measure a construct satisfactorily, items are needed that adequately capture the domain, for example, ICT skills. Although increasing the number of items is known to increase reliability there are associated problems. Firstly, respondents will only complete questionnaires if they are concise (Netemeyer et al., 2003). Secondly, adding more items might initially increase the alpha coefficient but the more items there are the smaller the increase in alpha with each new addition (Carmines & Zeller, 1979). Finally, the more items there are the more likely it is that there are multiple constructs. In the test–retest survey, if items were removed, improvement to the alpha for the sub-scales attitude to computers and attitude to computers in education would be marginal (<.02). There did not, therefore, appear to be an excessive number of items or items which were not representing the domain (Table 5).
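A minimal sketch of how Cronbach's alpha and the 'alpha if item removed' figures in Tables 5 and 6 can be computed; the data below are random placeholders rather than study responses, and the function simply implements the standard formula.

# Cronbach's alpha for a sub-scale, plus alpha with each item removed (placeholder data).
import numpy as np
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """alpha = k/(k-1) * (1 - sum of item variances / variance of the total score)."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1).sum()
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances / total_variance)

rng = np.random.default_rng(1)
subscale = pd.DataFrame(rng.integers(1, 6, size=(55, 10)),        # 55 respondents, 10 items coded 1-5
                        columns=[f"item_{i}" for i in range(1, 11)])

print(f"alpha = {cronbach_alpha(subscale):.2f}")
for col in subscale.columns:
    print(f"alpha if {col} removed = {cronbach_alpha(subscale.drop(columns=col)):.2f}")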

9. Phase 5

9.1. Main study

The instrument collected data from a population of pre-registration nursing students (n = 458) on three occasions during their first year of university education.

Table 6
Cronbach's alpha for sub-scales T1, main study.

Sub-scale | Number of items | α T1 (n = 135) | α if item removed
IT skills | 10 | .75 | Delete item 11 α = .75
IT experience | 7 | .78 | No improvement
IT use | 9 | .77 | No improvement
IT access | 3 | .80 | No improvement
Attitude to computers | 4 | .62 | Delete item 41 α = .68
Attitude to computers in education | 10 | .62 | Delete item 42 α = .63, or negatively phrased item 50 α = .64, or item 10 α = .68

Data collection for the main survey followed teaching events in either computer rooms or lecture theatres. The internal consistency of the instrument was assessed using responses at the second time-point, T2, where item response was high although unit response was low (n = 135, 29%) (Table 6). The low unit response was a potential source of bias as some sub-groups were poorly represented. The sample comprised 44% White, 87% female, 49% aged 27 years or under and 7% degree graduates (Table 3) and was significantly different from the pilot study sample (78% White) in respect of ethnicity (p < .005).
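The paper does not name the test behind the p < .005 ethnicity comparison; a chi-square test of independence on the counts from Table 3 is one plausible reading, sketched below (the choice of test and the 2 x 2 framing are assumptions).

# Possible reading of the ethnicity comparison (p < .005): chi-square on Table 3 counts.
from scipy.stats import chi2_contingency

#                White  non-White
counts = [[36, 10],          # test-retest sample (n = 46)
          [148, 139]]        # main study sample (n = 287)

chi2, p, dof, expected = chi2_contingency(counts)
print(f"chi-square = {chi2:.1f}, df = {dof}, p = {p:.4f}")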

9.2. Reliability testing with large sample

All the sub-scales demonstrated acceptable internal consistency; however, the attitude to computers and attitude to computers in education sub-scales were only just adequate at T2. The removal of items would have made marginal improvements to reliability. The items are listed in Appendix A; starred items would be deleted or amended in the refinement process.

10. Discussion

A new instrument was constructed and proved reliable on repeated use. The instrument demonstrated acceptable reliability across the sub-scales using a test–retest and, furthermore, the reliability was reasonable when applied to the larger, more diverse main study sample. This indicates that the sub-scales might be enhanced by further work regarding language and terminology. Some of the issues which emerged from the process of validating the instrument included: lack of conceptual clarity concerning e-learning; advances in technology; scaling issues; and cultural variation amongst respondents.

The new instrument contained both new and derived items and consequently two related aspects of validity arose, namely, conceptual clarity and the use of specific ICT terminology within items. The first issue relates to the breadth of e-learning as a concept given it currently encompasses anything from presentation technologies in the classroom, video, virtual learning environments and, as the study progressed, the additional new forms of social software used for learning. Additionally, during the lifespan of the study 'e-learning' was partially replaced in the literature by 'technology enhanced learning'. The literature also referred to online learning, blended learning and computer-assisted learning. Accordingly, as the pilot study findings demonstrated, the language used in the instrument had to be simple and not include any terms which might be unclear.

The experts were all employed in higher education and were regular ICT users, a number of whom were also educational technologists, and consequently were familiar with terminology in daily use in the Higher Education (HE) environment and were able to extrapolate from one term to another. The use of experts to review items is a well established practice in tool development but the consultation of a wider group similar to the intended population might have identified other areas of misunderstanding. Any future iteration would include a larger and more diverse group of student raters in order to establish their understanding of instrument items and possibly generate new items which cover contemporary concerns about the use of ICT in education.

The measurement of attitudes to computers is complex conceptually, combining emotion, cognition, control and intentions while also being dependent upon the context of the users (Kay, 1993). While some users might be confident using computers in their domestic settings, they may be less confident concerning their educational use. This has been identified by two other studies (Garland & Noyes, 2004, 2005) concerning psychology students' attitudes to computers and books, indicating the need for survey items to reflect the learning context.


It emerged that there were some potentially confusing or double-barrelled items in the ICT education sub-scale. For example, 'I enjoy getting information from books and computers equally' did not distinguish whether students preferred one over the other, so the item will need to be split into two, for example, 'Books are the best source of information for nursing education' and 'Computers give access to the best sources of information for nursing education'. Noyes and Garland (2005) adapted the Computer Attitude Measure (CAM) (Kay, 1993) to measure attitudes to books and computers separately and then compared the responses, confirming that students in their study were more comfortable using books but were not negative about using computers.

A revised experience sub-scale might also be improved by including additional items to establish for what purpose respondents are using ICT, for example: emailing a tutor; using a discussion forum; keeping an online diary; and collaborative work groups. In addition, there may be a case for devising a small group of diagnostic items which discriminate where students might need support. These items would change over time. It is no longer enough to ask students if they use email and word processing confidently; rather, what needs to be measured is their confidence in achieving more advanced use of these in the educational setting and their overall computer literacy.

The test–retest sample were graduates, some in transition from another university to the study site, while some were mature students returning to study after an interval. While being computer users, they may have found that greater use of computers for education was expected in their new program compared to their previous experience as undergraduates. Furthermore, the first survey may also have raised their awareness of computer use, thus effecting a greater change between surveys and resulting in lower stability of the instrument.

The instrument development included a rigorous process of testing and validation, with the results from the pilot and reliability survey suggesting that the instrument was reliable and valid. The slightly lower internal consistency of some sub-scales observed in the main study suggests that where an instrument is used with a more diverse population comprising varying cultures then careful attention must be paid to the clarity and relevance of items (Netemeyer et al., 2003). There is evidence that there is cultural variation in the understanding of items (Hemert, Baerveldt, & Vermande, 2001; Johnson et al., 2006; Warnecke et al., 1997), in particular abstract items (Johnson et al., 2006; Lee, Jones, Mineyama, & Zhang, 2002) and responses to Likert scales (Lee et al., 2002), and that this is independent of age or prior education. Furthermore, there is some evidence of varied resistance to admitting deficits in skills and knowledge across cultures (Hemert et al., 2001; Lee et al., 2002). If the instrument is to be used in heterogeneous populations, future refinement of the instrument should include cross-cultural validation. These issues could be addressed by the involvement of a more diverse group of experts.

10.1. Limitations

Constructing and validating measurement instruments requires attention to rigor. In this field, where there are unstable concepts and limited prior work, the challenges of ensuring validity are particularly significant. In addition, a lack of similar measures limits opportunities to establish equivalence. In consequence, further iterations may require item modification.

11. Conclusion

Validation of new instruments is traditionally a long process; however, the development cycle in this field may need to be condensed to acknowledge the rapid advances in technology, and while this is an additional challenge to rigor it may prove a key factor in validation. The first stage of validation of this new instrument suggests that it is reliable and valid with nursing students and provides the basis for continuing refinement to yield a useful contemporary instrument.

The challenge of trans-cultural validity is important and not unique to this area of measurement. However, it is of particular importance in relation to the healthcare workforce, which is mobile with a complex mix of cultures particularly in large metropolitan areas. There is an expectation that all healthcare staff will be information literate (DH, 2004, 2008) and there is a growing dependence on ICT at the point of healthcare delivery (DH, 2008a). Furthermore, healthcare cost containment means that there is increasing demand for the use of ICT to provide continuing education. In order to design and develop high quality flexible and distance learning resources, it is important to understand how healthcare students' ICT skills are developing and how they view and adopt technology enhanced learning. Additional re-validation of this instrument is needed using larger samples across the UK if there is to be a national study of nursing students' experience with computers and the Internet to inform educational developments to meet workforce needs.

Acknowledgement

The authors acknowledge the support of Peter Milligan, Senior Applications Analyst Advisor at King's College London, with statistical analysis.

Appendix A

ICT skills
I know how to create a folder
I feel confident closing down a software program
I do not feel confident attaching a file to an email
I can use a word processing application
I know how to save a document to a folder
I would not recognise a PDF (Portable Document File)
I would have difficulty finding a web site*
I feel confident making selections from popup menus*
I would find it difficult to search a database of references to journal papers
I need help with using computers for learning (a)
I feel confident using a computer to write a letter or essay
I do not know how to find an article in an e-journal

Experience with computers
I have used computers at work/school*
I feel disadvantaged by my lack of experience with computers
I communicate with people using email
I have used computers to look for information for example: Library catalogue/books
I have used computers to support my learning
I have used computers to find out about careers
I have used computers for leisure

Disparate items
I have been using computers for (timescale)
In the last 2 weeks I have used the Internet on average per day for (timescale)
My level of comfort with computers is (comfort scale)
I rate my computer skills on a scale of 1–10, where 10 = excellent, as:
In the last two weeks how many hours per week have you used computers and the web for:
  Education
  Shopping/gathering product information
  Leisure
  Work/business
  Email
  Other forms of communication (e.g. chat, forums, videoconference)
  Gathering information for personal needs
  Surfing the web*
  Other activity

Access to computers
I have access to a computer whenever I want
I use a computer where I live
I have access to the Internet where I live
I use computers in the PAWS rooms
I use computers on placement

Attitude to computers
I am confident using computers
I do not want to be the only one asking for help with computer skills
I get a sinking feeling when I think of trying to use a computer
I use computers whenever I can

Attitude to computers in education
Knowledge of computers is essential to surviving at university*
Using computers makes learning fun
I like using a computer for learning because I can work at my own pace
I enjoy getting information from books and computers equally*
Using web based learning can substitute for lecturer contact
I do not look forward to using computers for assessed work
Using computers is as convenient as using the library
I find learning using computers interesting
Web based learning can never substitute for face to face teaching

* Further refinement required; (a) Moved to attitudes to computers in education for analysis.

References

Allen, L. R. (1986). Measuring attitude toward computer assisted instruction. Computers in Nursing, 4(4), 144–151.

Balnaves, M., & Caputi, P. (2001). Introduction to quantitative research methods: An investigative approach. London: Sage.

Barbeite, F. G., & Weiss, E. M. (2004). Computer self-efficacy and anxiety scales for an Internet sample: Testing measurement equivalence of existing measures and development of new scales. Computers in Human Behavior, 20, 1–15.

Bosveld, K., Connolly, H., & Rendall, M. S. (2006). A guide to comparing 1991 and 2001 Census ethnic group data. Office for National Statistics.

Carmines, E. G., & Zeller, R. A. (1979). Reliability and validity assessment. Newbury Park: Sage.

Chumley-Jones, H. S., Dobbie, S., & Alford, C. L. (2002). Web-based learning: Sound educational method or hype? A review of the evaluation literature. Academic Medicine, 77(10), S86–S93.

Compeau, D. R., & Higgins, C. A. (1995). Computer self-efficacy: Development of a measure and initial test. MIS Quarterly, 19, 189–211.

Department of Health (2000). The NHS plan: A plan for investment, a plan for reform. London: HMSO. [Retrieved from: http://www.dh.gov.uk/assetRoot/04/05/57/83/04055783.pdf on 13 May 2006.]

Department of Health (2004). The NHS knowledge and skills framework (NHS KSF) and the development review process. London: Department of Health Publications. [Retrieved from: http://www.dh.gov.uk/en/Publicationsandstatistics/Publications/PublicationsPolicyAndGuidance/DH_4090843?IdcService=GET_FILE&dID=18018&Rendition=Web on 12 November 2008.]

Department of Health (2008). Health informatics review. DH Publications. [Retrieved from: http://www.dh.gov.uk/en/Publicationsandstatistics/Publications/PublicationsPolicyAndGuidance/DH_086073 on 18 January 2009.]

Department of Health (2008a). NHS next stage review: A high quality workforce. DH Publications. [Retrieved from: http://www.dh.gov.uk/en/Publicationsandstatistics/Publications/PublicationsPolicyAndGuidance/DH_085840?IdcService=GET_FILE&dID=169125&Rendition=Web on 5 March 2009.]

Duggan, A., Hess, B., Morgan, D., Sooyeon, K., & Wilson, K. (2001). Measuring students' attitudes toward educational use of the Internet. Journal of Educational Computing Research, 25(3), 267–281.

Garland, K. J., & Noyes, J. M. (2004). Computer experience: A poor predictor of computer attitudes. Computers in Human Behavior, 20, 823–840.

Garland, K., & Noyes, J. (2005). Attitudes and confidence towards computers and books as learning tools: A cross-sectional study of student cohorts. British Journal of Educational Technology, 36(1), 85.

Greenhalgh, T. (2001). Computer assisted learning in undergraduate medical education. BMJ, 322, 40–44.

Hemert, D. A. v., Baerveldt, C., & Vermande, M. (2001). Assessing cross-cultural item bias in questionnaires: Acculturation and the measurement of social support and family cohesion for adolescents. Journal of Cross-Cultural Psychology, 32, 381–396.

Hobbs, S. D. (2002). Measuring nurses' computer competency: An analysis of published instruments. CIN: Computers, Informatics, Nursing, 20(2), 63–73.

Jayasuriya, R., & Caputi, P. (1996). Computer attitude and computer anxiety in nursing: Validation of an instrument using an Australian sample. Computers in Nursing, 14(6), 340.

Johnson, T. P., Cho, Y. I., Holbrook, A. L., O'Rourke, D., Warnecke, R., & Chavez, N. (2006). Cultural variability in the effects of question design features on respondent comprehension of health surveys. Annals of Epidemiology, 16(9), 661–668.

Kay, R. H. (1993). An exploration of theoretical and practical foundations for assessing attitudes toward computers: The Computer Attitude Measure (CAM). Computers in Human Behavior, 9, 371–386.

Kreideweis, J. (2005). Indicators of success in distance education. CIN: Computers, Informatics, Nursing, 23(2), 68–72.

Landis, J. R., & Koch, G. G. (1977). The measurement of observer agreement for categorical data. Biometrics, 33(1), 159–174.

Lee, J. W., Jones, P. S., Mineyama, Y., & Zhang, X. E. (2002). Cultural differences in responses to a Likert scale. Research in Nursing and Health, 25(4), 295–306.

Lewis, M. J., Davies, R., Jenkins, D., & Tait, M. I. (2001). A review of evaluative studies of computer-based learning in nursing education. Nurse Education Today, 21, 26–37.

Loyd, B. H., & Gressard, C. (1984). Reliability and factorial validity of computer attitude scales. Educational and Psychological Measurement, 44, 501–505.

Maurer, M. M., & Simonson, M. R. (1984). Development and validation of a measure of computer anxiety. Unpublished manuscript.

Netemeyer, R. G., Bearden, W. O., & Sharma, S. (2003). Scaling procedures: Issues and applications. Thousand Oaks: Sage.

Neuman, W. L. (2003). Social research methods: Qualitative and quantitative approaches (5th ed.). London: Allyn and Bacon.

Noyes, J., & Garland, K. (2005). Students' attitudes toward books and computers. Computers in Human Behavior, 21(2), 233.

Parks, P. L., Damrosch, S. P., Heller, B. R., & Romano, C. A. (1986). Faculty and student perceptions of computer applications in nursing. Journal of Professional Nursing, 2, 104–113.

Selwyn, N. (1997). Students' attitudes toward computers: Validation of a computer attitude scale for 16–19 education. Computers in Education, 28(1), 35–41.

Sharpe, R., Benfield, G., Lessner, E., & DeCicco, E. (2005). Scoping study for the pedagogy strand of the JISC e-Learning Programme. [Retrieved from: http://www.jisc.ac.uk/uploaded_documents/scoping study final report v4.1.doc on 28 March 2006.]

Sinclair, M., & Gardner, J. (1999). Planning for information technology key skills in nurse education. Journal of Advanced Nursing, 30(6), 1441–1450.

SPSS for Windows, Rel. 15.0.1 (2006). SPSS Inc., Chicago.

Steele, D. J., Johnson Palensky, J. E., Lynch, T. G., Lacy, N. L., & Duffy, S. W. (2002). Learning preferences, computer attitudes, and student evaluation of computerised instruction. Medical Education, 36(3), 225–232.

Stronge, J. H., & Brodt, A. (1985). Assessment of nurses' attitudes towards computerization. Computers in Nursing, 3(4), 154–158.

UKCC (1999). Fitness for practice: The UKCC commission for nursing and midwifery education. UKCC.

Warnecke, R. B., Johnson, T. P., Chavez, N., Sudman, S., O'Rourke, D. P., Lacey, L., et al. (1997). Improving question wording in surveys of culturally diverse populations. Annals of Epidemiology, 7(5), 334–342.

Wilkinson, A., While, A. E., & Roberts, J. (2009). Measurement of information and communication technology experience and attitudes to e-learning of students in the healthcare professions: Integrative review. Journal of Advanced Nursing, 65(4), 755–772.

Yu, S., & Yang, K. F. (2006). Attitudes toward web-based distance learning among public health nurses in Taiwan: A questionnaire survey. International Journal of Nursing Studies, 43(6), 767–774.