Construction of an instrument to measure student information and communication technology skills, experience and attitudes to e-learning

Ann Wilkinson *, Julia Roberts, Alison E. While

King's College London, Florence Nightingale School of Nursing and Midwifery, James Clerk Maxwell Building, 57 Waterloo Road, London SE1 8WA, United Kingdom

Computers in Human Behavior 26 (2010) 1369–1376

doi:10.1016/j.chb.2010.04.010

* Corresponding author. Tel.: +44 20 7848 3708; fax: +44 20 7848 3506. E-mail address: (A. Wilkinson).

Article history: Available online 20 May 2010

Abstract

Over the past 20 years self-report measures of healthcare students' information and communication technology skills have been developed with limited validation. Furthermore, measures of student experience of e-learning emerged but were not repeatedly used with diverse populations. A psychometric approach with five phases was used to develop and test a new self-report measure of skills and experience with information and communication technology and attitudes to computers in education. Phase 1: Literature review and identification of key items. Phase 2: Development and refinement of items with an expert panel (n = 16) and students (n = 3) to establish face and content validity. Phase 3: Pilot testing of the draft instrument with graduate pre-registration nursing students (n = 60) to assess administration procedures and acceptability of the instrument. Phase 4: Test–retest with a further sample of graduate pre-registration nursing students (n = 70) tested stability and internal consistency. Phase 5: Main study with pre-registration nursing students (n = 458), further testing of internal consistency. The instrument proved to have moderate test–retest stability and the sub-scales had acceptable internal consistency. When used with a larger, more diverse population the psychometric properties were more variable. Further work is needed to refine the instrument with specific reference to possible cultural and linguistic response patterns and technological advances.

© 2010 Elsevier Ltd. All rights reserved.

Keywords: Instrument development; Scale validation; Test–retest; Nurse education; ICT skills; Attitudes

1. Introduction

In line with other developed countries, the United Kingdom (UK) has increased both student numbers and their diversity and encouraged universities to give students access to flexible and online modes of learning (DH, 2000; UKCC, 1999). The consequences of this development for different groups of students are unknown, with a recent literature review (Wilkinson, While, & Roberts, 2009) indicating that there is an absence of large robust studies concerning student experience of computers and attitudes to e-learning in the health professions and in particular nursing. In addition, there are few accounts of the development and validation of instruments to measure experience, attitudes and anxiety in the context of e-learning in nursing education. Furthermore, the current evidence regarding students' experience of computers and attitudes to e-learning is mainly based upon small, ill-defined evaluation studies which often included the measurement of student attitude as a secondary or incidental outcome. This paper reports the development and validation of an instrument to measure experience with information and communication technology (ICT) and attitudes to e-learning of nursing students, as part of a study investigating pre-registration nursing students' experience with computers and the Internet.

2. Study aim

The development and validation of an instrument designed to measure nursing students' reported skill and experience with ICT; confidence with computers and the Internet; attitude to computers; and attitude to ICT for education.

2.1. Background to instrument development

Reviews have identified a continuing need for a reliable instrument to measure learners' attitudes to e-learning (Hobbs, 2002; Lewis, Davies, Jenkins, & Tait, 2001). A recent review of the psychometric properties of instruments (n = 49) used in healthcare settings regarding ICT skills, experience and attitudes to the use of ICT for education (Wilkinson et al., 2009) found general measures of students' ICT skills and attitudes and more recent developments of measures of attitudes to the use of ICT in education. Insufficient methodological detail was available to assess the validity of instruments, or instruments had become dated with technological developments. Only a small number of studies demonstrated a systematic approach to developing survey instruments (Duggan,

Hess, Morgan, Sooyeon, & Wilson, 2001; Jayasuriya & Caputi, 1996) and these originated in Australia and America, respectively. Perhaps as a consequence little is known concerning the ICT skills and attitudes to e-learning of healthcare students (Chumley-Jones, Dobbie, & Alford, 2002; Greenhalgh, 2001; Kreideweis, 2005; Lewis et al., 2001) and, indeed, of students of other disciplines in the UK (Sharpe, Benfield, Lessner, & DeCicco, 2005).

    3. Method

The development and validation of the instrument had five phases (Fig. 1). Phase 1: Creation of item pool following a literature review and assessment of previous instruments. Phase 2: Reduction of items following review by expert panel and construction of draft scale. Phase 3: Pilot testing. Phase 4: Testing of refined instrument. Phase 5: Further tests of internal consistency with main sample.
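The Phase 4 stability check rests on correlating each participant's scores across the two administrations. The study's analysis used SPSS; as a purely illustrative sketch (the scores and sample below are invented, not study data), the test–retest correlation for a sub-scale can be computed as:

```python
from math import sqrt

def pearson_r(x, y):
    """Pearson correlation between paired scores from two administrations."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical sub-scale totals for five students at time 1 and time 2.
time1 = [12, 18, 15, 20, 10]
time2 = [13, 17, 16, 19, 11]
print(round(pearson_r(time1, time2), 3))
```

A coefficient near 1 indicates that respondents keep their relative standing between administrations; the abstract reports only moderate stability for this instrument.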

3.1. Ethical considerations

The university ethics committee granted permission. Where participants were involved in the study they were provided with written information concerning the study and informed of their right to withdraw at all stages.

4. Data analysis

Statistical analysis was conducted using SPSS for Windows v12–15 (2006).

5. Phase 1

5.1. Creation of item pool

Existing instruments or potential items for inclusion in the instrument were identified through an extensive literature review. Instruments measuring the use of computers in education date from early work in the 1980s (Allen, 1986). The primary focus of previous studies was: attitudes to computers (Kay, 1993; Selwyn, 1997); knowledge of computers (Parks, Damrosch, Heller, & Romano, 1986; Sinclair & Gardner, 1999); computer self-efficacy (Barbeite & Weiss, 2004; Compeau & Higgins, 1995); attitudes to computers in nursing practice (Jayasuriya & Caputi, 1996; Stronge & Brodt, 1985); and computer experience (Garland & Noyes, 2004). However, each one of these instruments addressed multiple constructs (Kay, 1993). A number of recent papers have described the validation and use of scales to measure students' attitudes to computers and the use of computers for education (Duggan et al., 2001; Steele, Johnson Palensky, Lynch, Lacy, & Duffy, 2002; Yu & Yang, 2006) but none of the existing scales were validated with healthcare students in the UK. Some were dated, such as the Stronge and Brodt (1985) Nurses' Attitudes Towards Computerisation instrument and Loyd and Gressard's (1984) Computer Attitude Scale. The majority of instruments were developed for a different context, for example: student teachers (Kay, 1993; Loyd & Gressard, 1984); the healthcare workplace (Jayasuriya & Caputi, 1996; Stronge & Brodt, 1985); 16–19 year old post-secondary students (Selwyn, 1997); psychology and economics students (Garland & Noyes, 2004, 2005); business professionals (Compeau & Higgins, 1995); or with a generic population of computer or Internet users from a wide range of occupations (Barbeite & Weiss, 2004; Maurer & Simonson, 1984). Additionally some scales included items which were no longer relevant, for example items on programming computers. An initial list of domains related to the research questions and the literature was produced.

[Fig. 1. Phases of instrument development. Phase 1, review of previous work: literature review (Wilkinson et al., 2009) identified scale items which could be modified or adapted to improve face validity, yielding an item pool of 111 items. Phase 2, external review: content validity assessed by a panel of experts (n = 16), each item rated for relevance on a 10-point scale and linked to a domain (suggested domains for internal reliability); face validity assessed by a panel of students (n = 3); concurrent validity in that the items used bear a relationship to previously validated scales; output was a draft scale of 50 items plus four demographic items. Phase 3, pilot: pre-registration nursing students (n = 60; 67% response rate); face and content validity, item trimming, acceptability and coding. Phase 4, test–retest: reliability, internal consistency (Cronbach's alpha) and initial item analysis. Phase 5: main study.]
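Internal consistency of the sub-scales (Phases 4 and 5, and the reliability step shown in Fig. 1) was assessed with Cronbach's alpha. As a hedged illustration, with invented item responses rather than the study's data, alpha for a small sub-scale works out as follows:

```python
from statistics import variance

def cronbach_alpha(items):
    """Cronbach's alpha for a list of item-score columns (one list per item)."""
    k = len(items)
    totals = [sum(scores) for scores in zip(*items)]  # each respondent's total
    item_var = sum(variance(col) for col in items)    # sum of item variances
    return (k / (k - 1)) * (1 - item_var / variance(totals))

# Hypothetical 4-item sub-scale answered by five respondents.
items = [
    [4, 3, 5, 2, 4],
    [5, 3, 4, 2, 4],
    [4, 2, 5, 3, 5],
    [5, 3, 4, 2, 3],
]
print(round(cronbach_alpha(items), 2))
```

By convention, values of roughly 0.7 or higher are read as acceptable internal consistency for a sub-scale, which is the standard the paper applies when calling its sub-scales acceptable.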

    6. Phase 2

    6.1. Review by expert panel and construction of draft scale

Expert review was used to test face and content validity. The principal components route was not followed for two reasons. First, the ICT skills and experience items involved respondents' self-reporting of cognitive skill levels and time spent on activities and were, therefore, less likely to be multi-dimensional. Secondly, a lengthy development process involving repeated revision, when conceptually the use of ICT in education is a rapidly changing field, was not likely to result in increased validity of the affective items.
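In this phase each item was rated for relevance on a 10-point scale by the expert panel (Fig. 1) before the pool was trimmed. The paper does not report its exact trimming rule, so the cut-off, the ratings, and the item wordings below are hypothetical; a mean-rating trim of an item pool might look like:

```python
def trim_items(ratings, cutoff=7.0):
    """Keep items whose mean expert relevance rating meets the cut-off.

    ratings: dict mapping item text to a list of 10-point relevance ratings.
    """
    kept = {}
    for item, scores in ratings.items():
        mean = sum(scores) / len(scores)
        if mean >= cutoff:
            kept[item] = round(mean, 1)
    return kept

# Hypothetical ratings from a small panel (the study's panel had n = 16).
ratings = {
    "I can send an email attachment": [9, 8, 10, 9],
    "I can write a BASIC program": [3, 4, 2, 5],
    "I feel confident searching the Internet": [8, 9, 7, 8],
}
print(trim_items(ratings))
```

Here the dated programming item falls below the cut-off and is dropped, mirroring the earlier observation that items on programming computers were no longer relevant.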
