Construction of an instrument to measure student information and communication technology skills, experience and attitudes to e-learning

Ann Wilkinson *, Julia Roberts, Alison E. While
King's College London, Florence Nightingale School of Nursing and Midwifery, James Clerk Maxwell Building, 57 Waterloo Road, London SE1 8WA, United Kingdom

Computers in Human Behavior 26 (2010) 1369-1376. Article history: Available online 20 May 2010. doi:10.1016/j.chb.2010.04.010
* Corresponding author. Tel.: +44 20 7848 3708; fax: +44 20 7848 3506. E-mail address: ann.wilkinson@kcl.ac.uk (A. Wilkinson).

Keywords: Instrument development; Scale validation; Test-retest; Nurse education; ICT skills; Attitudes

Abstract

Over the past 20 years self-report measures of healthcare students' information and communication technology skills have been developed with limited validation. Furthermore, measures of student experience of e-learning emerged but were not repeatedly used with diverse populations. A psychometric approach with five phases was used to develop and test a new self-report measure of skills and experience with information and communication technology and attitudes to computers in education. Phase 1: Literature review and identification of key items. Phase 2: Development and refinement of items with an expert panel (n = 16) and students (n = 3) to establish face and content validity. Phase 3: Pilot testing of the draft instrument with graduate pre-registration nursing students (n = 60) to assess administration procedures and acceptability of the instrument. Phase 4: Test-retest with a further sample of graduate pre-registration nursing students (n = 70) tested stability and internal consistency. Phase 5: Main study with pre-registration nursing students (n = 458), with further testing of internal consistency. The instrument proved to have moderate test-retest stability and the sub-scales had acceptable internal consistency. When used with a larger, more diverse population the psychometric properties were more variable. Further work is needed to refine the instrument with specific reference to possible cultural and linguistic response patterns and technological advances.

© 2010 Elsevier Ltd. All rights reserved.

1. Introduction

In line with other developed countries, the United Kingdom (UK) has increased both student nurse numbers and their diversity and encouraged universities to give students access to flexible and online modes of learning (DH, 2000; UKCC, 1999). The consequences of this development for different groups of students are unknown, with a recent literature review (Wilkinson, While, & Roberts, 2009) indicating that there is an absence of large robust studies concerning student experience of computers and attitudes to e-learning in the health professions and in particular nursing. In addition, there are few accounts of the development and validation of instruments to measure experience, attitudes and anxiety in the context of e-learning in nursing education. Furthermore, the current evidence regarding students' experience of computers and attitudes to e-learning is mainly based upon small ill-defined evaluation studies which often included the measurement of student attitude as a secondary or incidental outcome. This paper reports the development and validation of an instrument to measure experience with information and communication technology (ICT) and attitudes to e-learning of nursing students as part of a study investigating pre-registration nursing students' experience with computers and the Internet.

2. Study aim

The development and validation of an instrument designed to measure nursing students' reported skill and experience with ICT; confidence with computers and the Internet; attitude to computers; and attitude to ICT for education.

2.1. Background to instrument development

Reviews have identified a continuing need for a reliable instrument to measure learners' attitudes to e-learning (Hobbs, 2002; Lewis, Davies, Jenkins, & Tait, 2001). A recent review of the psychometric properties of instruments (n = 49) used in healthcare settings regarding ICT skills, experience and attitudes to the use of ICT for education (Wilkinson et al., 2009) found general measures of students' ICT skills and attitudes and more recent developments of measures of attitudes to the use of ICT in education. Insufficient methodological detail was available to assess the validity of instruments, or instruments had become dated with technological developments. Only a small number of studies demonstrated a systematic approach to developing survey instruments (Duggan, Hess, Morgan, Sooyeon, & Wilson, 2001; Jayasuriya & Caputi, 1996) and these originated in Australia and America, respectively. Perhaps as a consequence little is known concerning the ICT skills and attitudes to e-learning of healthcare students (Chumley-Jones, Dobbie, & Alford, 2002; Greenhalgh, 2001; Kreideweis, 2005; Lewis et al., 2001) and, indeed, of students of other disciplines in the UK (Sharpe, Benfield, Lessner, & DeCicco, 2005).
3. Method

The development and validation of the instrument had five phases (Fig. 1). Phase 1: Creation of an item pool following a literature review and assessment of previous instruments. Phase 2: Reduction of items following review by an expert panel and construction of a draft scale. Phase 3: Pilot testing. Phase 4: Testing of the refined instrument. Phase 5: Further tests of internal consistency with the main sample.

Fig. 1. Phases of instrument development. Phase 1: review of previous work via a literature review (Wilkinson et al., 2009), producing an item pool of 111 items. Phase 2: external review for content validity by a panel of experts (n = 16), with each item rated for relevance on a 10-point scale and linked to a domain, and for face validity by a panel of students (n = 3), producing a first draft scale of 50 items plus 4 demographic items. Phase 3: pilot testing with pre-registration nursing students (n = 60; 67% response rate), covering face and content validity, item trimming, acceptability, coding and internal consistency (Cronbach's alpha, initial item analysis), producing a second draft scale of 47 items plus 4 demographic items. Phase 4: test-retest with pre-registration nursing students (n = 70; 78% response at T1, 65% at T2), assessing stability (Cohen's kappa, adjusted proportion of agreement) and internal consistency (Cronbach's alpha). Phase 5: main study (T1) with pre-registration nursing students (n = 458; 29% response rate), assessing internal consistency (Cronbach's alpha) and yielding the final scale of 47 items plus 4 demographic items.

3.1. Ethical considerations

The university ethics committee granted permission. Where participants were involved in the study they were provided with written information concerning the study and informed of their right to withdraw at all stages.

4. Data analysis

Statistical analysis was conducted using SPSS v12-15 for Windows (2006).
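Phase 4 assessed test-retest stability with Cohen's kappa and an adjusted proportion of agreement (Fig. 1). As a rough illustration of that calculation, and not the authors' SPSS procedure, the sketch below computes the raw proportion of agreement and Cohen's kappa for a single item administered at two time points; the responses are hypothetical, and the specific adjustment the authors applied to the proportion of agreement is not described in this excerpt, so only the unadjusted value is shown.

```python
import numpy as np

def cohen_kappa(t1_scores, t2_scores):
    """Cohen's kappa for agreement between two administrations of one item."""
    t1 = np.asarray(t1_scores)
    t2 = np.asarray(t2_scores)
    categories = np.union1d(t1, t2)
    p_observed = np.mean(t1 == t2)                        # raw proportion of exact agreement
    p_expected = sum(np.mean(t1 == c) * np.mean(t2 == c)  # chance agreement from the marginals
                     for c in categories)
    return (p_observed - p_expected) / (1 - p_expected)

# Hypothetical 5-point Likert responses from the same ten students at test (T1) and retest (T2).
t1 = [4, 5, 3, 4, 2, 5, 4, 3, 4, 5]
t2 = [4, 4, 3, 4, 2, 5, 5, 3, 4, 5]
print("Proportion of agreement:", np.mean(np.array(t1) == np.array(t2)))
print("Cohen's kappa:", round(cohen_kappa(t1, t2), 2))
```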
5. Phase 1

5.1. Creation of item pool

Existing instruments or potential items for inclusion in the instrument were identified through an extensive literature review. Instruments measuring the use of computers in education date from early work in the 1980s (Allen, 1986). The primary focus of previous studies was: attitudes to computers (Kay, 1993; Selwyn, 1997); knowledge of computers (Parks, Damrosch, Heller, & Romano, 1986; Sinclair & Gardner, 1999); computer self-efficacy (Barbeite & Weiss, 2004; Compeau & Higgins, 1995); attitudes to computers in nursing practice (Jayasuriya & Caputi, 1996; Stronge & Brodt, 1985); and computer experience (Garland & Noyes, 2004). However, each one of these instruments addressed multiple constructs (Kay, 1993). A number of recent papers have described the validation and use of scales to measure students' attitudes to computers and the use of computers for education (Duggan et al., 2001; Steele, Johnson Palensky, Lynch, Lacy, & Duffy, 2002; Yu & Yang, 2006) but none of the existing scales were validated with healthcare students in the UK. Some were dated, such as the Stronge and Brodt (1985) Nurses' Attitudes Towards Computerisation instrument and Loyd and Gressard's (1984) Computer Attitude Scale. The majority of instruments were developed for a different context, for example: student teachers (Kay, 1993; Loyd & Gressard, 1984); the healthcare workplace (Jayasuriya & Caputi, 1996; Stronge & Brodt, 1985); 16-19 year old post-secondary students (Selwyn, 1997); psychology and economics students (Garland & Noyes, 2004, 2005); business professionals (Compeau & Higgins, 1995); or a generic population of computer or Internet users from a wide range of occupations (Barbeite & Weiss, 2004; Maurer & Simonson, 1984). Additionally, some scales included terminology which is not current in the UK.

5.2. Phase 1: Findings

5.2.1. Face validity

No existing instrument in its entirety was fit for purpose, although comparison of previous instruments demonstrated some overlap in items. The potential items were created from scanning the literature on e-learning and extracting key questions and issues. The resulting 111 items were edited to ensure that they were using contemporary English terms. Furthermore, only items related to current computer use were included; for example, items on programming computers were no longer relevant. An initial list of domains related to the research questions and the literature was produced.
6. Phase 2

6.1. Review by expert panel and construction of draft scale

Expert review was used to test face and content validity. The principal components route was not followed for two reasons. First, the ICT skills and experience items involved respondents' self-reporting of cognitive skill levels and time spent on activities and were, therefore, less likely to be multi-dimensional. Secondly, a lengthy development process involving repeated revision, when conceptually the use of ICT in education is a rapidly changing field, was not likely to result in increased validity of the affective items.

A panel of expert raters was recruited (n = 16), with representatives from nursing and midwifery education, learning technologists, healthcare researchers, educational researchers and educational developers, of whom thirteen responded (Table 1). The experts were asked to rate the list of items and to place each item in a domain from a given list. The initial item pool and domain list was first given to one expert to ensure the instructions were clear, before being amended in the light of feedback and released to the other experts. The suggested domains were reduced to seven.

Table 1
Panel of raters.

Profession of raters               Number in category    Number responding
Nurse researcher                   1                      1
Nurse educator                     3                      3
Midwife educator                   1                      1
Learning technologist              4                      3
Information specialist             1                      1
HEA subject centre director        2                      2
Education technology researcher    2                      1
Education researcher               2                      1
Total                              16                     13
Nursing students                   3                      2

In a separate exercise, current students were recruited to review the items for linguistic clarity and omissions. Three students responded and two returned completed feedback documentation. One student made written comments and as a result one item was rejected and a new one added.

6.2. Phase 2: Findings

Following expert feedback, a coding frame was created in SPSS and the rankings of each expert added against the items. These were ranked and a mean rating for each item calculated (range 2.00-8.92). All items with a mean rating over seven were included, resulting in a list of 68 items. One duplicate item was identified and two further items were the reverse of each other; the duplicates were removed, leaving 66 items. Further adjustments to language were made as a result of free-text comments made by the experts. A number of the items (n = 21) were similar or did not appear to match a domain and, therefore, not all were included in the pilot instrument (Table 2).

Items were then matched to domains. A coding sheet was created with the domain each expert had selected for an item. Where nine or more experts agreed (>70%) the item was assigned to a domain. A draft survey instrument was developed with 47 items grouped in five domains: ICT skills (16 items); experience with computers (8 items on breadth of experience, 9 items on frequency of use for activities); access to computers (2 items); attitude to computers (5 items); and attitude to computers in education (7 items). Three further items relating to computer experience asked for information on: the length of time students had used computers (1 item); average time using computers and the Internet per day (1 item); and how comfortable they felt using computers (1 item). A final section contained a set of demographic items (4), namely: gender; age; prior educational experience; and ethnic origin (using the categories from the British Census (Bosveld, Connolly, & Rendall, 2006)). There were three opportunities for unstructured responses concerning experience with and attitudes to computers. Multiple item scales (Balnaves & Caputi, 2001) were utilised, specifically Likert scales, as they are designed to measure intensity of attitudes or feelings about a topic (Neuman, 2003). The literature review had demonstrated that there was no gold standard instrument against which to measure concurrent validity or equivalence.
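The two selection rules described above, retention of items with a mean expert relevance rating above seven and assignment to a domain only where more than 70% of experts agreed, are simple aggregations over the expert coding frame. The sketch below illustrates those aggregations in Python with a small invented data set; the authors' actual coding frame was built in SPSS, and the item labels, ratings and domain names shown here are hypothetical.

```python
import pandas as pd

# Hypothetical expert-review data: one row per (expert, item), holding the relevance
# rating (10-point scale) and the domain that expert chose for the item.
ratings = pd.DataFrame({
    "expert": [1, 1, 2, 2, 3, 3],
    "item":   ["i01", "i02", "i01", "i02", "i01", "i02"],
    "rating": [9, 4, 8, 5, 7, 6],
    "domain": ["ICT skills", "Attitude to computers", "ICT skills",
               "Attitude to computers", "ICT skills", "Access to computers"],
})
n_experts = ratings["expert"].nunique()

# Rule 1: retain items whose mean relevance rating across experts exceeds seven.
mean_rating = ratings.groupby("item")["rating"].mean()
retained = mean_rating[mean_rating > 7].index.tolist()

# Rule 2: assign an item to a domain only where more than 70% of experts agree on it.
def assign_domain(domains):
    counts = domains.value_counts()
    return counts.index[0] if counts.iloc[0] / n_experts > 0.7 else None

assignment = ratings.groupby("item")["domain"].apply(assign_domain)

print(mean_rating)   # mean relevance rating per item
print(retained)      # items kept for the draft scale
print(assignment)    # agreed domain per item (None where agreement is too low)
```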
A coding sheet was cre-ated with the domain each expert had selected for an item. Wherenine or more agreed (&gt;70%) the item was assigned to a domain. Adraft survey instrument was developed with 47 items grouped inve domains: ICT skills (16 items); Experience with computers (8items breadth of experience, 9 items frequency of use for activi-ties); Access to computers (2 items); attitude to computers (5items); attitude to computers in education (7 items). Three furtheritems, which related to computer experience, were included whichasked for information on: The length of time students had usedcomputers (1 item); average time using computers and the Inter-net per day (1 item); how comfortable they felt using computers(1 item). A nal section contained a set of demographic items(4), namely: Gender; age; prior educational experience; and ethnicorigin (using the categories from the British Census (Bosveld, Con-nolly, &amp; Rendall, 2006). There were three opportunities for unstruc-tured responses concerning experience with and attitudes tocomputers. Multiple item scales (Balnaves &amp; Caputi, 2001) wereutilised, specically Likert scales as they are designed to measureintensity of attitudes or feelings about a topic (Neuman, 2003).The literature review had demonstrated that there was no goldstandard instrument against which to measure concurrent validityor equivalence.</p><p>7.2. Phase 3: Findings</p><p>Item response was good with minimal missing data. However,data from the pilot testing indicated that there was some itemredundancy in the ICT skills sub-scale and eight items were poordiscriminators. The sub-scale was thus reduced from 16 to 11items with an internal consistency of a = .91. This was the resultof advice regarding the need to be as parsimonious as possible withitems measuring a construct (Carmines &amp; Zeller, 1979; Netemeyer,Bearden, &amp; Sharma, 2003). Furthermore, some items were nega-tively worded to reduce response bias.</p><p>The term ICT may be familiar to education technologists andresearchers but is not widely used outside higher education. Fol-lowing feedback from respondents in the pilot study concerningterminology, use of computers and use of the Internet or webwere used for the main study. Respondents also identied confu-sion concerning the use of home in some items and this wasamended to Where they lived. These changes were applied toall sub-scales of the instrument. Linguistic changes were alsomade to the computers in education sub-scale. Items were alsomoved between sub-scales following pilot work, thus; I likeusing a computer for l...</p></li></ul>

The term ICT may be familiar to education technologists and researchers but is not widely used outside higher education. Following feedback from respondents in the pilot study concerning terminology, "use of computers" and "use of the Internet or web" were used for the main study. Respondents also identified confusion concerning the use of "home" in some items and this was amended to "where they lived". These changes were applied to all sub-scales of the instrument. Linguistic changes were also made to the computers in education sub-scale. Items were also moved between sub-scales following pilot work, thus: "I like using a computer for l...
