
Nurse Education in Practice 11 (2011) 221–223

Contents lists available at ScienceDirect

Nurse Education in Practice

journal homepage: www.elsevier.com/nepr

Editorial

Editorial review: Small-scale evaluation studies: Why publish these in Nurse Education in Practice?

As Editor I am continually faced with decisions about whether a paper submitted to the journal has merit or value for the wider international nursing and midwifery community if it reports itself as an evaluation study of a small-scale development, in the UK or elsewhere, and whether it is of value in developing the evidence base of nursing and healthcare education. These decisions, and others, usually have to be made before providing feedback to authors, as a result of either preliminary reading and review by the editor or feedback from the reviewers about a paper's quality and its potential to 'make a difference' to others' practice as educators and/or to add to the body of knowledge on the topic. The reviewers consider this using a set of criteria with which to make a judgement (for all submissions, including specific criteria for reported research studies), which gives me, the editor, a degree of confidence that all papers are being assessed against the same baseline criteria. Of course, much like student feedback, each individual reviewer offers additional commentary to assist the author in developing their paper if revision is required, which to me is an invaluable contribution on their part to the scholarship of both the author and the wider scholarly community.

Other decisions are of course more complex and sometimes distressing for the editor, reviewer and author, especially if plagiarism is suspected. This can be a serious case of self-plagiarism, such as the same paper, or major content similar to another published work by the author, appearing in a different journal (that is, where no permission has been granted by either journal/publication; this is not to be confused with an article published from chapters of a major funded report in the public domain, which does have permission), or it can be a case where an author has quite clearly plagiarised someone else's work. This latter situation has in the past been picked up by the journal's excellent reviewers, who review papers in their specialist fields and of course know their own field of literature and their own work! Plagiarism has serious consequences, and as an Editor I am grateful that the majority of it is identified prior to publication. Publishers now have more sophisticated tools for identifying possible article plagiarism, in keeping with many universities, which use similar tools for student assignments.

On a more positive note, however, let us return to the first paragraph and focus on the first issue for decision making: the suitability of small-scale evaluation/research studies for the wider scholarly community. My reply would be: it depends. First of all, it would depend on whether the focus of the paper actually met the aims and scope of the journal; then of course would come 'how small a study?' and whether or not it stood out as being more than what would be classed as an expected evaluation, such as 'were the students happy with their module?'. For the remainder of this editorial I would like to focus on this issue of small-scale evaluation studies and their value, together with one of the issues that arises in reviewer comments: the reporting of ethical approval or of ethically conducted evaluation studies.

See: NETNEP 2012 Conference Information and Call for Abstracts at: http://www.netnep-conference.elsevier.com/

1471-5953/$ – see front matter © 2011 Published by Elsevier Ltd. doi:10.1016/j.nepr.2011.04.002

In preparing this editorial I came across guidance on the web, published by a university, on the role of its School of Medicine Ethics Committee. For me this is an excellent example of good practice, not only for its staff and students but also for its clarity about the difference between normal 'curriculum evaluation and development' and 'education research', and about when ethical approval is required. Most importantly, it makes clear that regardless of the type of evaluation, acting 'ethically' is an essential pre-requisite. It is important to note that, as at most universities, additional specific guidance will support this statement, and I would advise all authors to consult the appropriate advisory committees in their own organisations.

School of Medicine Ethical Committee Guidance

Curriculum evaluation and development vs education research

Evaluation is a core requirement of all university courses; all medical courses are required by the GMC to undertake course evaluation. Where a project is an evaluation of an aspect of the curriculum which is a routine part of quality assurance, no ethics committee approval is required to undertake the activity. However, all evaluation should be conducted ethically and within University & School guidelines. Evaluations are to be conducted only with the approval of the relevant Module leader(s) and Programme Director.

Educational research

Where a proposed project involves a study of any aspect of student learning or teaching which is not part of the normal quality assurance activity of the School, SOMEC approval is required. Examples of such projects include research into a new educational intervention not incorporated as part of the curriculum, any intervention in which it is proposed to randomise students, and observation or assessment of student learning which is not a normal part of QA, e.g. additional questionnaires or focus groups, or observation or videotaping of tutorials.


Where there is any doubt, the School of Medicine Ethics Committee can advise.

Educational research based on routine evaluation and course development

If data arising from routine evaluation and development activities are submitted for publication in journals or presentation at meetings, formal ethics committee approval is not required, with specific provisos that are outlined in SOMEC guidelines. If data arising from such work are to be published, they must be sent to the chair of SOMEC to confirm that the study meets the criteria for evaluation, not research, prior to submission for publication (Web citation: http://www.keele.ac.uk/health/schoolofmedicine/ethicscommittee/; accessed 15th April 2011).

In terms of publication of an article involving educational research or evaluation data, however, as an Editor I expect the article text to acknowledge that the evaluation proposal was reviewed and a decision made by an appropriate Ethics Committee/Board, together with the specifics of the ethical protocols adopted, or to include a statement that the study has formal Ethics Committee approval. The lack of this evidence is often highlighted by reviewers as an omission, and when this is communicated to the author in the reviewer feedback, the majority of authors have no difficulty in clarifying it in their revised papers.

If the evaluation studies involve NHS/health organisation staff, however, the reported studies would have had to obtain formal Ethics and Governance Committee approval, or at least guidance from the Chairperson. Permission to use data in a publication is often missing even in papers which explain the ethical aspects; this is another issue, and the whole theme of ethics and ethical accountability probably requires an article of its own.

Before considering the wider philosophical and methodological issues of whether or not such work can be classed as research, let us consider the context in which I am asking the question.

Nurse Education in Practice as a journal has, as part of its aims and scope, the following criteria:

Nursing is a discipline that is grounded in its practice origins - nurse educators utilise research-based evidence to promote good practice in education in all its fields. A strength of this journal is that it seeks to promote the development of a body of evidence to underpin the foundation of nurse education practice. Case studies and innovative developments that demonstrate how nursing and health care educators teach and facilitate learning, together with reflection and action that seeks to transform their professional practice, will be promoted.

Given these aims and scope, it is anticipated that many potential authors will see key words such as 'evidence-based nursing education', 'case studies and innovative developments' and 'developing a body of evidence', and will know that sharing their own practice in their area of educational work may require evidence not only from large-scale evaluation that considers change over a period of time, e.g. the evaluation of Fitness for Practice curricula for nursing and midwifery students in Scotland (Holland et al., 2010), but also from new and innovative developments which may focus on a much smaller level of application yet still have value to others, e.g. the evaluation of clinical teaching models for practice (Croxon and Maginnis, 2009), and could lead to much larger studies. Attree (2006) stresses the need for larger evaluation studies, and also for new methodologies for analysing data sets from a range of evaluation studies, in order to demonstrate the value of healthcare education research. She also makes a very valid point for me: these larger studies take time, and often no change happens by the time they have ended, yet in terms of deciding on resource allocation there is often a need for evidence 'in the here and now', or at least evidence made available within a short period of time. I totally agree with her observation that 'Cochrane'-type synthesis of evidence is required as a matter of priority in health and social care education.

So why are these 'small-scale' evaluation studies of educational innovations and developments so important in the 'bigger picture' of evaluation research and the evidence base of nursing education? Robson (1993), in his book Real World Research, has an excellent chapter on designing evaluations, where he sets out clear guidance and rationale for what is and is not evaluation research and, most importantly, stresses the 'politics' of it. That is, there is an expected outcome which may or may not be what is anticipated, but there is also, in education and in health and social care services, an accountability for finding out whether something is working or not, given often scarce resources (p. 171). He cites an excellent example from Freudenberg (1990: 295) of what is known and unknown about the fact that: 'it has been clear that AIDS education was our most powerful tool for preventing the spread of HIV infection. Unfortunately 10 years later, we know more about the biochemistry of HIV and T cell ratios at various stages of illness than we do about what makes AIDS education programmes effective and how to successfully implement such programs.'

Despite this being a reflection on a situation 20 years ago, how often do we still hear the question 'what impact does this programme of study have on patient care?' Lee (2011) reports on what she calls an 'impact evaluation' study on this kind of topic, in an attempt to consider the impact of Continuing Professional Development (CPD) programmes on staff and services after completion. Acknowledgement that this was a 'small-scale' study did not detract, however, from the fact that it raised questions, not only in terms of new ways of looking at a challenging area for many university and service providers but also for research commissioners, who could support longer-term impact evaluation studies that could make a difference to service delivery and development and to patient care. To be in a situation as described by Freudenberg is no longer an option, when services and education are both being affected by the 'political' resource and delivery agendas. Small-scale studies are often a 'trigger' for ideas and for wider-scale studies, and this journal is supportive of them as long as they are innovative and developmental (not just a normal module evaluation process), ethical, and clearly evidence-based, using international literature to support their reviews and making the rationales and contexts of the studies clear to this international community of educators and researchers alike. These studies are also happening in what Attree (2006) calls 'real time and in changing contexts over which full control will never be possible'.

For many of these studies, however, the evidence and findings are based on a rigorous methodology, despite their small-scale appearance, and evidence of that can be seen throughout the past 10 years of this journal. What I do not know, however, is the global impact of this body of evidence, and possibly this will be the challenge for 2012: to consider it as a way forward for the future of nursing and healthcare education. I do know that, in general, the overall evidence from authors in this journal is increasing its dissemination through citations in other international journals; although only one indicator, this does at least suggest that the evidence written about is of value to others, which for me as an Editor is positive, as any journal is a vehicle for the dissemination of scholarly endeavour which is meant to 'make a difference'.

I look forward to continuing to receive your evaluation study papers, as well as your Issues for Debate articles, editorials, theoretical and review papers, and reports of major research and evaluation studies. The most important thing, however, is to encourage you to publish and disseminate the excellent innovations that are taking place internationally.


References

Attree, M., 2006. Evaluating healthcare education: issues and methods. Nurse Education Today 26, 640–646.

Croxon, L., Maginnis, C., 2009. Evaluation of clinical teaching models for nursing practice. Nurse Education in Practice 9 (4), 236–243.

Freudenberg, N., 1990. Developing a new agenda for the evaluation of AIDS education. Health Education Research 5, 295–298. Cited in Robson, C., 1993. Real World Research. Blackwell, Oxford.

Holland, K., Roxburgh, M., Johnson, M., Topping, K., Watson, R., Lauder, W., Porter, M., 2010. Fitness for practice in nursing and midwifery education in Scotland, United Kingdom. Journal of Clinical Nursing 19 (3–4), 461–469.

Lee, N.J., 2011. An evaluation of CPD learning and impact upon positive practice change. Nurse Education Today 31 (4), 390–395.

Robson, C., 1993. Real World Research. Blackwell, Oxford.

Karen Holland
E-mail address: [email protected]