
Student number: -0207586

Dissertation

The training and development of Assistant Practitioners (AP): An action research

project to develop a tool to evaluate the impact of the AP role in practice and

inform service development within NHS and Non-NHS organisations.

“A dissertation presented in the University of Bolton in partial

fulfilment of the requirements for the degree of M.Ed. (Professional

Development)”.

By David Andrew Morris

July 2016

Total word count excluding acknowledgments, appendices and bibliography (21,898)


Table of Contents

Acknowledgements.......................................................................................................3

Abstract............................................................................................................................4

CHAPTER 1: Introduction to the Study........................................................................5

Aims and Objectives.....................................................................................................5

Aims:.............................................................................................................................5

Objectives:...................................................................................................................5

Rationale..........................................................................................................................5

Intended Methodology..................................................................................................6

Chapter Two: Literature Review.....................................................................................9

Background...................................................................................................................10

Reviewing the evidence.............................................................................................11

Evaluation of the AP role...........................................................................................13

Workforce planning versus evolution.....................................................................16

Registration and Regulation of the AP...................................................................18

The AP: Foundation Degrees and Work-based Learning...................................20

Work-based learning theory......................................................................................22

Impact Evaluation Process........................................................................................24

Evaluating training Programmes for the AP role.................................................27

Summary........................................................................................................................29

Chapter Three: Methodology........................................................................................30

Ethics..............................................................................................................................30

Why an action research approach?.........................................................................31

Identifying the problem..............................................................................................32

The search cycles........................................................................................................32

Phase One of the research process........................................................................32

Phase Two: Piloting the tool.......................................................................................34

Phase Three: Tool review and redesign.................................................................36

Future phases of the research. Making the Impact evaluation tool a resource for practice....................................................................................................................36

Summary........................................................................................................................36

Chapter 4: Findings and data analysis.......................................................................37

Phase one part one: Secondary research and Scoping the AP role and evaluation methods. (Objective 1)...........................................................................38


Phase one part two: Developing an initial market research questionnaire to establish stakeholders’ priorities (Objective 2)....................................................40

Analyses of the questionnaire results....................................................................42

Table 1 (Roles of Respondents)..............................................................................43

Table 2 (Types of Organisations).............................................................................43

Table 3 (Service Area)...............................................................................44

Table 4 (Financial considerations of developing the AP role)..............................45

Table 5 (impact on patient care)..............................................................................45

Table 6 (Training and education of the role)...........................................................45

Table 7 (Type of training /education programme)..................................................45

Table 8 (Impact on teams/service)..........................................................................45

Table 9 (Staffing of the roles)...................................................................................45

Table 10 (Useful resources on the AP role)...........................................................46

Table 11 (Compatibility with national priorities and workforce opportunities)....46

Phase Two: Developing and piloting the draft impact evaluation tool...............46

Analysis of the results of the draft impact evaluation tool and process.......46

Phase Three: Redesigning the impact evaluation tool and final version......49

Validity and Reliability................................................................................................50

Summary........................................................................................................................51

Chapter 5: Discussion and Analysis...........................................................................52

The AP role....................................................................................................................52

Sustainability and expansion of the role................................................................55

Education and training...............................................................................................55

Impact evaluation.........................................................................................................56

Summary........................................................................................................................57

Chapter 6: Conclusions and Recommendations.....................................................58

Recommendations.......................................................................................................60

Final thoughts...............................................................................................................60

Bibliography......................................................................................................................62

List of Appendices...........................................................................................................72


Acknowledgements

Many thanks to Alison Doyle, Interim Deputy Head of the Work-based Education Facilitators (WBEF) network, for her input as a core member of the action research group.

Carolyn Jackson, Interim Head of the WBEF network, for her input into future marketing of the impact evaluation tool.

Suzanne Pearson, WBEF, for piloting the impact evaluation tool and contributing to the redesign.

Chris Morris, Service Manager, for piloting the impact evaluation tool.

To Lynda Leighton, Paul Barber and Julia Stevenson, WBEF Network, for being part of a focus group to evaluate the newly drafted impact evaluation tool and offering their comments and suggestions for its design.

Finally, my partner Steve for his patience and support.


Abstract

This research study looks at how resources can be produced to assist managers in making decisions about developing the role of the Assistant Practitioner (AP). It

investigates evaluations of the AP role to date and considers how an impact

evaluation tool can be produced to inform practice. The tool utilises a mixed methods approach to create evidence that is both qualitative and quantitative in nature. The aims of the study are to provide managers with evidence-based resources that can inform their decision making when contemplating the

training and development of non-registered staff into the role of AP and develop a

tool that will evaluate the impact of the AP role within a service area. The study

examines the stages taken to develop a tool that can be applied to the AP role but is

also generic in nature and can be utilised in any area considering role development

within their teams.


CHAPTER 1: Introduction to the Study

Impact and Evaluation of the AP Role in the National Health Service (NHS) and Non-National Health Service Organisations

The training and development of Assistant Practitioners (AP): An action research project to

develop a tool to evaluate the impact of the AP role in practice and inform service

development within NHS and Non-NHS organisations.

Aims and Objectives

Aims:

1. To provide managers with evidence-based resources that can inform their

decision making when contemplating the training and development of non-

registered staff into the role of AP

2. To develop a tool that will evaluate the impact of the AP role within a service

area.

Objectives:

1. To scope out the current literature in relation to impact evaluations of new

roles within service areas

2. Design an initial market research tool to ascertain stakeholders’ priorities

when considering the development and introduction of new roles within their

service.

3. Utilise the findings of the initial survey to construct an impact evaluation tool

to gather both qualitative and quantitative evidence of the impact of the role.

4. Assist managers to make informed decisions with regards to the future

training and development of their non-registered staff within their service

area (see appendix 1).

Rationale

The topic was highlighted as an area of investigation in the operational plan of the

Work-based education facilitator (WBEF) network. The WBEF network is

commissioned by Health Education England -North West Office. Its primary function

is the promotion, development and support of the Trainee Assistant Practitioner

(TAP) and AP across the North West. Its wider remit is in relation to Bands 1-4¹ staff

within the NHS and non-NHS organisations. The Network is unique to the North

1 Bands 1-4 refer to the grades of staff within the NHS who characteristically are non-registered practitioners. The AP role usually sits at Band 4 within Agenda for Change and level 4 of the careers framework.


West; however, the introduction of the AP role is a national initiative which has been

implemented with varying success across the United Kingdom.

Smith and Brown (2012) consider the introduction of assistant practitioners into the

health and social care workforce stating, “The introduction in 2002 of the Assistant

Practitioner role in health and social care aimed to provide a new type of worker

who could provide direct health and social care under the direct supervision of a

Registered Professional” (p.6). NHS Employers (2015) describe the role as non-occupationally specific and discuss how APs work across a number of disciplines.

As Smith and Brown (2012) indicate, one of the primary functions of the AP role is

the provision of patient care and there is a vast array of literature evaluating the AP

role and the types of activities they are involved in. However, it could be argued that

the vast majority of the evidence is anecdotal and falls short in measuring the

impact of the role in terms of cost effectiveness, direct effect on patient throughput², the AP’s contribution to achieving local and national targets and its measure against the original workforce vision or generally accepted definition. Their impact in relation to

cost benefits, service efficiencies and quantifiable effects on the patient experience

is lacking. Spilsbury et al. (2009) indicated that the AP role had not been introduced in some Trusts because of the lack of evidence of its effectiveness. It was noted by

the WBEF network that it would be useful to develop a tool that not only gathered

qualitative information but also facts and figures in relation to the impact of the role,

particularly regarding cost and patient outcomes.

Intended Methodology

An action research approach was agreed as fitting to facilitate the progress and

conclusion of the project. This approach was identified as an appropriate method in

line with several requests from managers that information centring on the impact of

the role be made available as a resource. Cohen et al. (2013) advocate that action

research is a robust method of problem solving and instigating change. The

development of an evaluation tool which measured the impact of the AP role, was

identified as a priority for the network, thus providing appropriate resources to

individuals with workforce planning and role design as part of their remit. Although

the primary focus of the tool design would be to evaluate the role of the AP, the

network considered a generic model might be useful. Working collaboratively with

colleagues would refine research methods and shape the design of the final tool.

Learning from previous inquiry, evaluating the knowledge gained and incorporating

2 This relates to the flow of patients as well as numbers of patients seen in a given period of time.


its findings as the research progressed, would inform the end product. Winter and

Munn-Giddings (2002) acknowledge the role of action research as a process of

continuous inquiry and development. The action research group primarily consisted

of two permanent members and utilised input from others where appropriate.

An initial Political, Economic, Social and Technological (PEST) analysis was

conducted to focus on the potential design of an initial market research

questionnaire which would precede the eventual impact evaluation tool. PEST

analysis is often used in business to look at market potential (CIPD, 2015). Adapting

this model would assist in determining what areas to consider in the initial design of

the questionnaire to establish priority areas identified by stakeholders when

considering the development and introduction of new roles (see appendix 2).

A mixed method approach was utilised in the completion of the research study,

which was carried out in three phases. Phase one would include desktop research

assessing current evidence with regards to innovative roles within the NHS and non-NHS sector and the evaluation of said roles. A scoping exercise would be

conducted to establish how new roles had been evaluated with particular reference

to the AP role. This would also include identifying any potential tools that had

already been written and tested. Concurrently, a questionnaire would be devised to

establish the type of information considered most useful and so be explicit in the

design of the final tool. Participants would be identified through the WBEF data base

system and invitation to complete a questionnaire via survey monkey 3would be

disseminated to appropriate individuals. The issue of consent was addressed as

participants were asked to affirm consent to take part in the study and for inclusion

in this dissertation.

Phase one part two of the study analysed the results from the questionnaire. This

informed the design of the impact evaluation tool. The tool will be used to collate

both qualitative and quantitative evidence in service areas where the AP role is

established.

Phase two of the study took the opportunity to pilot the tool by selecting a service

area and using the tool to guide a member of the WBEF network through a semi-

structured interview with an identified stakeholder. To ensure objectivity and avoid

bias an impartial member of the team was asked to conduct the interview. Guidance

3 Survey Monkey is a web-based resource that allows a variety of questionnaire designs and different formats often not available from paper-based questionnaires.


notes were devised for both the interviewee and interviewer. A process map was

also devised to ensure consistency in approach with the final impact evaluation tool.

Phase three of the research study finalised the tool, making it ready for use by the network.

Future use of the impact evaluation results would produce information sheets and support briefing sessions, whilst offering managers who have introduced the role an opportunity to analyse its impact and efficacy. One aspect of an impact evaluation

assessment is to consider the future growth or reproduction of an intervention

(OECD, n.d.; Rogers, 2012).

Ethical consideration was given at all stages of the project, with ethical approval

being obtained from the University and employing organisation. This will be

considered more fully in chapter three.


Chapter Two: Literature Review

The literature review constituted a broad-based approach. The study would initially

involve a scoping exercise to establish whether there were examples of impact

evaluation tools in existence. In the first instance phrases such as ‘Designing an

impact evaluation tool for new roles’ produced poor or inappropriate results. Databases with a health focus such as Cumulative Index to Nursing and Allied Health

Literature (CINAHL), British Nursing Index (BNI) and Medline returned zero results

when this phrase was entered. However, entering less specific criteria such as,

‘evaluating new roles’, ‘impact of the Assistant Practitioner role’, ‘Assistant

Practitioners’, ‘developing support workers’ and ‘impact evaluation’ resulted in a

range of research studies that could help establish the current evidence base on how the introduction of new roles in health and social care had been evaluated and their impact measured.

The concept of impact evaluation as a process was also perused. Much of the

literature examined was generated by commerce and charitable organisations.

However, the principles of ‘change’ or ‘programme theory’ were transferable to

evaluate the impact of the AP role in general. The impact evaluation process was

adapted to directly influence the design of the impact evaluation tool, in gathering

both qualitative and quantitative information. This would provide a fuller

understanding of how robust impact evaluation can be facilitated and the effectiveness

of any new roles assessed. A historical review of the NHS was briefly considered to

appreciate the frequency of change in health and social care since its inauguration

on the 5th July 1948 (NHS Choices, 2015). The introduction of new roles in both the

NHS and Social Services, in response to changes such as demand, public expectation, demographics, staff shortages and technology, to name but a few influences, has resulted in a dynamic, ever-evolving workforce.

The findings of the Survey Monkey questionnaire elicited the priorities of stakeholders when considering the development of new roles. Question eight asked them to consider the training/education programme that might underpin new role

development. Overwhelmingly participants agreed that any programme of study or

training should incorporate work based competencies and work based learning.

Equally, question ten confirmed that respondents believed information on ‘growing

your own’ and ‘recognising talent within teams’ was a high priority, whilst question eleven demonstrated that contributors believed offering career progression and


alignment to national agendas were highly influential in their decision making

process. Mindful of this, investigation into research on work-based learning and initiatives such as ‘The Talent for Care’ (HEE, 2014) and ‘Widening Participation’ (HEE, 2014a) were also considered as appropriate components of the literature search.

Background

The NHS Plan 2002 acknowledged the pressures on the NHS and the need for a

major shift in health care provision, identifying the need for nurses and other staff to

extend their skill set. It advocated the need to utilise the skills of all grades of staff in

the NHS, offer opportunities for career development and education and training.

Many of the commitments stated in the NHS Plan are still prevalent. Many of the

issues raised still challenge the provision of health and social care today. The plan

commits to joined-up working between health and social care as opposed to working in silos (DOH, 2002; Stewart-Lord et al., 2011). Many of the themes

discussed in the NHS Plan are echoed in the government’s ‘Five Year Forward View’, which commits to joined-up, integrated services between health, social care and the tertiary sector⁴ (NHS England, 2014). The NHS Plan advocated the

breaking down of professional boundaries and optimising the talents of the

workforce. It can be argued that this philosophy underpinned the creation of the AP

role (DOH, 2002).

The AP role was created in 2002 as part of a project entitled ‘Delivering the

Workforce’. As in many areas of the country, the North West was experiencing

significant challenges in maintaining an effective health service workforce. Kilgannon

and Mullens (2008) proclaim that “Vacancies were increasing and pressure was mounting for a more flexible and productive workforce” (p.523). Miller et al. (2014)

reiterate that staff shortages, lack of registered professionals and growing emphasis

on skills mix remain potent drivers for introducing the role. As part of the ‘Delivering the Workforce’ project, the role was initially introduced in Greater Manchester, followed

by Cumbria and Lancashire and finally Cheshire and Merseyside.

Kilgannon and Mullens (2008) discuss the introduction of the AP role arguing that a

strategic approach to developing the AP in Greater Manchester ensured

consistency in standards. Prior to the initiative, training and development of the

support worker workforce was fragmented and inconsistent. The new role coincided

with the launch of foundation degrees whose characteristics of employer

4 Tertiary sector for the purposes of this study refers to the work of private, independent and voluntary sector organisations (PIVO).


involvement and a combination of academic and work-based learning appealed as the

preferred vehicle to develop the role that was envisaged. Miller (2013) noted there was a wide variance in the qualifications for APs; this was subsequently clarified by

Miller et al. (2015), noting that although there are still many qualification routes to

becoming an AP, currently the foundation degree is still the preferred option. They

do, however, urge caution and note that many Local Education and Training Boards

(LETB) have diverted funding away from foundation degree qualifications, with

many employers feeling that on completion staff still required ‘top up’ training from

the organisation to be fit for purpose.

From the outset the intention was for the newly designed role to undertake extended

skills and greater responsibility. Kilgannon and Mullens (2008) articulate: “It was

also important to ensure that if the new role was to undertake some of the

responsibilities of a registered practitioner, and the education package was credible”

(p.513). This fundamental concept is still a driving principle in the current

development of the role and plays a significant part in the evaluation data available.

Wakefield et al. (2010) support such comments adding, “The rationale for

introducing the role was help sustain effective, efficient health care services across

the NHS and free up registered nurses to take on new expanding roles” (p.17).

Miller et al. (2014) expand on this and indicate that: “Having simpler tasks

undertaken (under supervision) by Assistant Practitioners is one way in which the

throughput of patients can be increased” (p.12).

Reviewing the evidence

Since 2002 there have been numerous evaluations of the AP position across

different disciplines and from a national perspective. One aspect of the literature

review was to examine how the evaluations have been conducted, what methods

were used and what tools have been developed to capture the information. The primary focus of the literature review concerns the AP role; however, evaluations of other

innovative roles have also been considered with regards to transferable

characteristics that might assist in the production of a generic impact evaluation tool

with potential to be applied across numerous settings.

This study was particularly interested to consider how quantitative data was

collected as part of any of the studies, as this was an area that stakeholders

showed a particular interest in. Part of the project’s remit was developing a

resource to capture statistical evidence and qualitative narrative. This would offer


stakeholders, where the role has been developed, the opportunity to evaluate objectively

how influential the role has been in their service.

Benson and Smith (2007) from the University of Manchester carried out an

evaluation of the role of the AP across pilot sites in Greater Manchester, evaluating

health and social care. Benson and Smith’s work can be seen as the first substantial

evaluation of the AP role. The study reported some positive results from the project

highlighting that some qualified APs were utilising their newly acquired skills.

Comments on working across professional boundaries and greater patient

satisfaction were also highlighted.

There were instances where the role had not been embraced or had caused

confusion for both the registered and non-registered staff. Overall the report was

positive given the expectations at that time. However, Benson and Smith noted that

there was uncertainty surrounding the role from some quarters with some

professions unsure about the AP role itself and level of responsibility that might be

delegated. Miller et al. (2015) highlight in some instances this is still the case,

resulting in underutilisation of the role with professional attitudes sometimes

hindering progress. Confusion over what an AP can and cannot do, remains. They

argue “Where there is a lack of clarity concerning these roles it is unlikely that

organisations will achieve the full benefits of these roles” (p.25). They state that this

sometimes is a result of registered staff being reluctant to delegate more

straightforward tasks and procedures. Wakefield et al. (2009) add to the discourse

noting the increasing number of patients with complex needs and pressures on

registered staff concluding, “In response to the predicated crisis in professional

workforce resources and freeing up registered practitioners to complete more

complex caring work, a new type of health care worker was proposed: the AP”

(p.227). However, Miller et al. (2015), argue that given the current economic

climate, many registered staff have lacked the opportunity to develop themselves

and this has compounded the difficulties in devolving more responsibility to APs.

Lack of clarity, confusion, misunderstanding of the AP role spurred Skills for Health

in 2009 to formulate a definition of an AP. The definition was widely accepted and to

a large extent still referred to. Coupled with the development of the AP core

standards, the intention was to grow confidence in the AP role. Therefore, it was

envisaged that a set of core competencies that all APs must attain, regardless of

specific disciplines, would provide added reassurance of the minimum skills of all


post holders in England. Equally, this would offer some degree of transferability and

portability of the role (Skills for Health, 2009). Consequently, APs were defined as:

An Assistant Practitioner is a worker who competently delivers health and social care to and for people. They have a required level of knowledge and skill beyond that of the traditional healthcare assistant or support worker. The Assistant Practitioner would be able to deliver elements of health and social care and undertake clinical work in domains that have previously only been within the remit of registered professionals. The Assistant Practitioner may transcend professional boundaries. They are accountable to themselves, their employer, and more importantly, the people they serve (Skills for Health, 2009a p.1).

Evaluation of the AP role

Benson and Smith (2007) favoured a mixed method of data collection; the evidence was predominately qualitative in nature, with a reliance on interviews as the

preferred research method in the evaluation. Although the study provided a

comprehensive overview of the ‘Delivering the Workforce’ initiative and thematic

analysis of the results helps evaluate the positives and the negatives of the AP role,

statistical information such as cost benefits, patient throughput, service efficiencies

that can be directly attributed to the role was sparse.

Following Benson and Smith’s work there has been a plethora of evaluations at

regular intervals across the United Kingdom. Miller et al. (2015) commence their

very comprehensive evaluation of the AP role in the NHS, stating the following:

“There is a growing recognition of the value of these posts. Stakeholders can clearly

articulate the benefits of introducing the Assistant Practitioner role which includes

improvements in quality, productivity and efficiency” (p.3). This sentiment is echoed

in many other studies of the AP role (Wilson, 2008; Allen et al., 2012; Skills for Health, 2016a). However, the National Institute for Health and Care Excellence (NICE) safe staffing levels have focused on registered staff and patient ratios

(NICE, 2014). This has unnerved some managers regarding the utilisation of

support roles; conversely, managers have also indicated a preference for an AP in the clinical area as opposed to registered agency staff (Miller et al., 2015).

A national survey by Spilsbury et al. (2009) elicited an 85% response rate from

Directors of Nursing (DoN). They confirmed that 46% of NHS Trusts had already

introduced the role of the AP with a further 22% planning to introduce the role. In

32% of the organisations that responded it was noted that there was resistance to

the role, with DoN highlighting that there was a lack of effective evidence to support

the introduction of the role. This study examined how APs were developed and

13

Student number: -0207586

deployed, their impact on organisations, patient management and transfer of

activities from the registered nurse (RN) to the health care support worker(HCSW)

role. They noted the uneven distribution of APs nationally, with 84% of trusts based

in the North West having APs in their organisations. The debate on differences

between bands 3 and 4 caused confusion, this was compounded by the current

tradition of extended roles for Bands 2 and 3, which in some Trusts led the DoN

failing to see the value of the AP. Most of the Trusts confirmed that the APs had

been developed from the existing workforce but they could see little opportunity for

development within the role, other than accessing pre-registration courses. One

DoN expressed that they could not understand why anyone would train to

foundation degree level and not want to become a qualified nurse. However, it was

acknowledged that the AP role was becoming more prevalent, often in response to

service demand. The Royal College of Nursing (RCN) concluded that the AP role

was not a threat to that of the RN but rather a complimentary role. They advocated

the use of APs but aired caution that they must not be introduced merely as a cost

cutting exercise. (RCN 2009)

Wakefield et al. (2010) carried out an evaluation of the role by comparing and contrasting twenty-seven job descriptions across organisations, measuring them against the policy vision of APs working under supervision and reporting concerns to registered staff. They confirmed a blurring of roles, with APs often working outside of policy and taking on responsibilities of the registered professional. Mackey and Nancarrow (2005) noted this as a cause of resentment in their study of APs in occupational therapy in Australia.

Allen et al. (2012) discuss their experiences of the introduction of APs in critical care. Although the role was generally evaluated positively, with registered professionals acknowledging the skills that some of the APs had, ambiguity around the role was identified as a concern. Respondents commented on the excellent standards of care demonstrated by APs: “…having the Assistant Practitioner on duty helped the registered staff to provide care in a more efficient way” (p.17). However, there were conflicting opinions among staff as to the level of responsibility the APs should undertake. Some senior staff felt the APs took on too much responsibility; conversely, the APs themselves generally considered it to be about right. Senior staff felt that they could not develop the role any further and were mindful that there were limitations as to which patients the APs could take responsibility for. Some senior staff stated they felt the role had made no difference. Registration and the lack of professional accountability were highlighted as issues, and this is still a recurring theme in evaluating the effectiveness of the AP role. Miller et al. (2013) predicted the likely growth and expansion of the AP role. They call for more extensive research into the numbers of APs, along with greater clarification of the roles. They identify a lack of a national specification for the role and a wide variance in the level of qualifications that APs possess, in some instances holding no recognised qualification whatsoever.

Miller et al. (2014) embarked on a study of APs in Wales. They advocate that the role had brought benefits to the health service in Wales; simpler funding arrangements had enabled developments across acute and community services. They noted that the AP title was inconsistently used, with a variety of job roles that could be cross-referenced to the level four descriptor⁵ of the careers framework for the NHS (Skills for Health, 2010). They noted comments from a variety of respondents that the APs had brought both cost efficiencies and increased capacity within the sector; however, such claims lacked tangible evidence, with no indication of exact figures in relation to cost efficiencies. Once again it was highlighted that there was a lack of consistency in implementation of the roles and a variance in the tasks and procedures the APs and equivalent level four practitioners undertook. This has led to many organisations developing local guidance and implementation toolkits; however, many organisations did not have this in place.

Miller et al. (2015) unearthed a wide acceptance among those they interviewed that APs can have a very significant impact in their work areas. They acknowledge that currently most evidence of AP assimilation into the workforce has been in the acute setting; however, they recommend the role is extended and embraced more widely in community settings. The closer integration of health and social care, with less complicated funding arrangements, gives rise to new opportunities for APs. Changes in technologies have made more tasks and procedures straightforward, enabling increased opportunities for level four practitioners, and APs carrying out more straightforward tasks result in greater patient throughput. Some posts have been introduced in response to national targets and strategies; others, it appears, were a result of funding being available. Cost benefit analysis and assessment of impact on capacity are carried out in only a small minority of departments, hindering a robust evaluation of the true impact of the role.

5 Career Framework Level 4: People at level 4 require factual and theoretical knowledge in broad contexts within a field of work. Work is guided by standard operating procedures, protocols or systems of work, but the worker makes judgements, plans activities, contributes to service development and demonstrates self-development. They may have responsibility for supervision of some staff. Indicative or reference title: Assistant/Associate Practitioner (Skills for Health, 2010).

Workforce planning versus evolution

Miller et al. (2015) identify the AP role as the way forward in addressing the recruitment crisis currently faced by the NHS. They discuss the successful implementation of the AP role, concluding that there are more positive results if the role is introduced as part of workforce planning as opposed to evolution. In many circumstances the AP role is seen as a development opportunity for staff, as opposed to establishing a clear vision of its impact on services. This can lead to the delegation of duties being at the discretion of individuals rather than an agreed vision of the AP’s scope of practice⁶ (SoP), which can distort the true potential of the role in clinical areas. They draw on an example from tissue viability, reporting that APs were not allowed to perform any of the skills they had learnt under the leadership of one specific individual. Once the individual had left, they were allowed to practise in line with their skill set and competencies.

Perceived risk also played a significant role in restricting the APs’ SoP. In many organisations medicines administration has been a problematic area, with Trusts reluctant to delegate this procedure to non-registered staff. However, Miller et al. (2015) highlight a pilot study where APs were administering medication; reports indicated that APs were more cautious when administering medication, and in the course of the pilot there had been zero medication errors by them. They confirmed that on one occasion an AP identified a medication error made by a registrant. Therefore, perception of risk and delegation of duties based on personality, in contrast to objective decision making as part of a planned strategy, reinforces that the role is more successful when introduced as part of workforce planning.

Miller et al. (2015) highlight radiography as an area where the AP role is part of a national strategy around workforce planning and is considered to have a positive impact. Radiography as a profession has made great strides to shape and define the AP role within its discipline, and acknowledges this level of practitioner as part of its strategic view. The Society of Radiographers⁷ (SoR) has produced a SoP for APs in this clinical area. This has helped establish the role within radiographic services and offered guidance to the registered professional on delegation (SoR, 2012).

6 Scope of practice refers to the procedures and tasks health care workers can undertake.

7 Society of Radiographers refers to the professional body and union which supports most radiographers.

Stewart-Lord et al. (2011) discuss the introduction of the AP role in radiography and changes to the workforce programme in response to expectations of supply and demand within the profession. Evaluation of role design between 2001 and 2005 established the role of the radiographer taking on some of the responsibilities of the radiologist, thereby offering opportunities for APs to assimilate some of the radiographer’s roles into their SoP. The profession developed a four-tier approach embracing four levels of practitioner: level one assistant practitioners, level two practitioners, level three advanced practitioners and level four consultant radiographers. The extended use of the AP would provide the catalyst for the progression of the registered practitioner, offering development opportunities within the profession whilst also addressing increasing public demand. Appropriate opportunities were identified within the diagnostic and therapeutic fields. Their study analyses the effectiveness of the workforce strategy and utilisation of the role. They established that, despite professional guidance and documentation being available to guide the implementation and use of the AP, in some areas and in some circumstances this was still based on individuals’ personal experience and perceptions of the role. They argued that there was still a need for a more systematic approach to implementation of the AP role.

This study initially identified 226 radiography sites, of which 121 employed APs, 85 in diagnostic and 27 in therapeutic services. There was an overall response rate of 83 (74.1%). In diagnostic radiography the majority of respondents worked in general x-ray; theatre and nuclear medicine were the least likely areas of practice. The study looked at whether the SoP developed by the SoR was consistently applied. The majority of APs had been developed from the existing workforce. The results concluded that 9.7% in diagnostic services always worked outside their scope of practice, whilst 47.8% felt they never did. In therapeutic radiography 8.5% identified themselves as working outside of the recommendations, with 31.9% assessing this was never the case. Regarding confidence in their own competence, 79% of APs in diagnostic services and 75% in therapeutic services felt secure in their skills to carry out their duties. The vast majority reported that they were involved in decision making. The area where APs were least supervised in diagnostic radiography was plain film; in therapeutic environments this was patient support, information and quality assurance. Significantly, those working outside of their scope of practice felt they were performing duties that exceeded their AP role. Stewart-Lord et al. (2011) concluded “APs in radiography continue to work in areas outside their scope of practice and without direct supervision” (p.198). In much of the literature from other disciplines the opposite appears to be true, with individuals regularly identifying a lack of opportunity to perform all their skills (Miller et al., 2015).

The SoP developed by the SoR does show commitment from the profession in attempting to regulate APs and establish the role as part of a national strategy; however, the evidence would suggest it is not always consistently applied. Therefore, it might be argued that the case for workforce planning remains inconclusive (Stewart-Lord et al., 2011).

Registration and Regulation of the AP

Registration and lack of regulation are consistently highlighted as barriers to the optimum use of APs (Wakefield et al., 2009; Stewart-Lord et al., 2011; Allen et al., 2012; Allen and Wright, 2012; Miller et al., 2015). The Francis Report in 2013 recommended the registration of support staff, but this was rejected by the then coalition government; their response concluded: “Regulation is no substitute for a culture of compassion, safe delegation and effective supervision. Putting people on a centrally held register does not guarantee public protection” (DoH, 2013, p.72). Currently there are no plans by government to register staff below band 5⁸, despite 80% of HCSWs, including APs, believing they should be, according to Unison (cited in Miller et al., 2015). Many professionals highlighted the lack of registration as an obstacle to confident delegation of duties and ultimate accountability. Miller et al. (2015) discuss professional concerns over regulation and registration and stipulate: “…continuing concern regarding the non-registration of Assistant Practitioners is known to have impeded progress in some areas” (p.20). Conversely, Miller et al. equally argue that their lack of registration can be seen as a positive, meaning APs can work across professional boundaries and are not restricted to one profession. Vaughn et al. (2014) contribute, stating that whilst most articles they had considered in their literature review had called for registration of APs, there was little evidence that this would support patient safety.

Some professionals remarked that educational programmes which did not lead to registration lacked credibility. Educational content and training programmes were not routinely standardised; the level of study and content varied considerably, adding to a lack of confidence in the abilities of APs. From an employment perspective, transferability of the qualification was not as clear as it was for registered staff. As a result of Francis (2013) and Cavendish (2013), many employers have been apprehensive with regard to inadequate training of staff (Miller et al., 2015).

8 Band 5 staff are usually registered professionals within the health service, such as a staff nurse.

Skills for Health (2015) examined the broad range of roles that support staff were engaged in. They estimated that 2.1 million individuals were employed by the NHS, mostly in professional roles; however, approximately 40% of that figure are in support roles, with around 17% providing direct patient care. They identify staffing as the largest single expenditure within the health service. Transference of duties is nothing new in health; the move to all-graduate nursing has helped move this along, with more and more work being delegated to support staff. They identify that “…with the correct governance and clarity of roles and responsibilities; as well as recognition of competence, support workers and Assistant Practitioners can enhance quality and efficiency of care” (p.13). They consider the demographic profile of the country and project that by 2037 the United Kingdom will have a population of approximately 73,000,000, combined with longer life expectancy and an increase in people living with long-term conditions. This group, around 30% of the population, will account for 70% of the health service’s spending. They suggest that, given the small difference in wages between bands 3 and 5 (approximately £6,000), alternative thinking is often ignored locally; however, this differential on a national scale amounts to significant savings. “Making better use of support staff can also make a significant contribution to saving money and help improve patient care” (p.14).

Skills for Health advocated that, with good planning and support, APs can carry out many of the roles of their registered colleagues. They estimate that if 1% of work was transferred from registrants to APs and support workers this could result in a £100,000,000 saving across the NHS. They champion the band 4 role, suggesting that most of these staff members can work with minimal supervision. Development of support staff can have a positive impact on both economics and quality (Skills for Health, 2016).
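The scale of that claimed saving can be sanity-checked with some back-of-envelope arithmetic. The sketch below is illustrative only; the overall registrant pay bill, the share of work transferred and the fractional pay gap between bands are hypothetical assumptions chosen for the example, not figures published by Skills for Health:

```python
# Illustrative only: order-of-magnitude check on the savings claim.
# All input figures are hypothetical assumptions, not Skills for Health data.

def projected_saving(registrant_paybill, share_transferred, pay_gap_fraction):
    """Saving = cost of the transferred work multiplied by the fractional
    pay difference between registrant and support bands."""
    return registrant_paybill * share_transferred * pay_gap_fraction

# A notional £43bn registrant pay bill, 1% of work transferred, and support
# pay roughly 23% lower (about a £6,000 band 3/5 gap on a ~£26k salary)
# gives a saving of the order of £100m.
saving = projected_saving(43_000_000_000, 0.01, 0.23)
print(f"£{saving / 1_000_000:.0f}m")  # prints £99m
```

On these assumed inputs the arithmetic lands close to £100m, which is consistent in magnitude with the Skills for Health estimate, though the real figure clearly depends on the actual pay bill and transfer rate.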

A subsequent publication by Skills for Health investigates the possibilities of optimising the use of support workers and examines the need to look outside of traditional working cultures. They recognise the clear case for developing the support workforce and suggest that work is needed to look at future workforce requirements, with reviews of skill mix carried out at a deeper level. They advocate the need for support workers to have parity of esteem in recognition of their contribution to health care (Skills for Health, 2016a).

Powel et al. (2016) researched the impact of APs in GP practices, discovering that the introduction of the role had reduced appointment times for patients with the practice nurse from twenty minutes to ten.

The AP: Foundation Degrees and Work-based Learning

There is no single route or programme to develop the AP role, and the levels of qualification underpinning the role vary considerably. The title of AP is not protected, and as such staff can be defined as an AP holding any number of qualifications or none. Apprenticeships, diplomas, national vocational qualifications and foundation degrees are some of the development programmes leading to the title of AP. Miller et al. (2015) report that Skills for Health indicated that the role should be underpinned by a level five qualification as indicated in the qualifications and credit framework, sitting just below bachelor’s degrees (Accredited Qualifications, 2012). There is the suggestion that, as foundation degrees have in many instances become more generic, some APs are emerging unfit for practice. This in itself has led to a lack of confidence amongst some employers. Uncertainty around course content or qualification level has led to issues of transferability amongst employers. In juxtaposition, however, bespoke foundation degrees tailored specifically to service needs can be limiting when compared with a generic model. Some Higher Education Institutes (HEI) have discontinued their foundation degrees in health and social care completely, assessing them as economically unviable. To this extent organisations are considering the development of ‘professional diplomas’ as a viable, more practical alternative. Trailblazer apprenticeships⁹ are also being hailed as an appropriate method of developing APs. The literature indicates that there is great discussion surrounding development opportunities for the AP (Miller et al., 2015).

Seagraves et al. (1996) carried out a study of work-based learning in small companies. Their research highlighted a great many anomalies surrounding this style of learning and the structure of such programmes. They identify that work-based learning has improved access to learning and brought improvements in performance and economic success.

9 Trailblazer Apprenticeships are developed with employers working in a particular sector to create apprenticeship standards for particular roles.

Defining work-based learning, they conclude it is learning that improves an individual’s ability to do their job. They acknowledge that the application of the term varies widely and is utilised to describe a host of different learning situations. Therefore, they conceptualise the term under three distinct headings:

a) learning for work

b) learning at work

c) learning through work (p.6)

The study highlights that in many instances the success of work-based learning initiatives has rested on the tenacity and enthusiasm of individuals who champion the cause within individual organisations. Many organisations, however, appeared reluctant to change their working practices or reshuffle workloads to allow for successful progression and implementation. This was often compounded by inadequate or inappropriate mentorship. Perceptions amongst some organisations that the programmes offered a speedy way to qualify staff were in most cases incorrect. They identify these as major reasons for attrition amongst the learners.

Boud et al. (2001) describe work-based learning as an approach to education that involves bringing together HEIs and employers to develop learning opportunities in the workplace. They discuss the wide variation in design, in some instances indicating only minor differences from established programmes, whereas others they claim have “…developed new pedagogies of learning” (p.5). They identify six characteristics that they feel all work-based learning programmes share: partnership working; learners are usually employed; infrastructure needs to be present in the workplace and the organisational needs form part of the curriculum; learners’ needs are established and reflected in the curriculum; a substantial element of the learning should be in the workplace; and academic standards are maintained by the HEI.

Richards (2002) claimed there was a new interest in work-based learning, with HEIs considering how best to prepare students for the world of work. She noted that widening access had led to new relationships between students, employers and HEIs, along with a more balanced approach to the integration of the academic and vocational goals set within programmes. Smith and Brown (2012) consider the emergence of foundation degrees, identifying the qualification as work-based learning and applauding the flexible approach, characteristic of the programme, in meeting employer needs. The programme embraced both academic standards and work-based competencies.

Harvey (2009) conducted a comprehensive review of a wide variety of research and maintained “Lack of understanding of foundation degrees amongst employers is a major challenge for institutions attempting to develop partnerships with employers” (p.35). He maintains that employers would engage in programme design if they could clearly see the benefits to their business. Mentorship is acknowledged as a cornerstone of work-based learning; however, difficulties around consistent mentorship were a constant theme, with some learners reporting very poor standards leading to problematic assessment of work-based elements.

Wright et al. (2010) join the discourse, examining the situation in Scotland and identifying that relationships between HEIs and stakeholders had changed with the expansion of work-based learning. They too note that, with the progression of the widening participation agenda, a learner-centred approach, flexibility in programme delivery and adaptation to the demand for skills in the workplace, HEIs have extended their repertoire of work-based learning programmes. They acknowledge that work-based learning means different things to different people and that this results in confusion. Equally, their research established that whilst the relationship between HEI and employer was important, it also had its difficulties, with different concepts of what actually constitutes knowledge and learning. Accreditation of the courses was still HEI-led, with a bias towards the academic. Philips (2012) offers her thoughts on work-based learning. She purports that such programmes are attractive to employers as the learners are not excluded from the workplace. She agrees with previous claims that there is a lack of clarity surrounding definition, but concludes that the usual model reflects a tripartite relationship between the student, employer and HEI. This leads to learning that can be directly applied to practice and personalised to the individual learner.

Work-based learning theory

Raelin (2008) discusses the concept of work-based learning theory. His initial thoughts on traditional learning echo much of the available literature; he advocates “…unfortunately, we have become conditioned to separate the classroom model that separates theory from practice…” (p.1-2). As previously acknowledged, work-based learning is intended to bridge the theory/practice gap and views the workplace as a positive environment for learning, whereby practice uses theory in harmony and vice versa. He visualises three key elements as integral to the work-based learning process:

1. It views learning as acquired in the midst of action and dedicated to the task in hand.

2. It seeks knowledge creation and utilization as collective activities where learning becomes everyone’s job.

3. Its users demonstrate a learning-to-learn aptitude, which frees them to question underlying assumptions in practice. (p.2)

He discusses how work-based learning differs from conventional learning in that it is engaged with real-life experience. He advocates that the concept of metacognitivism is fundamental to the process, whereby it is insufficient to merely look at what we have learnt; rather, it must be viewed in a much wider context, ensuring that we fully understand the ramifications of the learning, thus assisting us to analyse our current knowledge base and rethink what we know. In doing so, it provides a framework to develop and synthesise new knowledge.

Raelin relays that for many individuals the concept of work-based learning has become synonymous with vocational study, which in turn is tantamount to saying that it is most suited to individuals who dislike classroom or academic study. He reiterates that this should not be the case: “…work-based learning is not antagonistic to theory; it respects and uses theory” (p.69). He recognises that all too often practice and theory are developed devoid of each other’s contribution, whereby theory is determined as the thinking and practice the action, with both parties holding somewhat derogatory ideas about the other. “Theory is often constructed as impractical or as ‘academic’ or ‘ivory towerish’. Meanwhile, practice is viewed by academics as banal and atheoretical” (p.64).

Raelin, although reflecting on Kolb’s experiential learning model (1984), concludes that work-based learning is much more: it is multi-layered, with practice well capable of producing theory. Raelin’s model of work-based learning initially incorporated two dimensions, theory and practice, and explicit and tacit knowledge. He suggests that theory offers a framework to challenge assumptions which, when combined with action, creates a model of learning. Practice is viewed as the process by which practitioners develop their skills and experience. Raelin highlights that positivism, whereby knowledge is produced under scientific paradigms, is seen as more credible due to its objectivity and unbiased nature. It was deduced, therefore, that theory be developed outside of the influences of practice. As a result, theory was developed in isolation and out of context, leaving learners to make sense of theory back in practice. Some schools of thought feel that this approach has produced a framework even further removed from practice. He speculates that teaching became disjointed from learning, with teaching imparting knowledge and learning being the storage, retrieval and recapitulation of the subject matter, leading to theory-based teaching with little regard to context. Once again the learner has to make sense of this on their own out in the field. He argues that we now know that our understanding changes; we construct our knowledge and it is influenced by many factors, and knowledge that is abstract is of limited use in the real world. He concludes “Theory makes sense only through practice, and practice makes sense only through reflection as enhanced by theory” (Raelin, 2008, p.67). Work-based learning relies on a blend of both.

Raelin (2008) consequently explores the roles of both explicit and tacit knowledge. Work-based learning is more than just the knowledge and procedures passed on from one individual to another. It also involves tacit knowledge, not typically taught but gained through experience, constituting deep-rooted understanding expressed through contextualised action that is often difficult to put into words. He describes this as the difference between ‘knowing how’ and ‘knowing that’ (p.67). Raelin argues that tacit knowledge can be transferred by observation and modelling of others. He reiterates that conventional theory-based learning can leave the practitioner ill-prepared in the workplace, unable to think independently and problem solve. Tacit knowledge is what aids us in difficult situations or in engaging with complex problem solving. This can be built on by the collective know-how of the environment as a whole, through the proximity of others and the sharing of experiences. Theory may well be developed as a living experience rather than something preordained. Work-based learning requires both explicit and tacit learning to have true impact (Raelin, 2008; Philips, 2012). Therefore, by utilising theory and practice, coupled with explicit and tacit learning, Raelin advocates that a conceptual model of work-based learning can be constructed. In addition, he also considers a third dimension, that of learner activity: each individual learns at their own pace and from the people around them (Raelin, 2008).

Impact Evaluation Process

The aim of the project engaged in for this study was to develop a tool that could measure the impact of new roles introduced into service. There have been many evaluations of the AP role, as previously stated.


Wilson (2008) project managed and developed the East Midlands AP toolkit. She considered how the role might be measured and how the role must be identified in the business plan, which should “…identify all the benefits expected from developing and implementing the AP role and how they will be measured” (p.14). She recommended that outcomes in the workplace should look at value, costs and savings. The toolkit considered the impact of the role from a number of perspectives:

1. The service: Have strategic targets been met? Had the patient experience improved? What effect had there been on key performance indicators?

2. Care team: Had it allowed practitioners to work differently? Had it affected capacity within the team?

NHS Wirral (2011) developed a number of fact sheets, including one on evaluating a project or service. They identify three types of evaluation:

1. Formative: carried out prior to the project commencing.

2. Process: begins at the start of the project and can be used to look at delivery and implementation of the project and whether it delivered to the original plan.

3. Impact/outcome: did it meet its aims and objectives?

They cover a number of important issues, considering the purpose of evaluation and its audience, posing the question “Is the main impetus one of demonstrating the benefits of the service to other potential users?” (NHS Wirral, 2011). A key aspect of the impact evaluation project the researcher was engaged in was to provide tangible information that both the participant and others could use.

NHS Employers (2012) discussed their rationale for ‘evaluating an AP project’ and how this helps establish the effectiveness of the intervention. They believe that a project’s impact and success must be measured in relation to the original objectives to assess its validity. They employed a number of methods in their process, including interviews, surveys, staged assessments and the use of feedback forms.

Stern et al. (2012) discuss the design of impact evaluation tools, stating, "Impact evaluation (IE) aims to demonstrate that development programmes lead to development results" (p.i). They consider three elements to be essential in IE design: the evaluation questions, appropriate design and methods, and programme attributes. Stern et al. advocate five different types of impact evaluation: experimental, situational, theory-based, case-based and participatory.


The Organisation for Economic Co-operation and Development (OECD) claims that impact evaluation is an assessment of how interventions have affected outcomes: "… the proper analysis of impact evaluation requires a counterfactual of what those outcomes would have been in the absence of the intervention" (OECD, n.d., p.1). A counterfactual is not necessarily a before-versus-after comparison, although this can be seen as a valid method of impact evaluation. Robust impact evaluation will highlight both successful and unsuccessful aspects, and where there is potential for redesign, by establishing which objectives have been met and what lessons can be learnt along the way. In turn this can influence decisions on whether future investment is worthwhile. The OECD discusses the importance of baseline assessments and how these will develop programme theory. They recommend a mixed-method approach, declaring, "Good evaluations are almost invariably mixed method evaluations" (OECD, n.d., p.5). They conclude that impact evaluation concerns specific interventions set in a specific context.

Rogers (2012) considers impact evaluation and echoes the OECD's perspective: "Impact evaluation investigates the changes brought about by an intervention" (p.2). She suggests that expected results are an important aspect of impact evaluation, whilst unexpected results should also be explored as part of the process. She discusses some common reasons why impact evaluations are conducted:

1. Decisions around continuing to fund the intervention.

2. Whether to continue or expand the intervention.

3. Whether to replicate the intervention in other areas.

4. Whether it can be successfully adapted to suit other areas.

5. Reassurance to stakeholders that it is a valid use of funds.

The rationale is readily transferable to the AP role and its impact on services. Rogers discusses the importance of establishing a theory with which to measure the impact. Programme theory, or change theory as she refers to it, develops a hypothesis of the expected outcomes that the intervention should achieve. She articulates, "It is often helpful to base an impact on a theory or model of how the intervention is understood to produce its intended outcomes" (2012, p.6). She continues by noting that credible evidence is needed, as is evidence of how well the programme has been implemented, in order to distinguish between implementation failures and theory failures. Productive impact evaluation helps make sense of the intervention: "…it does not just gather evidence that impacts have occurred but tries to understand the


intervention's role in producing them" (2012, p.9). Perrin (2012) discusses change theory, noting that it is also referred to as programme theory, results chain, programme logic model or attribution logic, together with the series of assumptions these involve. He examines the link between inputs, activities, intermediate outcomes and the intended impact. There needs to be a logically constructed counterfactual; that is, there is no other logical reason for the identified impact than the intervention itself. Quality impact evaluation, according to Rogers (2012), must have utility, be accurate, paying attention to both intended and unintended impacts, positive and negative, and have propriety, that is, be ethically sound and recognise any potentially harmful effects. Rogers highlights that impact evaluation can be influenced by the characteristics of participants and the environment. She claims impacts can take many years to fully emerge, and on some occasions results are needed before enough time has elapsed to gain the true picture (2012).

Bamberger (2012) assesses the benefits of mixed methods and impact evaluation

advocating this as the preferred model. He suggests that quantitative results give

breadth to the impact whilst qualitative inquiry adds depth to the evaluation. He

claims that no single method on its own can fully explain the impact of an

intervention in the real world and that a mixed method is a truer reflection. He notes

that whilst quantitative evaluation can provide information such as how many, how

much, significant differences, qualitative evaluation can provide evidence on how

the changes were experienced. He recommends that a multi-level, mixed method is

the most robust.

Evaluating training programmes for the AP role

Equally important in assessing the role of the AP in practice is evaluating the training programme itself. Work-based learning, and in particular the foundation degree, has both academic aspects to evaluate and those in the workplace.

Seminal work by Kirkpatrick, first published in 1959, defines his four-level model of evaluation. Now in its third edition, and written in collaboration with his son, it discusses the four levels:

1. Reaction of the student

2. Learning

3. Behaviour

4. Results

(Kirkpatrick and Kirkpatrick, 2010)


At the first level the programme is evaluated in relation to the learner’s experience

of the programme. Did they enjoy it? Was it at a suitable pace? Could they see

application of the training back in practice? The second level looks at the learning

that took place as part of programme. This examines whether knowledge or

capacity to learn has increased for participants. Did participants have more

understanding as a result of the programme than they did before? Did the students

learn what was intended to be taught and experience what was intended in the

programme? Level three concerns itself with the behaviours of the participants and

their ability to interpret what they have learnt back in the workplace. Have they been

able to apply their learning? Has it resulted in a change of behaviour and practice?

Are they confident to pass on skills to others? The fourth level looks at the results of

the training on the organisation itself. Has the programme delivered on the

expectations of the business? Are there measurable impacts within the

organisation? Does the performance of the participant live up to the expectations

within the business case? (Kirkpatrick and Kirkpatrick, 2012).

Kirkpatrick emphasised that training needs to reflect the demands of the market

place and that it is not enough for educators to concern themselves with only the first

two levels of his model. He discusses the need for training to be practical,

interesting, enjoyable and relevant to the job in hand. He advocates that all four

levels must be explored to truly evaluate how effective a training programme has

been, claiming much of the learning is embedded through work once the training

programme is over (Kirkpatrick and Kirkpatrick, 2010). Kirkpatrick and Kayser

Kirkpatrick (2009) reflected on the model, suggesting that in many instances the

model had been misinterpreted and viewed too simplistically, creating an inability to understand the inter-relationship of the four levels.

A fifth level has been suggested by Phillips (2003), looking at return on investment (ROI). He argues that in many circumstances ROI is intrinsically linked to accountability and justification for the time and money spent on training and development. Phillips argues that whilst executives largely agree that training is needed for organisations that are developing and expanding, and can result in greater productivity or customer satisfaction, there is a lack of robust methods to evaluate an accurate ROI for many training programmes. Phillips claims that attention to ROI has resulted in a paradigm shift within training, from an activity model to one of results (Phillips, 2003).


Kirkpatrick's model can, for the most part, be applied to the development of the AP role and training programme. For the vast majority of APs, the foundation degree

qualification has proved to be the programme of choice. Students are requested to

evaluate their experience of learning at an individual level through local evaluation

procedures and the national student survey. Level two is evidenced in many evaluations of the foundation degree programme by students who frequently comment on the improvements they have seen in their academic ability, learning capacity and confidence levels (Bungay et al., 2015). Level three of the evaluation model can be measured by the students' performance in practice.

Foundation degrees are characterised by their work based learning content;

students are expected to show competence in practice as well as academic rigour.

Level four, it might be argued, is somewhat compromised, with managers reporting that some APs lacked the desired skills in the workplace and that often the content of the foundation degree had been too generic. Investigation into the fifth level, suggested by Phillips, reflects the spirit of this research investigation. The development of an impact evaluation tool would consider return on investment as one of its primary domains.

Summary

This chapter has examined the literature from a broad base to support the different phases of the action research project. It has looked at the current evaluation of the AP role nationally and considered the methodologies deployed in those evaluations. It has assessed whether any tangible tool had been developed to support such evaluations. It has investigated the merits of workforce planning in implementing the role successfully. The education and training of APs has been interrogated, including the use of work-based learning as a model for programme delivery. The use of foundation degrees has been considered, and the move towards genericism versus specialism has been discussed. The theory of work-based learning has been explored, along with the identified characteristics of work-based learning programmes. As this research study involves the creation of an impact evaluation tool for the AP role, the process of impact evaluation has been analysed. Finally, Kirkpatrick's model for evaluating training programmes has been examined in relation to the underpinning process in developing the AP role.


Chapter Three: Methodology

This dissertation discusses an action research approach, taken to develop an

impact evaluation tool that would provide resources for stakeholders to make

informed decisions supporting role development. The research is carried out in

three phases, ultimately leading to the development of the tool itself. A mixed

methodology was considered as the most appropriate way of conducting the

research study. Triangulation of quantitative and qualitative methods was employed within the study, to reap the benefits of both approaches. Research is considered within two broad paradigms: quantitative methods, aligned to a positivist/post-positivist tradition, and qualitative methods, aligned to a naturalistic tradition (Bowling, 1997; Bell, 2006; Gray, 2009; Ross, 2012). Silverman (2010) identifies that quantitative methods have traditionally dominated, proclaiming that outside the social sciences their prevalence still exists. Ross (2012) identifies that

quantitative research is prevalent in health care and is “considered more scientific

and trustworthy” (p.43). This has led to greater influence in shaping policy and

interventions. Ross (2012) continues by examining the contribution of qualitative

investigation in the field of health care, considering the impact and feelings that

research may have on its participants. Ross considers the mixed method approach

and argues that this is now defined by many as the third paradigm. She suggests,

“There are very strong arguments for combining approaches in order to capitalize

on the strengths and produce a more holistic view of the phenomena being

investigated" (p.133). Cohen et al. (2013) argue that robust research relies on the ability of the researcher to use mixed methods where appropriate and not remain steadfast to one method or another. They advocate that mixed methodology offers a

new paradigm in the field of research. Johnson and Onwuegbuzie (2004), discuss

mixed methods and declare that it is a “paradigm whose time has come” (p.1).


Ethics

The need for ethical approval was considered throughout the process and approval was sought. Ethical principles such as voluntary participation, informed consent, risk of harm and confidentiality were maintained (Trochim, 2006). The Health Research Authority decision tool was used in the first instance, concluding that the study was not designated as research in line with their principles (Health Research Authority, n.d.). A proposal for the research was made via the University and the researcher's employing organisation; formal ethical approval was deemed unnecessary by both. The employing organisation considered the study to constitute a service review and as such it did not require ethical approval (see appendices 6, 7). The initial survey provided anonymous returns, and participants agreed to the use of their responses in this dissertation by providing an affirmative answer at the commencement of the study. Equally, on the pilot documentation the stakeholder is explicitly asked to sign to give permission for use of their information.

Why an action research approach?

Action research is considered a very flexible approach that can be adapted and applied to many situations and for many different purposes. It is a very powerful approach that requires both action and reflection to improve practice and decision making (Cohen et al., 2013). Bell (2006) proclaims that it can be used in any context where specific knowledge is required for specific problems. Bowling (2009) highlights action research as a means of developing knowledge whilst simultaneously changing it, and identifies two distinct features, improvement and involvement, a sentiment echoed by Gray (2009). Stringer (1996) proposes a simple three-stage model of action research: looking, thinking and acting. Ross (2012) comments on the cyclical nature of action research and reports on the discourse that surrounds it. She suggests a five-stage model: identifying the problem, fact finding, planning, action and evaluation. She argues that action research has gained popularity within health care as it is able to respond more readily in an environment of rapid change. Bowling (2009) concludes, "Action research is a popular technique for attempting to achieve improvements by auditing processes and critically analysing events" (p.367). Bowling continues by suggesting that action research often uses many different methods and may draw on evidence generated from both qualitative and quantitative approaches, considering a variety of data collection tools to inform the process. There are many different models of action research. Gray (2009) comments that although there may be different approaches they share three common characteristics:


1. Research subjects are themselves researchers or involved in a democratic

partnership with researchers.

2. Research is seen as an agent of change.

3. Data are generated from the direct experience of research participants.

(p.313)

It can be argued that this study aligned itself to many of the different models of action research. The cooperative inquiry model, which focuses on research with people as opposed to on people, underpins the approach of the project undertaken. Although cooperative inquiry is identified as a particular type of participatory research, it acknowledges its overlap with action research in general (Heron, 1996). This study is characterised by a small core group of two researchers, with input from a variety of individuals when and where appropriate.

Identifying the problem

Initially, a problem was highlighted by employers concerning the resources available to support them in developing their bands 1-4 staff, in particular the role of the AP. Stakeholders were increasingly requesting more statistical evidence in relation to the impact of the AP role to complement the qualitative data available. Although some evidence relating to cost effectiveness and direct effects on patient throughput and experience was available, it remained limited. The action research team believed that a possible solution to the problem was to create an information gathering tool that could evaluate the impact of new roles within organisations. The action research team consisted of two core members from the WBEF network, with a mission to produce a more holistic impact evaluation tool that would yield both qualitative evidence and statistical data. Contributions from a number of sources culminated in the final design of the impact evaluation tool.

The research cycles

The research was carried out in phases, utilising a variety of methods to produce the information gathering tool, otherwise referred to as an impact evaluation tool. With respect to this particular study the main focus was consideration of the AP role. The first aspect was to look at the problem at hand. The research team identified the project's objectives and considered how these would be addressed. A PEST analysis was carried out by the team to help identify the areas that might be important to managers when considering the investment in and development of the AP role. The aim was to create a tool that could be used by stakeholders to evaluate the impact that introducing the AP (or any new role) would have in practice, whilst


also considering the training and qualifications that might underpin role

development. The project was identified as having two objectives:

1. To develop a tool to determine the impact of a new role within a service

area.

2. To conduct research to determine the impact of a new role within a service area, analysing the results obtained from that research.

Phase One of the research process

The first phase concentrated on two aspects. Initially, desktop research was conducted, in line with the literature review, to investigate what evaluations had already been conducted, the methods used, and what evaluation tools, if any, had been developed for those evaluations and for measuring the consequent impacts. Secondly, and running concurrently, a market research questionnaire was distributed via SurveyMonkey. The questionnaire was designed in conjunction with the PEST analysis to establish the highest priorities for stakeholders concerning the development and introduction of AP roles within their organisation. This took a predominately quantitative approach to gathering information but did include some qualitative aspects, providing opportunity for stakeholder comment.

Desktop research, also referred to as secondary research, was conducted to scope out the current situation with regard to the AP role and also to ascertain what tools had been used to gather the evidence. Utilising the research findings of others in the field would help identify areas that were successful, and also problematic, with the AP role. Equally, it would inform the research project at an early stage of what methods and resources might already exist and whether these could be adapted to meet the needs of the project. Gray (2009) acknowledges that whilst some scepticism must be deployed when using secondary sources, arguing that some data cannot be effectively replicated, the method can be efficient in both time and cost. The research team considered that analysis of secondary data would in this instance help benchmark current knowledge, and that the findings would influence the direction of the design of the impact evaluation tool.

Concurrently, a questionnaire was designed to identify what managers and stakeholders felt would be the most important information to have, and what their greatest priorities were when considering the development of new roles within their service (see appendix 4). The use of a questionnaire was identified as an appropriate method of data collection at this point in the life of the research project.

The questionnaire was designed so that answers needed to be ranked in order of


importance to help establish which domains should be captured in the design of the

impact evaluation tool whilst disregarding areas of least interest. This would

produce quantitative data identifying numerically which statements on the

questionnaire were of highest importance. A final section offered free text for

respondents to add comments they felt were significant to the topic.

Bell (2006) offers a reminder that attention needs to be paid to the design of the

questionnaire and the questions asked. The design should match the objectives; the researcher must avoid ambiguity, provide a tidy questionnaire (which receives a better response) and ask "…what do I really need to know" (p.140). She

advocates “It requires discipline in the selection of question writing, in the design,

piloting, distribution and return of the questionnaire” (p.136). Ross (2012) adds that

to some extent a questionnaire needs to be targeted at participants who have some

knowledge of the subject matter, seeing no advantage in targeting those who have

no understanding of the topic. She suggests that unless participants are carefully selected, the validity and reliability of the study can be compromised, as respondents may well be

just guessing rather than providing a considered response. Bowling (2009) states the advantages of structured questionnaires to be that they can remain anonymous, are economical, and have the potential to access a relatively large

audience. Disadvantages may arise if participants are obliged to choose options

that do not reflect their true opinion or may not understand the instruction or

questions themselves. Equally response rates may prove disappointing. Ross

(2012) echoes the concerns raised by Bowling, whilst supporting the advantages

highlighted.

The researchers were aware of both arguments but, as this would not be the sole means of data collection, felt that for market research purposes this would be a valid

method. Participants were selected on the basis of their current interest in the AP

role and their positions within organisations to influence workforce development.

Gray (2009), offers a reminder that successful completion of questionnaires often

relies on the participants having a vested interest in completing them and cautions

that they should not be too lengthy. Piloting questionnaires is deemed good practice

(Bell, 2006; Ross 2012), therefore a small pilot was conducted prior to full

distribution to ascertain user friendliness, appropriate questioning and clarity of

accompanying instructions. SurveyMonkey was employed and the questionnaire distributed using contacts held on a database within the WBEF network, then resent four weeks later to encourage a greater response. Consent to use the


data was embedded within the questionnaire itself, and returns remained anonymous, maintaining confidentiality.

Phase Two: Piloting the tool

Data produced from the survey, combined with the secondary research, were reflected on by the action research team. The questionnaire identified the priorities managers had indicated as most pertinent in influencing their decisions around role development. This informed the design and content of a draft impact evaluation tool. An appraisal of the secondary research highlighted that the majority of evidence surrounding AP evaluation was indeed qualitative in nature, with repeated use of questionnaires and semi-structured interviews as the dominant data collection tools. Research into impact evaluation processes and work-based learning theory was also considered in the construction of the design of the impact evaluation tool (see appendix 8).

The tool offered the opportunity for stakeholders to consider the impact from both a qualitative and a quantitative perspective and to produce data with both anecdotal and statistical qualities. Its design aimed to focus on the stakeholder's experience of the role within the service area and to consider whether the training and development programme underpinning the role had resulted in individuals fit for practice. Equally, it offered the opportunity to revisit the original vision for the role and assess the actual role against the original concept. This is seen as a fundamental principle of impact evaluation. Any deficits could be identified and an action plan established. A meeting was held with the WBEF member who would conduct the follow-up interview, to go through the tool itself and also the guidance notes. A discussion also took place around the process map provided, to help the identified WBEF member choose the area they felt would be suitable for conducting the study (see appendices 9, 10).

The tool was then piloted with a service manager and guidance notes disseminated (see appendix 11). Prior to the semi-structured interview stage, the manager was given the opportunity to consider the evidence requested, allowing time to consider how they might best gather that evidence and prepare for the follow-up interview two weeks later. The follow-up semi-structured interview, conducted face to face by a member of the WBEF network, would then interrogate and consolidate the information provided. Bowling (2009) highlights the advantages of face-to-face contact and the ability to probe, clarify ambiguities, check out any inconsistencies and gain a greater depth of knowledge. She continues by suggesting that with a skilled interviewer the use of open-ended questions can result in richer texts,


especially useful in the pilot stage of any study. However, she also acknowledges

that they can be time consuming and expensive.

Bell (2009) cautions on the need to consider bias when interviewing; therefore a WBEF member was identified to engage with the pilot phase who had not been involved in the design of the impact evaluation tool, thereby minimising the opportunity for bias when conducting the follow-up interview. Although the impact evaluation tool would guide the interview, there was opportunity for free discussion in the different domains of the documentation and for open discussion around additional comments and experiences. Ross (2012) comments that semi-structured interviews are often seen as the most democratic method of interviewing, as they offer the prospect for both interviewer and interviewee to have some degree of free exchange during the process. Issues of informed consent were also considered; Bell (2006) maintains that during interviewing this is extremely important. Therefore, participants were asked to consent by signing that they were happy for the information to be shared and with whom, equally offering the opportunity for participants to give consent to share one aspect but decline consent for its use in another context.

Phase Three: Tool review and redesign

After completion of the pilot phase, the data were analysed to see whether they had produced the quality of information anticipated. The WBEF member who carried out the interview was contacted by the researcher to ascertain the user-friendliness of the tool, highlight areas they felt were difficult to complete, and identify what changes they would recommend in the final design of the tool. This will be discussed more fully in chapter four.

Future phases of the research: making the impact evaluation tool a resource for practice

Due to the logistical restrictions of timescale, the final production of the impact evaluation tool is the point of conclusion for the purposes of this study. Recommendations for its further use will be discussed in chapter six, alongside recommendations for further investigation.

Summary

This chapter has discussed the methodology of the research study. It has examined the rationale for why an action research model was chosen as an appropriate approach for guiding the research process. The use of mixed methods has been discussed in relation to data collection, and a justification offered for the variety of data collection methods employed at each of the three phases of the study. It has explained how each phase of the research process has been used to develop an impact evaluation tool to be used in practice and so contribute to the resources available to stakeholders when considering the development of AP roles within their service. It has also discussed the importance of piloting both the initial market research questionnaire and the draft impact evaluation tool that was developed.

Chapter 4: Findings and data analysis

The findings of the research are discussed with regard to the three phases of the study. It is appropriate to refer back to the overall aims and objectives of the project to rationalise the approach and findings.

Aims: -

1. To provide managers with evidence based resources that can inform their

decision making when contemplating the training and development of non-

registered staff into the role of AP.

2. To develop a tool that will evaluate the impact of the AP role within a service

area.

Objectives: -

1. To scope out the current literature in relation to impact evaluations of new

roles within service areas.

2. Design an initial market research questionnaire to ascertain stakeholders'

priorities when considering the development and introduction of new roles

within their service.

3. Utilise the findings of the initial survey to construct an impact evaluation tool

to gather both qualitative and quantitative evidence of the impact of the role.

4. Assist managers to make informed decisions with regards to the future

training and development of their non-registered staff within their service

area.


The initial stage was threefold: -

Firstly, to scope out the current evidence and conduct market research to elicit the priorities of managers in relation to the design and content of the impact evaluation tool itself.

Secondly, to conduct research analysing the current evidence and formulating a picture of the main themes surrounding the AP role, and equally to establish what methods or tools, if any, had been developed for collecting the data.

Finally, to test the initial hypothesis of the action research team that the majority of available evidence was qualitative in nature and lacked the quantifiable data which stakeholders were now demanding. This lack of evidence justified the need to develop an impact evaluation tool that would enable the gathering of qualitative and quantitative data regarding the AP role, to illustrate the extent of patient, service and economic outcomes.

Phase one part one: Secondary research and Scoping the AP role and evaluation methods. (Objective 1)Gray, (2009) discusses the role of secondary research of both qualitative and

quantitative evidence. He notes that in quantitative terms this might be the reference

to official statistics or documents where as in qualitative terms this would concern

itself with research done by others and often analysed by others. He advocates that

the purpose of secondary research in both methods is “…to perform additional, in-

depth analysis of a sub-set of the original data; or to apply a new perspective or

conceptual focus to the data” (p.497). He acknowledges that secondary research does come under criticism, with critics arguing that the context of the original research may be compromised. Duffy (2009) contributes by discussing the analysis of documentary evidence, acknowledging it as a valid contribution when evaluated alongside other

forms of evidence. Clifford (1997) discusses how written text can be subject to

content analysis and is a useful method to gain perspective of the current evidence

base. The results of secondary research combined with the results of the

questionnaire would be incorporated into the content and design of the impact

evaluation tool. Duffy (2009) identifies two approaches to scrutinising documents: a

‘source-orientated approach’ and a ‘problem-orientated approach’ (p.123). The latter

is deemed appropriate in this circumstance. The credentials of the documents also need to be considered, including the authors and content. The sources for this

particular analysis were subject to peer review and published in reputable journals,

or carried out in connection to professionally established organisations. In relation


to the stated purpose of this phase of the research project, which was fundamentally a scoping exercise, it is deemed an appropriate method.

A content analysis approach was taken allowing the researcher to determine

themes that would influence the focus of the study and impact evaluation tool. In the

first instance the approach of role evaluations was considered. An appraisal by the

researcher of the vast majority of the research projects analysed in an extended

literature review, provided a qualitative profile of the AP role. Ross, (2012) confirms

that although not considered as scientific as quantitative methods, qualitative

investigation should not be “less vigorous” (p.114) in its approach. However, when

considering the evidence, a mixed method approach was often identified in the

methodology. Questionnaires featured heavily in the research and more often than

not this was followed up with semi-structured interviews. In some circumstances

there had been some investigation into statistical analysis most regularly around the

numbers of APs and organisations in which they were deployed. In most instances

there was no evidence of direct impact of the role on patient throughputs or the

economic benefits the role had brought about. Equally, there was no evidence of

any specific tool developed for capturing the impact of the role in practice.

The validity and reliability of the literature analysed from this section was considered

in relation to where it was published and by whom. The research papers were from

reputable journals and professional bodies. The credentials of the authors provided reassurance as to the rigour of the investigation. In-depth interrogation of the literature

enabled recurrent themes to be established which would also be reflected in the

finalised impact evaluation tool. In the second instance the national distribution of

APs was interrogated. It emerged that in most instances evaluation of the role had

been conducted in NHS organisations and within the acute setting. However, there

was acknowledgment that the role should be extended to community settings and

the non NHS. This was taken into account when designing the impact evaluation

tool, ensuring that its design could be adapted to a number of different settings.

Equally, the evidence supported the view that the AP role had great potential and

was likely to become more widespread across NHS and non NHS organisations.

The evidence from several investigations in the literature review suggested:

1. Increased numbers of Assistant Practitioners in the future.

2. The extension of the AP role across different clinical settings.

3. The expansion of the AP role in response to new initiatives.


4. Future investment in the role in addressing increasing demographic

challenges in service provision.

These observations reinforced the concept that developing an impact evaluation tool, offering the opportunity to fully evaluate the impact of the AP role, was a worthwhile venture.

Barriers to the role were also highlighted in several of the studies as areas for

concern. A thematic approach identified the following concerns:

1. Confusion surrounding the role: - lack of clarity in what the AP could and

could not do and what tasks could be delegated appropriately to the post

holders.

2. Lack of opportunity for the skills of the AP to be fully utilised: - instances

whereby qualified APs were not allowed to carry out the skills and

competencies they were trained to do.

3. Inconsistencies in the level of qualification and course content: - There was no single recognised qualification for the AP role. The title is not protected and, as such, there is wide variance in the level of training received. Although the foundation degree was recognised as by far the most popular route to qualification and AP status, there was no standardisation of content across these programmes.

4. Lack of registration and regulation: - This was envisaged as an obstacle to registered professionals confidently delegating tasks, and gave rise to concerns with regard to the accountability of the AP for their acts and omissions.

5. APs deemed not fit for practice: - There was evidence from some managers that the AP could not carry out the tasks the manager required on completion of the programme, leading in some instances to managers having little confidence in the role.

6. Limitations of the role and level of responsibility: - Managers reported that delegation to an AP was limited, worrying that it might exceed their remit and scope of practice.

It was important that these concerns were incorporated into the design of the impact evaluation tool, to identify whether they had been addressed in other services and, if so, how. When constructing the impact evaluation tool, sections were therefore included on the type of service and area of practice. An opportunity to consider


the tasks and responsibilities of the role, including the scope of practice, competencies achieved and what training programme or qualification had been used to underpin the role development, was included in the body of the impact evaluation

tool. Equally, how well the role measured up to the original vision, along with

anticipated benefits, were incorporated into the text of the tool.

Phase one part two: Developing an initial market research questionnaire to establish stakeholders’ priorities (Objective 2)

In conjunction with the secondary research of the literature, an initial market

research questionnaire was also constructed and launched. Ross (2012) confirms

that surveys are a popular method of collecting data and the structured

questionnaire a preferred instrument to investigate opinion. It was agreed that

SurveyMonkey would be deployed to distribute the questionnaire. SurveyMonkey ensures that the questionnaire looks professional, which Bowling (2009) highlights as being important. It also permits participants to engage anonymously, protecting individuals’ identities and assuring confidentiality, which is

considered the basis of good practice (Bowling 2009). The results are automatically

calculated for the researcher ensuring that data analysis is more straightforward.

Choosing the right sample is considered fundamental to the success of a

questionnaire. A non-randomised sampling strategy described as purposive

sampling was employed as it was deemed necessary for those selected to have

similar characteristics, knowledge of the subject and have an insight into the issues

of role development (Bowling 2009; Gray, 2009; Ross, 2012). The sample group

was therefore homogeneous, in one context, in that all members had an interest in the AP role, yet heterogeneous in that within the sample group there would be stakeholders from a number of different service areas, both clinical and non-clinical, from different professional backgrounds, holding different posts and from NHS and non-NHS organisations.

Determining the right questions to ask was an important factor in the validity of the questionnaire: it had to ascertain the information of greatest importance to stakeholders, so that the results in each section with the highest value could be incorporated into the impact evaluation tool. This was also factored into the design and format of the questionnaire. Stakeholders were presented with a range of statements that they were asked to rank from most important to least important when considering the implementation of new roles. This could be likened to a semantic


differential scale, with numerical options representing participants’ opinions around priority areas (Ross, 2012).
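As an illustration only, the aggregation of such ranked responses might be sketched as follows. The wording of the first two statements is drawn from the Q5 results reported later in this chapter; the individual rankings and the third statement are invented purely for demonstration, as the raw response data is not reproduced here.

```python
# Illustrative sketch of aggregating ranked questionnaire responses.
# The rankings below are invented; only the first two statement wordings
# come from the Q5 results in this study.
from collections import defaultdict

# Each response maps a statement to its rank: 1 = most important.
responses = [
    {"The impact on patients/service users": 1,
     "Cost effectiveness (value for money)": 2,
     "Training costs": 3},
    {"The impact on patients/service users": 2,
     "Cost effectiveness (value for money)": 1,
     "Training costs": 3},
    {"The impact on patients/service users": 1,
     "Training costs": 2,
     "Cost effectiveness (value for money)": 3},
]

totals = defaultdict(int)
for response in responses:
    for statement, rank in response.items():
        totals[statement] += rank

# A lower total rank means a higher overall priority.
top_two = sorted(totals, key=totals.get)[:2]
print(top_two)
```

Ranked in this way, the two statements with the lowest rank totals would be the ones carried forward into the impact evaluation tool.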

The questionnaire was piloted with three people prior to full distribution, to ascertain whether it was too complicated, whether it targeted the right information and how long it took to complete. Ross (2012) indicates that this enhances the validity of the tool.

Feedback confirmed that the questionnaire was user friendly, asked the right questions and took approximately ten minutes to complete. These three

returns were removed from the final results. For efficiency and economy, it was agreed that the contacts already held by the WBEF network would be an appropriate sample and would become the population for this survey. Population refers to any grouping that has been chosen for the purposes of the research (Ross, 2012). This gave the potential of accessing 292 participants.

The campaign function of Fulcrm was utilised, as this allows one email to be sent to multiple recipients at the same time.

Analysis of the questionnaire results

The questionnaire was initially distributed to 292 individuals on 11th December

2015 (n=292), with a covering email, which is considered important (Bowling, 2009) (see appendix 3). Twenty-eight emails were subject to delivery failure from the original distribution due to inaccurate details being recorded in the system, a 9.59% reduction in the original population of the study. This resulted in 264 participants successfully receiving the email and questionnaire link (n=264).

The response rate was low in the first instance: by 22nd December 2015, 33 individuals had responded, representing a 12.5% participation rate. There had also been communication from a small number of individuals who had not been able to open the link or had reported that the questionnaire had not worked. The survey was resent on 22nd December with a second accompanying email (see appendix 5). A further 22 individuals responded. As responses were anonymous, it was impossible to know if any of the individuals experiencing problems had now completed the questionnaire. In total, 55 individuals had responded, representing 20.8% engagement (when n=264).
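The response-rate arithmetic above can be checked with a short sketch; the figures are taken from the text, while the helper function itself is purely illustrative.

```python
# Sketch of the response-rate arithmetic reported in the text.

def percentage(part: int, whole: int) -> float:
    """Return part/whole as a percentage, rounded to two decimal places."""
    return round(part / whole * 100, 2)

sent = 292                # questionnaires emailed on 11 December 2015
bounced = 28              # delivery failures from inaccurate contact details
reached = sent - bounced  # participants who received the link
responded = 33 + 22       # first wave plus responses after the re-send

print(reached)                         # 264
print(percentage(bounced, sent))       # 9.59
print(percentage(responded, reached))  # 20.83
```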

Question one requested consent from participants to use the data produced in relation to this dissertation; 100% of participants agreed to this. Throughout the survey, results are calculated to two decimal places.

Note: Fulcrm is a data management system utilised by the WBEF network. It contains contact data as part of its functions.


Question two asked the person to identify their role. This would enable the

researcher to gain an understanding of the breadth and diversity of the sample

group. The researcher was able to code the respondents into categories, thereby identifying what positions were held by those participating. There were ten categories identified. The greatest response rate was from first-line managers, and other managerial roles also featured heavily. This was thought to be an appropriate group, as they were often the individuals with decision-making responsibilities in relation to new staff. Significantly, ten of the participants were categorised as ‘Specialist Practitioners’; these tended to be lead nurses or specialist practitioners. The researcher’s knowledge of the AP role acknowledged that there

were significant numbers of APs in specialist services (see table 1).

Your Role (55 respondents: n=55)

Role                       Number   % Rate
Director of Services            2    3.64%
Educational leads               6   10.91%
Managers                       19   34.55%
Matrons                         3    5.45%
Practice Managers               3    5.45%
Service managers                8   14.55%
Specialist Practitioners       10   18.18%
Practitioners                   3    5.45%
Other                           1    1.82%

(Table 1 Roles of Respondents)
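The percentage column in Table 1 is derivable from the raw counts. As a minimal, purely illustrative sketch (the category labels and counts come from the table above; the code itself is not from the dissertation):

```python
# Recomputing Table 1's percentage column from the raw counts (n = 55).
roles = {
    "Director of Services": 2,
    "Educational leads": 6,
    "Managers": 19,
    "Matrons": 3,
    "Practice Managers": 3,
    "Service managers": 8,
    "Specialist Practitioners": 10,
    "Practitioners": 3,
    "Other": 1,
}

n = sum(roles.values())  # 55 respondents in total
for role, count in roles.items():
    # Two decimal places, matching the convention stated for the survey.
    print(f"{role:<26} {count:>3} {count / n * 100:6.2f}%")
```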

Question three related to the types of organisations employing the participants. This

would assist the researcher in quantifying different types of organisations to

establish a broad range of responses. It was acknowledged that the impending

impact evaluation tool would capture the interests of both NHS and non NHS

organisations. The researcher coded this section into five distinct categories. The majority of respondents were from NHS Trusts; however, a quarter of respondents were from organisations outside of NHS Trusts. This enabled consideration to be given to the needs of individuals across services, acknowledging the trend for AP roles to be developed outside of the NHS (see table 2).

Types of Organisation (55 respondents: n=55)

Organisation Type   Number   % Rate
NHS Trust               41   74.55%
GP Practices             3    5.45%
PIVO                     7   12.73%
Social Services          3    5.45%
Other                    1    1.82%

(Table 2 Types of Organisations)

Question four related to the service areas in which the participants practised. This would allow the researcher to examine whether there was representation from the different bases, identifying whether practitioners in different circumstances, who may have different priorities, were represented in the survey. Once again this would offer intelligence with regard to developing the AP role across a diversity of services. The researcher coded this section into eight distinct categories. The data indicated that there was participation across many different settings. The development of the AP role in the community is envisaged as a growth area for the new role (see table 3).

Service Area (55 participants: n=55)

Area              Number   % Rate
Acute                 15   27.27%
Community             24   43.64%
Primary Care           4    7.27%
Integrated Care        5    9.09%
Education              1    1.82%
Secure Services        1    1.82%
Other                  5    9.09%

(Table 3 Service Area)

The results of questions two to four enabled the questionnaire to be placed into

context. It had established the designation held by the participants and so their

positions within organisations and as such, the influence they would have in

developing staff within their organisation. The types of organisations and environments these individuals worked in helped in understanding the needs of those organisations where APs might be developed. Input was therefore provided from a diversity of roles, employment cultures and circumstances of service delivery.


Questions five to twelve would establish the priority individuals had given to a

variety of statements which would be embodied into the structure and text of the

impact evaluation tool. The statements receiving the highest responses would be

represented in the tool itself, enabling the researcher to concentrate on priority areas and eliminate areas that participants had indicated were less influential factors. Participants were asked to rank the statements for each question in order of importance, depending on the number of options available. The highest priorities are summarised in the tables below, with ‘n’ defined in relation to the number of participants completing each question. The top two statements in each instance are identified (for full results see appendix 4).

Q5 When considering the financial benefits of the development of a new role, I want information on: - (45 participants: n=55, participation rate 81.82%)

Highest identified priority statements out of 5 options:

The impact on patients/service users.

Cost effectiveness (value for money).

Table 4 (Financial considerations of developing the AP role)

Q6 When considering the impact the new role might have on patient care, I want information on: - (38 participants: n=55, participation rate 69.09%)

Highest identified priority statements out of 6 options:

Patient/service user satisfaction.

Number of patient/service user interventions.

Table 5 (Impact on patient care)

Q7 When considering the training/education of the staff in the new role, I would want information on: - (38 participants: n=55, participation rate 69.09%)

Highest identified priority statements out of 4 options:

What skills could be acquired/developed by the training/education provider.

If there is opportunity to develop the staff after their initial training/education programme is complete.

Table 6 (Training and education of the role)

Q8 When considering the training/education programme itself, I feel it would be important that: - (36 participants: n=55, participation rate 65.45%)

Highest identified priority statements out of 5 options:

It had work-based competencies included in the programme.

It was work based.

Table 7 (Type of training/education programme)

Q9 When considering the impact of the role on the functioning of the team/service, I would want information on: - (36 participants: n=55, participation rate 65.45%)

Highest identified priority statements out of 5 options:

The potential to maximise skills mix.

Where the role could support service development and overall performance.

Table 8 (Impact on teams/service)

Q10 When considering staffing the new role, I would want information on: - (36 participants: n=55, participation rate 65.45%)

Highest identified priority statements out of 4 options:

Growing your own? (Recognising talent in your own staff and developing them).

Examples of how similar roles had been successfully integrated in other organisations.

Table 9 (Staffing of the roles)

Q11 When considering developing the new role, I would find it useful to have: - (35 participants: n=55, participation rate 63.64%)

Highest identified priority statements out of 6 options:

Competencies and competency frameworks used in other organisations for similar roles.

Examples of job descriptions and person specifications from other organisations.

Table 10 (Useful resources on the AP role)

Q12 When considering developing the new role, I would want information on how it: - (35 participants: n=55, participation rate 63.64%)

Highest identified priority statements out of 5 options:

Fits in with national and local health and social care priorities.

Offers career opportunities for my staff.

Table 11 (Compatibility with national priorities and workforce opportunities)

Questions thirteen to fifteen considered wider human resource issues in relation to

roles within the organisation (see appendix 4 p.16-18). Question sixteen thanked

the participants for their time and offered free text to add comments they felt were

important (see appendix 4, p.19). Five participants made comments, representing 9.09% of the original 55 respondents.

It was noted that a number of participants had ‘skipped’ questions as they had progressed through the survey; in total, 35 respondents fully completed the entire questionnaire. This gave a 13.26% response rate when n=264. Ross (2012)

highlights that response rates from questionnaires can be low. However, in this


instance the results would be triangulated and used in connection with the

secondary research conducted. Triangulation is deemed an effective way of

demonstrating validity in research studies (Polit and Beck, 2006; Gray, 2009; Ross,

2012). Analysing the responses to the questionnaire alongside the themes identified by the secondary research informed the content and design of the draft impact evaluation tool, so that it reflected the types of information that stakeholders would want.

Phase Two: Developing and piloting the draft impact evaluation tool

The action research team considered the results of the research carried out in

phase one and incorporated the findings into the design and content of the draft

impact evaluation tool. This was piloted with a service manager who had introduced

APs into end of life services. An individual from the WBEF network was identified to carry out the process with the service manager. Guidance notes were distributed to both parties.

Analysis of the results of the draft impact evaluation tool and process

Initial feedback indicated that the impact evaluation tool and the process itself had

the potential to be time consuming. Feedback from the interviewer indicated that the

tool was very useful in the semi-structured interview as it gave focus to the meeting.

The interviewer also noted that the wording of some of the questions could be seen

as ambiguous and needed rethinking. It was also highlighted that in section three of

the tool, examples would be useful to guide the participant in completing this

section. The interviewer also commented that the manager had filled out parts of the

questionnaire prior to the semi-structured interview as was envisaged in the original

process map and participant’s guidance. However, this resulted in a reluctance to

change any responses previously noted. The interviewer did highlight that she felt

the follow up interview was a vital part of the process as she had been able to use

this opportunity to clarify certain points and help refocus some of the content. She

concluded that the tool had the potential to work well and was comprehensive in its

approach; however, the results may have been more useful if the interviewee had

held a more vested interest in the outcome of the process.

The results from the impact tool and process were inconclusive. The actual tool is

not included in this dissertation to ensure confidentiality of the participant but the

findings are discussed. In the first instance, the individual originally highlighted as the most appropriate to engage in the process and gather the evidence was unavailable, and so a second individual agreed to complete the process. The tool

had gathered valuable information in relation to the profile of the service and its


intentions. It was noted that in the first instance the manager had discussed the

service as a whole and this would prove too large a team to evaluate. During the

semi-structured interview, the WBEF was able to narrow this down to the end of life

section of the service. Gray (2009) highlights that semi-structured interviews allow

the interviewer to probe more deeply and gain greater clarity. The evidence

provided was predominantly qualitative in nature, with the service manager offering

comment on the impact of the role and outcomes for service users. She indicated

that the service had improved patient choice, reduced hospital admissions and

brought about closer working with the district nursing teams. The service manager referred to the skills mix of the team and the extended skill set of the staff.

Section three of the impact evaluation tool was specifically designed to elicit quantitative data from the participant. It asks the individual to consider the evidence

available to support claims of cost efficiencies and service improvements. In this

section the service manager indicated all activities of the support worker role as

opposed to those specific to the end of life service. This might have been as a result

of misinterpretation of the section. The section did not produce usable quantitative

data as expected. Equally the manager had used generic data compiled from the

internet as opposed to utilising the data available within the service itself. Therefore,

it was difficult to evaluate effectively the direct impact of the role within that service

in relation to cost efficiencies and patient throughput. The service manager did indicate that there were data-gathering tools available to quantify impacts, such as spreadsheets and district nursing timelines, but did not investigate this data for the completion of the impact evaluation tool. They also indicated that, with increased skills mix amongst the staff, handover times were reduced and the number of visits reduced, but did not indicate by how much. The WBEF who carried out the interview discussed these areas; however, the service manager felt that to interrogate the

gathered a great deal of valuable data around the service itself but had not

produced the quantifiable evidence that could calculate savings or efficiencies

specific to that service or be attributed to the introduction of the role itself. This was

considered in the redesign of the impact evaluation tool.

The rationale of piloting the impact evaluation tool and the process was to learn

lessons, gain feedback and make adjustments to the final version of the tool.

Piloting is deemed a vital process when researching as it gives opportunity to

examine all aspects of the project (Gray 2009). The researcher conducted an open


interview with the WBEF who had piloted the tool within the service, and the results of the completed tool were discussed. Each section was examined for relevance and

appropriateness. The WBEF indicated that the service manager was reluctant to

revisit many of the answers. It was also noted that the individual originally identified to engage in the process, who had greater involvement in the service, was likely to be unavailable for a considerable time, which would have delayed the piloting of the tool. Having a vested interest in the process is important

especially if the process does have the potential to be time consuming (Ross 2012).

Discussion also took place around the appropriateness of the service for piloting the

impact tool, concluding that retrospectively it might not have been the best choice.

This indicated that the effective use of the tool would be enhanced by a request

from services themselves to identify their need and provide motivation in fully

evaluating the introduction of new roles, thus completing the impact evaluation tool

fully. The researcher also discussed whether the use of examples within the

document would be useful, especially in relation to section three. It was agreed this

would be a great benefit. Discussion took place as to whether the tool was a useful

guide for conducting the semi-structured interview. The WBEF confirmed that it gave

focus to the interview, allowed them to interrogate answers and seek clarification.

She felt that this was an essential part of the process and had the potential to

produce quality, holistic data surrounding the impact of the role.

Phase Three: Redesigning the impact evaluation tool and final version

Initial changes were made in line with this first set of feedback and taken back to the

action research group for further reflection and readjustment. It was noted in the

action research group that, once the tool was completed, it was difficult to distinguish who had made which comments and which text was the original pre-set text and which the participant’s responses. Taking these considerations into account, the use of italics to identify the pre-set text of the tool was introduced into the design.

Although the guidance notes issued to stakeholders prior to completing the tool request them to consider how they would produce the evidence for section three, it was felt that this should be reiterated in the body of the tool itself. The researcher also discussed the usefulness of an example of how cost savings could be calculated, and such an example was added to the agreed template. Additional notes on completion were also added.

Amendments were made to the tool in line with the feedback and discussion. The

redesigned impact evaluation tool was then presented to a small focus group for an


objective analysis of the final template. The membership consisted of three

members of the WBEF network who have an understanding of the topic under

consideration. Bell (2009) feels that having knowledge in the area can have an

advantage when considering the usefulness of a focus group. The group was asked

to read through the impact evaluation tool and to answer questions on its format, the relevance of the questions, the topics covered, any sections they felt were misleading or confusing, and whether it would produce both qualitative and quantitative data that would be useful. They were asked if the examples were helpful and whether the

way in which cost savings were calculated was accessible and straightforward to

use.

The response was very positive, with the group making only small recommendations on the wording and layout of the tool. It was identified that the notes contained within

the template were useful but should appear at the top of the tool as opposed to the

end. This final adjustment was made to the impact evaluation tool. The design of the

tool was updated in line with the findings of the focus group and a final process map devised (see appendices 12 and 13).

Input from the management team suggested that comprehensive completion of the

tool would be more likely if the individual engaged in the process had a high desire

to evaluate the role. It was therefore suggested that a flyer be produced so that stakeholders interested in evaluating the impact of a new role had some indication of the time commitment and expectations on their part, and of the support that the WBEF network as a whole could offer in the process (see appendix 14).

Validity and Reliability

Gray (2009) discusses validity, identifying that “…a research instrument must measure

what it was intended to measure” (p.55). This sentiment is echoed by multiple

authors (Bowling, 2009; Ross, 2012). Gray identifies seven different types of validity: internal, external, criterion, construct, content, predictive and statistical.

Internal validity concerns correlation and cause and effect: whether the findings of the study are a reflection of the overall objective. External validity refers to the

extent to which the findings can be generalised to a larger population. External

validity can often be determined by what particular characteristics the population

group of the study have in relation to other groups of individuals (Bell,2006; Cohen,

2007; Gray, 2009; Ross, 2012). Reliability refers to the dependability or repeatability

of the research method. Ross (2012) identifies three levels of reliability: stability, measuring how often a tool consistently produces the same results; equivalence, whereby data produced by different researchers is compared; and internal consistency, whereby different subdivisions of the tool point to the same conclusion.

The mixed-method approach can be argued to be both valid and reliable. The secondary research conducted considered evidence from different studies, across different disciplines and at different intervals. The themes identified from this research were recurrent throughout and characterised by their investigation into the evaluation of the

AP role. The market research questionnaire was piloted with three individuals who

had experience of AP roles prior to its launch; they concluded that the research questions were appropriate to the study, offering assurances of reliability and internal validity. The sample who completed the questionnaire shared similar characteristics with the wider target population, supporting the external validity of the method. With reference to objectives one and two of the project, the methods

did offer an opportunity to scope out the current situation and enable the design of a

market research questionnaire, which justify the methods in this circumstance as

valid. The findings were reflected on by the action research team in the construction

of the draft impact evaluation tool.

The draft impact evaluation tool was then piloted and did produce qualitative and

some quantitative evidence. The semi-structured interview did assist in clarifying the

information obtained. However, the motivation of the participant did compromise

some of the findings. Although the WBEF who conducted the interview probed the

service manager to justify their claims, the service manager was reluctant to revisit

sections and investigate the evidence available within their specific service. The

WBEF was clear about the information that should have been produced but did feel

that there might be some ambiguity.

Feedback from the WBEF along with the consideration of the action research team

did lead to a content review, redesign and reformatting of the finalised tool.

Input from the focus group on the finished product indicated that the final version of

the impact evaluation tool, if used and filled out correctly, would produce the holistic

results that were anticipated. The final development of a flyer to ensure that the

participants were motivated to complete the impact evaluation tool and had a vested

interest in engaging with the process, was considered an appropriate method of

future selection of service areas and participants. It could be argued that the impact

evaluation tool is valid in that the final version would collate the anticipated


information if completed by a highly motivated participant. The final version is

complete and ready for use but has not yet been completed in other service areas at

this stage of the research project.

Summary

This chapter has examined the results and findings of the research project and the

phases which were carried out to date. It has considered how both secondary

research and a market research questionnaire were utilised to develop a draft

impact evaluation tool. The results of this first phase have been analysed and

incorporated into the draft design of the tool. Phase two of the project has been

discussed. Piloting the tool and process with a service manager from end of life

services and the experiences of the WBEF who conducted a follow up semi-

structured interview, focusing on the impact evaluation tool have been evaluated.

Feedback from the WBEF, along with recommendations from a focus group and the thoughts of the action research group itself, has been discussed, together with how it influenced the finalised tool, now ready for use. Phase three considered how

the finalised tool has now been produced, supported by an information flyer to

provide a resource for managers to both evaluate the impact of a new role in

practice and for managers who were considering the development of roles within

their service.

Chapter 5: Discussion and Analysis

The researcher had a particular interest in the AP role. From a personal perspective

the researcher had been involved with the development of APs since 2002 and

employed in one of the first fourteen pilot sites engaged with “Delivering the

Workforce”. The researcher had first-hand experience of the emerging role, he had

been a participant in the Benson and Smith research project in 2007. Over fourteen

years the researcher had directly supported TAPs, been involved as an operational

team manager and lectured on the foundation degree. Equally, the researcher on

many occasions presented locally, regionally (North West) and nationally,

establishing a sound knowledge and understanding of the type of information

stakeholders were requesting. The original concept of the research study was: ‘The

training and development of APs: An action research project to develop a tool to

evaluate the impact of the role in practice and inform service development within

NHS and Non-NHS organisations.' The hypothesis formulated by the action research team, drawing on the personal experience of the author, was that although there was a plethora of qualitative evidence evaluating the AP role, there was a lack of quantitative evidence available. Equally, a tool to collate evidence and

measure impact was not apparent. The project aims in summary were to enhance

the evidence base with more comprehensive data and therefore assist stakeholders in making informed decisions about the development of their non-registered staff. As a

member of the WBEF network, the author has a prime focus on the development

and promotion of APs, particularly in the North West. Discussion and analysis will

be conducted under the following headings.

The AP role

The literature supports the view that the AP role has gathered momentum since its conceptualisation in 2002. Benson and Smith (2007) had evaluated the role and

concluded mixed results. They comment on the impact of the role; however, they report from a predominantly qualitative paradigm. They convey confusion around the role,

lack of trust, reluctance by registered staff to delegate and uncertainty of the

potential the role might have. Allen et al. (2012) had noted that ambiguity of the AP

had influenced the effective integration of the role in critical care. Miller et al. (2015)

subsequently echoed many of the concerns that Benson and Smith had raised.

Lack of clarity and vision were still dominating concerns and lack of consistency in

the deployment of APs nationally was still evident. The literature in almost every case

had positive examples of how the AP role could be utilised but equally had the

commonality that there were still barriers to optimising their use. Stewart-Lord et al. (2011) examined the role specifically in radiography, where the AP role had been

introduced as part of a national strategy, however they reported that despite a SoP

being established for the role written by the SoR, there were still discrepancies in

what the APs were doing. Confusion is a recurrent trend (Benson and Smith, 2007; Spilsbury et al., 2009; Stewart-Lord et al., 2011; Allen, 2012; Miller, 2013; Miller et al.

2015).

The results from the market research questionnaire confirmed that participants

viewed economic impact and effects on patients to be of great importance when

introducing new roles into their service. These priorities needed to be addressed in

the impact evaluation tool. Indeed, at stakeholder briefings this type of information

was commonly requested. Wilson (2008) supports the notion that this needs to be

taken into account. She developed a comprehensive tool kit focusing on the

implementation of the AP role in the East Midlands. This offers guidance on what stakeholders can do. She suggests that managers need to "…identify all


benefits expected from developing and implementing the AP role and how these will

be measured” (p.14). She continues by emphasising the importance of

"Assessments of outcomes in the workplace" (p.15), reiterating this sentiment by identifying that evaluation of the impact of the AP role must consider APs' effectiveness in service. Skills for Health (2015) identify staff costs as the largest expenditure in the NHS; with value for money high on the agenda, they conclude: "Making better use of

support workers can also make a significant contribution to saving money and

helping improve patient care” (p.14). The impact evaluation tool developed offers

one document that enables stakeholders to consider this and engage in cost benefit

analysis of the role.

Stakeholders identified that the contribution to skills mix and effects the role would

have on meeting national targets, were factors in their decisions around developing

the AP role. Miller et al. (2015) supports this view by recognising the great

contribution the AP role has to service delivery. They acknowledge that APs are

generally a stable workforce. Equally stakeholders also reported how engagement

with other organisations who had developed APs would be extremely helpful. Miller

et al. (2015) clearly identify that: “Several employers commented that they were

keen to learn from the experience of others” (p. 98), which echoes these findings.

Participants in the questionnaire highlighted that the sharing of job descriptions and

competency frameworks would be beneficial in developing new roles. The impact

evaluation tool developed in conjunction with the findings of the market research

questionnaire, offers the opportunity for intelligence sharing amongst stakeholders.

Results of the market research questionnaire indicated that 40% of participants always looked at skills analysis when recruiting to vacancies, and although 77.43% indicated that workforce planning did play a part when recruiting future staff, this still meant that over a fifth did not. The researcher also acknowledges that workforce

planning can mean different things to different people. A definition of what is

considered workforce planning may have been helpful in establishing how many

had considered this as part of a formal process. Miller et al. (2015) also consider the

use of workforce planning and highlight in their findings that this was lacking in

many cases although seen as important to the successful implementation of the AP

role.

The action research group did take the view that gathering market research

intelligence was a vital part of this process. Justifiably, the areas that people


developing the roles deemed important would need to be reflected in the finalised

impact evaluation tool. On reflection there are limitations to the use of the

questionnaire and the sample chosen, with a final response rate of 13.25%. As

participants had to prioritise their responses, there was the potential for some aspects to be seen as all having importance; some of the scores were indeed close. The action research team were, however, very keen to encourage stakeholders to think about their priorities, as they realised that there was potential for the document to become too unwieldy. Also, the population of the research

questionnaire were from the North West and were familiar with the AP role being

developed through commissioned foundation degrees, therefore funding may not

have been highlighted as a major issue, as funding of the programme already

existed for their particular trainees. Miller et al. (2015) highlight that funding is an

issue for managers they had interviewed and highlight the availability of funding to

be influential in the decision to develop the role. This was not evident in the findings

of this particular study. The majority of respondents to the questionnaire did come

from NHS trusts which is the trend nationally (Miller et al., 2015), however there is a

recognition that the AP role can be utilised in many different settings, and whilst PIVOs and non-NHS organisations are represented, their contribution could have been

investigated further.

Sustainability and expansion of the role

The market research questionnaire elicited responses from different roles and

across different service areas. The AP role can be adapted to meet the needs of a

variety of disciplines. The Royal College of Nursing (2010) reported the growth of

APs across the country and acknowledged the experience of the North West, who

had developed the greatest number with sustained interest. Miller (2013) concluded

that there would be a likely rise in the number of APs nationally. Miller et al. (2014)

identified that the AP role was being deployed in a number of settings with growing

interest. Miller et al. (2015) project the increased demand for the AP role concluding

the role can continue to be embedded in the future. Skills for Health (2016; 2016a) support the development and utilisation of the support worker role, including APs; they advocate that band 4 roles can work with minimal supervision and that

continued development of the role will assist in meeting demographic change and

service demand. The evidence strongly indicates that the AP role has great

potential for the future. The impact evaluation tool will support the continued

development of the role by producing more holistic evidence to inform managers in

developing their bands 1-4.


It is recognised by the author that one of the limitations of the tool is that it can be

time consuming and has the potential to become a lengthy document. The action

research group concluded that there is a need for such a tool and that, to be robust, it would have to be a process that takes time and consideration on behalf of the

stakeholder and the WBEF conducting the follow up interview. Equally, the pilot

highlighted that motivation of the stakeholder to generate meaningful data from that

service area, is vital to ensure that the impact of the role can be evaluated

effectively. It is acknowledged that the area highlighted for piloting may not have been the most appropriate, and so may not have tested the tool most thoroughly. As the tool is

disseminated more widely and used more extensively, the action research group will

continue to review its use and gain more depth of understanding of its effectiveness.

It is anticipated that future developments will produce a shortened version of the tool

that can be utilised solely to examine cost effectiveness and patient impact.

Education and training

Participants in the market research questionnaire confirmed that their preferred model of training was work-based learning, indicating that work-based competencies were key to this. The involvement of employers is fundamental to

work-based learning as Raelin (2008) discusses. Philips (2012) also discussed the

relationship between learner, employer and HEI in her discussion on work-based

learning. Participants confirmed they were less concerned about whether the programme led to a nationally recognised qualification or at what academic level it would be delivered. This suggests that fitness for practice and meaningful competencies

were of far greater importance.

When piloting the impact evaluation tool, the stakeholder indicated that their staff

had followed a foundation degree programme but that there had been a need for

additional competencies to be achieved, which had resulted in their APs not being fit

for practice. Miller et al. (2015) also commented that they too had found managers

indicating that their APs were not fit for practice and that in some areas the

qualification had become too generic, resulting in some managers losing confidence

in the programme. The impact evaluation tool asks managers to consider deficits of

the role and invites them to action plan to address these potential problematic

areas.

Employer feedback on the training programme was deemed important to the action

research group. Therefore, the impact evaluation tool developed offers opportunity

for stakeholders to comment on the training programme they had utilised in


developing the AP role. Equally it invites stakeholders to examine ROI both from an

economic perspective and patient experience. Considering the work of Kirkpatrick and Kirkpatrick (2012), evaluation of training programmes must capture the opinions and observations of employers, with Philips (2003) also arguing that ROI is critical in establishing the true worth of development programmes. This information should be shared with programme providers to influence the curriculum content and design of their courses.

Impact evaluation

The impact evaluation tool is envisaged to provide a resource for stakeholders in

their decision making process as was the aim of the study undertaken. It was

deemed important that the tool offered a mixed method approach to impact

evaluation of the role in services. This approach is supported by Bamberger (2012)

who advocates that a mixed methods approach to impact evaluation produces the

most reliable results. The literature surrounding impact evaluation identifies it as a

process. Robust impact evaluation relies on establishing a counterfactual and

measuring the end product against the original vision. This will help identify programme or

change theory and develop a hypothesis to measure against (OECD, nd; Rogers,

2012). Findings from piloting the impact evaluation tool helped identify areas that

needed adjustment and whether the questions were appropriate to the intended

use. Rogers (2012) highlights that asking the right questions is imperative to good

impact evaluation. The action research group considered feedback from different

sources when constructing the finalised tool.

The impact evaluation tool reflects these positions in both content and design.

However, it is limited in that its deployment is to examine a before and after

comparison. The researcher is aware that a comprehensive study would provide a

more robust evaluation, however rationalises that logistically this would be a difficult

and time consuming process for all participants. Feedback from the service

manager who engaged in the pilot study indicated that she felt it could be very time consuming and could therefore dissuade managers from engaging in the process. The

researcher acknowledges that the finalised tool does have limitations in this respect.

There is scope to look at conducting a more extensive impact evaluation with

organisations willing to participate over a period of time, which would produce a

more comprehensive appraisal of the impact new roles have had in that service.

Currently, the finalised tool has been agreed by the management team of the WBEF

network and is available for future use. Continuous evaluation by the action


research group will monitor the effectiveness of the tool and reflect on its usefulness

within the practice area.

Summary

This chapter has discussed the findings of the research study and how the literature supports or compromises the results. It has examined the data extrapolated from a

market research questionnaire and compared its findings with those of others

researching in the field. The findings of each phase of the study have been evaluated

in relation to current evidence. The development of an impact evaluation tool has been debated, along with how the tool will be used to add to the evidence base

surrounding the development of the AP role. A justification for a mixed method

approach has been offered and the intention for the tool to be available for use by

stakeholders. It has explored the research methodology and acknowledged any

limitations that exist.

Chapter 6: Conclusions and Recommendations

In conclusion this research study came from the initial position of how an action

research approach could develop an impact evaluation tool to support managers in

developing their bands 1-4 staff. It was carried out in line with a project identified by

the WBEF network to enable both qualitative and quantitative evidence to be

gathered to meet the demands for information requested by stakeholders. It also

offers the opportunity for managers who have introduced new roles to evaluate the

holistic effectiveness and influence of that role in their own service. Considering

evidence from research articles reviewed regarding the evaluation of APs, it was

found that the findings were predominantly qualitative in nature. This confirmed the

hypothesis of the action research group that there was a lack of quantifiable data in

relation to the cost benefits of the AP role and its direct effects on patient

throughput. Information received from an initial market research questionnaire also

provided an indication of what priorities stakeholders had when considering the


development of new roles in their service. There was very little evidence when

considering the literature review that any tangible tool had been developed. The

action research team concluded that the development of a tool that would

holistically evaluate the impact of the AP role, would be a useful resource for stakeholders to use in their service. Furthermore, it was anticipated that, with permission, their information could be incorporated into information sheets and shared with

others.

The research was carried out in phases, and anticipated future phases were identified.

Each phase was evaluated by the action research team to assist in the design of the

final tool. The initial market research questionnaire did return results; however, these were limited and the population of the survey could have been more wide-reaching.

It did provide a baseline to work from and assisted in identifying what needed to be

incorporated into the impact evaluation tool.

The draft tool was piloted with a service manager and the results and methodology

analysed. It was evident from the pilot that there were limitations to the tool. The

pilot indicated that the tool needed to be redesigned, reformatted and made more explicit if it was to produce the data deemed most useful to stakeholders and potential future

stakeholders. The results from the pilot had mostly captured qualitative data and not

a mix as expected. Some of the data gathered was unusable as the service

manager had used the internet as a source of evidence which meant it did not relate

to her service. Time constraints were also highlighted as an issue, as was whether it would

be realistic to expect busy managers to give the attention the tool needed. An

interview with the WBEF who carried out the follow up interview with the service

manager, also raised issues around motivation of the participants and highlighted

that the service area completing the impact evaluation tool, must have an intrinsic

interest in the process and results. Comments on terminology were also considered

as was the necessity to conduct an interview with the participant. The action

research group also considered further adaptation to the impact evaluation tool. The

redesigned tool was then scrutinised by a small expert focus group with vast

experience of supporting APs before it was finalised. The issue of ensuring high

motivation of the participants was addressed by the creation of a flyer which

highlighted the expected time scale for completion and the benefits of conducting

the impact evaluation itself. The flyer will ensure that only managers who wish to participate in the process come forward in the first instance. For logistical

reasons the impact evaluation tool has to date only been piloted with one


stakeholder. It is now necessary to work with other organisations, using the newly

designed tool to compare results. The finalised tool has been developed and is now

ready to be used not only to evaluate the AP role but any new role that has been

developed.

The aims of the project were twofold:

1. To provide managers with evidence based resources that can inform

their decision making when contemplating the training and development

of non-registered staff into the role of AP.

2. To develop a tool that will evaluate the impact of the AP role within a

service area.

The impact evaluation tool will add a different dimension to the information available

and so add to the resources that stakeholders will have access to. As the impact

evaluation tool requires participants to consider what evidence they have to support

their claims, it will offer real life data that can then be used to consider future

decision making. Equally, stakeholders will be able to objectively evaluate the

effectiveness of new roles they have introduced themselves, including potential cost

efficiencies, effects on patients and impact on key performance indicators by using

the tool. Although it is acknowledged that the tool has limitations and may not have

wide appeal, it is concluded the aims of the research study have been achieved.

The true potential of the finalised impact evaluation tool will only become evident

with future use.

Recommendations

The researcher makes the following recommendations for the future use of the finalised impact evaluation tool: -

1. That the WBEF network identifies service areas that are suitable and motivated to carry out an impact evaluation of new roles they have introduced into service. These should include both NHS and non-NHS organisations and cover a variety of service settings, to establish the flexibility of the tool and its adaptability.

2. The members of the WBEF network are trained in the use of the impact

evaluation tool and the purpose of the follow up semi-structured interview.

3. Stakeholders who currently have AP positions are encouraged to fully

evaluate the impact these roles have had in service.


4. The tool is used to evaluate new roles other than that of the AP.

5. The evidence gathered from carrying out the impact evaluation is converted into case studies that highlight both qualitative and quantitative data, which can be shared with a wider audience. The evidence can be used to

evaluate whether the initiative has had a direct impact on services and so

can be used to justify future investment into the roles.

6. The impact evaluation tool is used as a method of showcasing excellence

and sharing good practice.

7. Data collected around the education or training programme is fed back to the

education/training provider to influence the curriculum content.

8. An abridged version of the impact evaluation tool is developed as an online

resource that stakeholders can fill out themselves in the future.

9. The action research group continues to monitor the use and usefulness of the

impact evaluation tool and reviews its effectiveness at regular intervals.

10. The impact evaluation tool evolves to continue to capture the evidence that

is useful to all stakeholders both current and potential.

Final thoughts

The aim of this dissertation was 'An action research project to develop a tool to evaluate the impact of the AP role in practice and inform service development within NHS and non-NHS organisations'. To this extent the aim has been realised. The

action research model has provided an effective way to investigate the research

question and ultimately produced an impact evaluation tool that can be used by

current service areas and potential service areas. It encourages services to focus

on the evidence base and share good practice with other practitioners, whilst

offering stakeholders an additional resource to evaluate their service.


Bibliography

Accredited qualifications. (2012). Qualifications and Credit Framework. [Online]

Available from: http://www.accreditedqualifications.org.uk/qualifications-and-credit-

framework-qcf.html. [Last accessed 30th April 2016].

Allen, K. McAleavy, J.M. and Wright, S. (2012). An evaluation of the role of the

Assistant Practitioner in critical care. British Association of Critical Care Nurses. 18

(1), pp.14-23.

Bamberger, M. (2012). Introduction to mixed methods of impact evaluation. [Online]

Available from: https://www.interaction.org/sites/default/files/Mixed%20Methods

%20in%20Impact%20Evaluation%20(English).pdf. [Last accessed 14th March

2016].


Basit, T. N., Eardley, A. Borup, R., Shah, A. and Hughes, A. (2015). Higher

education institutions and work-based learning in the UK: employer engagement

within a tripartite relationship. Higher Education. 70 (6), pp.1003-1015.

Bell, J. (2006). Doing your research project. A guide for first-time researchers in

education, health and social care. 4th ed. Berkshire: Open University Press.

Benson, L. and Smith, L. (2006). Delivering The Workforce: Evaluation of the introduction of Assistant Practitioners in seven sites in Greater Manchester, second report May 2006. Manchester: Centre for Public Policy and Management, University of Manchester.

Bonbright, D. (2012). Use of impact evaluation results. [Online] Available from:

https://www.interaction.org/sites/default/files/Use%20of%20Impact%20Evaluation

%20Results%20-%20ENGLISH.pdf. [Last accessed 14th March 2016].

Boud, D., Solomon, N. and Symes, C. (2001). New Practices for New Times. In:

Boud, D. and Solomon, N.(eds.) (2003) Work-based learning: a new higher

education? Buckinghamshire: Open University Press. 3-17.

Bowling, A. (2009). Research methods in health: Investigating health and health

services. Maidenhead: Open University Press.

Brown, L., Hedgecock, L., Simm, C. and Swift, J. (2011). Advanced Paramedics

deliver on the front line. Health Service Journal. 25 (None available). pp. 24 - 26.

Bungay, H., Jackson, J. and Lord, S. (2015). Exploring assistant practitioners’

views of their role and training. Nursing Standard. 30 (30), pp.46-52.

Burke Johnson, R. and Onwuegbuzie, A. J. (2004). Mixed methods research: A research

paradigm whose time has come. Educational researcher, [Online] 33 (7), pp. 14-26.

Available from: http://www.jstor.org/stable/3700093. [Last accessed 6th February

2016]

Cavendish, C. (2013). The Cavendish Review an independent review into

Healthcare Assistants and Support Workers in the NHS and social care settings.

[Online] Available from:


https://www.gov.uk/government/uploads/system/uploads/attachment_data/file/

236212/Cavendish_Review.pdf. [Last accessed 5th April 2016].

Chapman, A. (n.d.). Pest market analysis tool [Online] Available from:

http://www.businessballs.com/pestanalysisfreetemplate.htm. [Last accessed 30th November 2015].

Chivite-Matthews, N. and Thornton, P. (2011). Guidance on evaluating the impact

of interventions on business. [Online] Available from:

https://www.gov.uk/government/uploads/system/uploads/attachment_data/file/

212318/11-1085-guidance-evaluating-interventions-on-business.pdf. [Last accessed

3rd April 2016].

CIPD. (2015). PEST analysis: Resource summary. [Online] Available from:

http://www.cipd.co.uk/hr-resources/factsheets/pestle-analysis.aspx. [Last accessed

15th March 2016].

Clifford, C. (1997). Qualitative research methodology in nursing and health care.

London: Churchill Livingstone.

Cohen, L., Manion, L., and Morrison, K. (2013). Research Methods in Education.

7th ed. Oxon: Routledge.

Creswell, J.W. (2007). Qualitative inquiry and research design: Choosing among

five approaches. 2nd ed. London: Sage.

Department of Health. (2010). Equity and excellence: Liberating the NHS.

[Online] Available from:

https://www.gov.uk/government/uploads/system/uploads/attachment_data/file/

213823/dh_117794.pdf. [Last accessed 11th March 2015].

Department of Health. (2013). Patients first and foremost the initial government

response to the report of the Mid Staffordshire NHS Foundation Trust public inquiry.

[Online] Available from:

https://www.gov.uk/government/uploads/system/uploads/attachment_data/file/

170701/Patients_First_and_Foremost.pdf. [Last accessed 18th April 2016].

Duffy, B. (2006). The analysis of documentary evidence. In: Bell, J.(ed.) Doing your

research project: A guide for first-time researchers in education, health and social

sciences. 4th ed. Berkshire: Open University Press. 123-133.


Francis, R. (2013) Report of the Mid Staffordshire NHS Foundation Trust Public

Inquiry: Executive Summary. London: The Stationery office

Garbarino, S. and Holland, J. (2009). Quantitative and qualitative methods in

impact evaluation and measuring results. [Online] Available from:

http://www.gsdrc.org/docs/open/eirs4.pdf. [Last accessed 11th March 2016].

Gray, D. E. (2009). Doing research in the real world. London: Sage.

Health Education England. (2014). Widening participation, it matters: Our strategy

and initial action plan. [Online] Available from:

https://www.hee.nhs.uk/sites/default/files/documents/WES_Widening-Participation-

Strategy_Booklet.pdf. [Last accessed 2nd April 2016].

Health Education England. (2016). Building capacity to care and capability to treat:

a new team member for health and social care in England. [Online] Available from:

https://hee.nhs.uk/sites/default/files/documents/Response%20to%20Nursing

%20Associate%20consultation%2026%20May%202016.pdf. [Last accessed 5th

July 2016].

Health Research Authority. (n.d.). Determine whether your study is research. [Online]

Available from:

http://www.hra.nhs.uk/research-community/before-you-apply/determine-whether-

your-study-is-research/. [Last accessed November 2015].

Health Research Authority. (n.d.). Is my study research? [Online] Available from:

http://www.hra-decisiontools.org.uk/research/. [Last accessed 15th November

2015].

Heron, J. (1996). Co-operative inquiry: Research into the human condition. London:

Sage.

Hickson, M. (2008). Research handbook for health care professionals. Singapore:

Blackwell Publishing.

Howat, C. and Lawrie, M. (2015). Sector insights: skills and performance challenges

in the health and social care sector. [Online] Available from:

https://www.gov.uk/government/uploads/system/uploads/attachment_data/file/

430164/

Executive_Summary_Skills_and_performance_challenges_in_health_and_social_c

are.pdf. [Last accessed 6th April 2016].

Kessler, I., Heron, P., Dopson, S., Magee, H., Swain, D. and Askem, J. (2010). The

nature and consequences of support workers in a hospital setting.

[Online] Available from: http://www.netscc.ac.uk/hsdr/files/project/SDO_FR_08-

1619-155_V01.pdf. [Last accessed 5th January 2016].

Killgannon, H. and Mullens, C. (2008). What are assistant practitioners? British

Journal of Healthcare Assistants 2 (10), pp.513-515.

Kirkpatrick, J. and Kayser Kirkpatrick, W. (2009). The Kirkpatrick four level: A fresh

look after 50 years. [Online] Available from:

http://www.kirkpatrickpartners.com/Portals/0/Resources/Kirkpatrick%20Four

%20Levels%20white%20paper.pdf. [Last accessed 16th May 2016].

Kirkpatrick, D.L. and Kirkpatrick, J.D. (2012). Evaluating training programs: the four

levels. 3rd ed. San Francisco: Berrett-Koehler.

Kolb, D.A. (1984). Experiential learning: Experience as a source of learning and

development. New Jersey: Prentice Hall.

Leach, C. and Wilton, E. (2008). Evaluation of Assistant/Associate Practitioner roles

across NHS South Central. [Online] Available from:

http://www.workforce.southcentral.nhs.uk/pdf/NESC_Evaluation_Report_Final_2009

0212.pdf. [Last accessed 9th April 2016].

Mackey, H. and Nancarrow, S. (2005). The introduction and evaluation of an

occupational therapy assistant practitioner. Australian Occupational Therapy

Journal. 52 (4), pp.293-301.

Miller, L., Williams, J., Marvell, R., and Tassinari, A. (2015). Assistant Practitioners

in the NHS in England. [Online] Available from:

http://www.skillsforhealth.org.uk/index.php?

option=com_mtree&task=att_download&link_id=175&cf_id=24. [Last accessed 11th

March 2016].

Miller, L. (2011). The Role of Assistant Practitioners in the NHS: Factors affecting

evolution and development of the role. Bristol: Skills for Health.

Miller, L. (2013). Assistant Practitioners in the NHS: Drivers, deployment,

development. Bristol: Skills for Health.

Miller, L., Williams, J. and Edwards, H. (2014). Assistant Practitioner roles in the

Welsh Health Sector enhancing the potential for future development. [Online]

Available from: https://www.myhealthskills.com/uploads/articles/files/Assistant

%20Practitioners%20in%20Wales%202014(1)-1393511549.pdf. [Last accessed

12th April 2016].

Mortimer, D. (2016). Building capacity to care and capability to treat – a new team

member for health and social care: Consultation - NHS Employers response. Leeds:

NHS Employers.

National Institute for Health and Care Excellence. (2014). Safe staffing for nursing in

adult inpatient wards in acute hospitals. [Online] Available from:

https://www.nice.org.uk/guidance/sg1/resources/safe-staffing-for-nursing-in-adult-

inpatient-wards-in-acute-hospitals-61918998469. [Last accessed 13th April 2016].

NHS Education for Scotland. (2010). Healthcare Support Workers. [Online]

Available from:

http://www.hcswtoolkit.nes.scot.nhs.uk/media/3752/hcsw_literaturereview.pdf. [Last

accessed 1st April 2016].

NHS Employers. (2012). Evaluating an assistant practitioner project. [Online]

Available from:

http://www.nhsemployers.org/your-workforce/retain-and-improve/standards-and-

assurance/developing-your-support-workforce/assistant-practitioners/evaluating-an-

assistant-practitioner-project#1. [Last accessed 11th May 2016].

NHS Employers. (2015). Assistant practitioners. [Online] Available from:

http://www.nhsemployers.org/your-workforce/retain-and-improve/standards-and-

assurance/developing-your-support-workforce/assistant-practitioners. [Last

accessed 31st January 2016].

NHS Employers. (2015a). Talent for care and widening participation resources.

[Online] Available from: http://www.nhsemployers.org/news/2015/09/update-on-

talent-for-care-and-widening-participation. [Last accessed 4th July 2016].

NHS Employers. (2016). Our response to the nursing associate role

consultation. [Online] Available from:

http://www.nhsemployers.org/news/2016/03/our-response-to-the-nursing-associate-

role-consultation. [Last accessed 5th May 2016].

NHS England. (2014). Five year forward view. [Online] Available from:

https://www.england.nhs.uk/wp-content/uploads/2014/10/5yfv-web.pdf. [Last

accessed 30th March 2016].

NHS Wirral Research & Development Team. (2011). Fact sheet 6: How do I

evaluate my project or service? [Online] Available from:

http://info.wirral.nhs.uk/document_uploads/evidence-factsheets/6Howevaluateprojec

tservice.pdf. [Last accessed 31st January 2016].

Organisation for Economic Co-operation and Development. (n.d.). Outline of

Principles of impact evaluation: Part 1 key concepts. [Online] Available from:

http://www.oecd.org/dac/evaluation/dcdndep/37671602.pdf. [Last accessed 11th

March 2016].

Perrin, B. (2012). Use of impact evaluation results. [Online] Available from:

https://www.interaction.org/sites/default/files/Use%20of%20Impact%20Evaluation

%20Results%20-%20ENGLISH.pdf. [Last accessed 14th March 2016]. 

Philips, S. (2012). Work-based learning in health and social care. British Journal of

Nursing. 21 (5), pp.918-922.

Phillips, J.J. (2003). Return on investment in training and performance programs

(Improving human performance). 2nd ed. London and New York: Routledge, Taylor

and Francis group.

Polit, D.F. and Beck, C.T. (2006). Essentials of Nursing research: Methods,

appraisal and utilization. 6th ed. Philadelphia: Lippincott, Williams and Wilkins.

Powell, M., Brown, T. and Smith, J. (2016). Skill mix: Using the assistant practitioner

to drive efficiency. Practice Nursing. 27 (1), pp.40-43.

Raelin, J.A. (2008). Work-Based Learning. [Online] Jossey-Bass. Available from:

http://www.myilibrary.com?ID=121745. [Last accessed 29th April 2016].

Richard, W. (2002). Work-based learning in health: Evaluating the experience of

learners, community agencies and teachers. Teaching in Higher Education. 7 (1),

pp.47-63.

Rogers, P. J. (2012) Introduction to impact evaluation. [Online] Available from:

https://www.interaction.org/document/introduction-impact-evaluation. [Last

accessed 11th March 2016].

Royal College of Nursing. (2009). The Assistant Practitioner Role: A policy

discussion paper. Policy Briefing 06/2009. London: RCN Policy Unit.

Royal College of Nursing. (2010). Assistant practitioner scoping project. [Online]

Available from:

https://www2.rcn.org.uk/__data/assets/pdf_file/0003/379155/003880.pdf. [Last

accessed 12th May 2016].

Royal College of Nursing. (2016). Royal College of Nursing response to Health

Education England’s consultation: Building capacity to care and capability to treat –

a new team member for health and social care. [Online] Available from:

https://www.rcn.org.uk/professional-development/publications/pub-005567. [Last

accessed 5th April 2016].

Seagraves, L., Osborne, M., Neal, P., Dockrell, R., Hartshorn, C., and Boyd,

A. (1996) Learning in Smaller Companies: Final Report: Stirling. Educational Policy

and Development University of Stirling.

Shaw, A. (2012). Scope of Practice of Assistant Practitioners. [Online] Available

from: http://www.sor.org/learning/document-library/scope-practice-assistant-

practitioners. [Last accessed 14th April 2016].

Sheehan, J. (1986). Aspects of research methodologies. Nurse Education Today. 6

(5), pp.193-203.

Skills for Health. (2009). Nationally Transferable Skills. Bristol: Skills for Health.

Skills for Health. (2009a). Core Standards for Assistant Practitioners. Bristol: Skills

for Health.

Skills for Health. (2010). Key Elements of the Career Framework. [Online] Available

from: http://www.skillsforhealth.org.uk/index.php?

option=com_mtree&task=att_download&link_id=163&cf_id=24. [Last accessed 20th

April 2016].

Skills for Health. (2015). The Healthcare Support Workforce: A case for ongoing

development and investment. Working paper 1. [Online] Available from:

http://www.skillsforhealth.org.uk/index.php?

option=com_mtree&task=att_download&link_id=179&cf_id=24. [Last accessed 5th

April 2016].

Skills for Health. (2015a). New apprenticeship standards for healthcare support

workers and assistant practitioners – Approved by BIS. [Online] Available from:

http://www.skillsforhealth.org.uk/news/latest-news/item/209-new-apprenticeship-

standards-for-healthcare-support-workers-and-assistant-practitioners-approved-by-

bis. [Last accessed 5th July 2016].

Skills for Health. (2016). How can we act now to create a high-quality support

workforce in the UK's health sector? Pt 1/2. British Journal of Health Care

Assistants. [Online] 10 (3). pp. 44-47. Available from:

http://www.magonlinelibrary.com/doi/10.12968/bjha.2016.10.3.134 [Last accessed

5th April 2016].

Skills for Health. (2016a). How we can act now to create a high quality support

workforce in the UK’s health sector? Working paper 2. [Online] Available from:

http://www.skillsforhealth.org.uk/index.php?

option=com_mtree&task=att_download&link_id=179&cf_id=24. [Last accessed 5th

April 2016].

Smith, J. and Brown, T. (2012). The Assistant Practitioner in Palliative and End of

Life Care. Journal of Community Nursing. 26 (3), pp. 4-6.

Smith, W. (2014). The Talent for Care: A national strategic framework to develop

the healthcare support workforce Part of Framework 15, the Health Education

England guide to action. [Online] Available from:

https://www.hee.nhs.uk/sites/default/files/documents/WES_TfC-National-Strategic-

Framework.pdf. [Last accessed 2nd April 2016].

Spilsbury, K., Stuttard, L., Adamson, J., Atkin, K., Borglin, G., McCaughan, D.,

McKenna, H., Wakefield, A. and Carr-Hill, R. (2009a). Mapping the introduction of

assistant practitioner roles in acute NHS (hospital) trusts in England. Journal of

Nursing Management. 17 (5), pp.615-626.

Spilsbury, K. and Atkin, K. (2009). The impact of Assistant Practitioners on acute

NHS Trusts. British Journal of Healthcare Assistants. 3 (10), pp.508-509.

Stern, E., Stame, N., Mayne, J., Forss, K., Davies, R. and Befani, B. (2012).

Broadening the range of designs and methods for impact evaluations. London:

Department for International Development.

Stewart-Lord, A., McLaren, S. and Ballinger, C. (2011). Assistant practitioners (APs)

perceptions of their developing role and practice in radiography: Results from a

national survey. Radiography: An International Journal of diagnostic imaging and

radiation therapy. 17 (3), pp.193–200.

Stringer, E.T. (1996). Action Research: A handbook for practitioners. London: Sage.

Trochim, W.M.K. (2006) Research Methods Knowledge Base. [Online] Available

from: http://www.socialresearchmethods.net/kb/index.php. [Last accessed 21st

February 2016].

Vaughan, S., Melling, K., O’Reilly, L. and Cooper, D. (2014). Understanding the

debate around regulation of support workers. British Journal of Nursing. 23 (5),

pp.260-263.

Wakefield, A., Spilsbury, K., Atkin, K., McKenna, H., Borglin, G. and Stuttard, L.

(2009). Assistant or substitute: exploring the fit between national policy vision and

local practice realities of assistant practitioner job descriptions. Health Policy. 90

(2), pp.286-295.

Wakefield, A., Spilsbury, K., Atkin, K. and McKenna, H. (2010). What work do

assistant practitioners do and where do they fit in the nursing workforce? Nursing

Times. 106 (12), pp.14-17.

Willis, P.G. (2015). Raising the bar shape of caring: A review of the future education

and training of registered nurses and care assistants. [Online] Available from:

https://www.hee.nhs.uk/sites/default/files/documents/2348-Shape-of-caring-review-

FINAL_0.pdf. [Last accessed 4th April 2016].

Wilson, M. (2008). East Midlands Assistant Practitioner Project: Assistant

Practitioner Tool Kit. [Online] Available from:

http://www.nhsemployers.org/~/media/Employers/Publications/Assistant-

Practitioner-Toolkit.pdf. [Last accessed 30th January 2016].

Winter, R. and Munn-Giddings, C. (2002). A Handbook for Action Research in

Health and Social Care. London: Routledge. eBook collection [EBSCOhost, viewed

8th April 2016].

Wright, W., McDowell, R.S., Leese, G. and McHardy, K.C. (2010). A scoping

exercise of work-based learning and assessment in multidisciplinary health care in

Scotland. Journal of Practice Teaching and Learning. 10 (2), pp.28-42.

List of Appendices

Appendix 1: RIAT project plan

Appendix 2: PEST analysis

Appendix 3: Covering e-mail for Survey Monkey

Appendix 4: Survey Monkey results

Appendix 5: Follow-up e-mail

Appendix 6: RE1 Ethics Form from the University

Appendix 7: Ethics permission from the Trust

Appendix 8: Impact evaluation tool draft

Appendix 9: Guidance notes for filling out the impact evaluation tool (WBEF)

Appendix 10: WBEF process map for pilot phase

Appendix 11: Guidance notes for stakeholder filling out the impact evaluation tool

Appendix 12: Impact evaluation tool final version

Appendix 13: WBEF process map final document

Appendix 14: Flyer for stakeholders
