

1. A Comparison of Strategies for Assessing Fidelity to Evidence-Based Interventions

Shannon Wiltsey Stirman, PhD
National Center for PTSD and Stanford University

@slwiltsey

2. Acknowledgements

• Coauthors
  • Candice Monson, PhD (Co-PI)
  • Norman Shields, PhD (Co-I)
  • Patricia Carreno
  • Kera Mallard
  • Matthew Beristianos
  • Sharon Hasslen

• Research Funding
  • Canadian Institute of Health Research RN327031
  • National Institute of Mental Health R01 MH106506

• The authors have no conflicts of interest to report.

3. Importance of assessing fidelity

• Measure of success of implementation strategies such as training
• Key implementation outcome (Proctor et al., 2009)
• Necessary to understand unexpected outcomes (e.g., voltage drop) (Schoenwald et al., 2010)
• Fidelity support has been shown to improve training outcomes (Lu et al., 2014) and decrease turnover (Aarons et al., 2009)


4. Relationship between fidelity and outcomes

• Mixed findings re: observation
  • Meta-analysis found no overall relationship (Webb et al., 2010)
  • Fidelity predicted changes in depression (Webb et al., 2010)
  • Temporal confounds
  • Subsequent findings for CPT for PTSD (Farmer et al., 2015)
• Self-report
  • Some researchers have found associations with outcomes (Hanson et al., 2015)
• Clinical worksheets
  • Fidelity predicted subsequent symptom change (Stirman et al., 2015)


5. Exploring associations with clinical outcomes

• Implications for data collection
• Important to rule out temporal confounds
• Potential for moderating variables to also impact fidelity
• Possible strategies
  • Rate all early sessions and examine their impact on subsequent symptom change
  • Look at session-to-session change
• All require numerous fidelity ratings
  – Lower-burden, reliable methods would advance this line of research

6. Considerations in assessing fidelity

Strategy | Advantages | Disadvantages
Observation | Accuracy | Rater agreement, time
Self-report | Less time intensive than observation | Accuracy unknown, response bias
Clinical documentation | Integrated into care, accessible | Clinician burden, response bias
Interview | Interviewer can probe for details | Clinician burden, potential response bias
Survey | Typically brief | Clinician burden
Work samples (e.g., CBT worksheets) | Integrated into care, minimizes clinician burden | Requires rating


7. Method

8. Study design

• Fidelity to cognitive processing therapy (CPT) was assessed in a sample of clinician participants from a study of implementation support strategies
• Clinician participants completed the following:
  • One-time interview
  • Monthly self-report (re: adherence to CPT)
  • Session note with adherence checklist
  • CPT worksheets
  • Recordings of therapy sessions


9. Sample Characteristics

Therapists
• N=40
• 32% M; 68% F
• Age 42 (SD=11)
• 86% White; 4% Hispanic
• 49% PhD/PsyD/MD; 33% Master's; 18% Bachelor's/Other
• Years of practice = 11 (SD=8)
• 36% Private Practice; 21% Community Mental Health; 11% Federal; 18% Provincial; 15% Other

Clients
• N=77
• 41% M; 57% F; 1% T
• Age 40 (SD=14)
• 75% White; 3% Black; 3% South Asian; 5% Hispanic/Latino; 9% Other
• 78% English First Language; 9% French
• 40% Military or Veteran
• 65% 12 or more years of education


10. Observer ratings

• Raters were trained to 90% agreement on adherence and competence ratings
• Raters reviewed full audio of CPT sessions
• Dichotomous ratings of adherence to unique and essential CPT items
• Seven-point scale for competence on each CPT item
• Decision rules to foster agreement
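Dichotomous two-rater agreement of this kind is typically summarized with Cohen's kappa, which corrects raw percent agreement for chance. A minimal sketch of the statistic; the function name and the sample ratings below are illustrative, not drawn from the study's data:

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Chance-corrected agreement between two raters' categorical ratings."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    # Observed proportion of agreement
    po = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Expected chance agreement, from each rater's marginal category frequencies
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    pe = sum(freq_a[c] * freq_b[c] for c in freq_a) / n ** 2
    if pe == 1.0:  # both raters used a single identical category throughout
        return 1.0
    return (po - pe) / (1 - pe)

# Item-level adherence for one session (1 = element delivered, 0 = not)
observer_1 = [1, 1, 0, 1, 0, 1, 1, 0]
observer_2 = [1, 1, 0, 1, 1, 1, 0, 0]
print(round(cohens_kappa(observer_1, observer_2), 2))
```

Because kappa discounts agreement expected by chance, it is lower than raw percent agreement whenever one category dominates, which is why training to a kappa criterion is stricter than training to percent agreement.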


11. Interview

• Interviewers asked about:
  • The extent to which the therapist followed the CPT protocol
  • The type, nature, and frequency of adaptations
• Global rating of adherence (generally adherent vs. generally non-adherent)


12. Self-Reports

• In the past month, how closely have you followed the CPT protocol with your cases? (0-3 scale)
• Clinical note checklist
  • Checked off each unique and essential item completed in a given session


13. CPT worksheets

Example ABC worksheet (reconstructed):

A: Activating Event (Something happens) | B: Belief (I tell myself something) | C: Consequence (I feel something)
Commanding officer making orders that got us into crossfire. | "People in authority cannot be trusted. He put us in harm's way." | I feel fearful and distrusting. I avoid people in authority, or argue with them about their decisions when I have to interact with them.
Tom told me to get over it. | I hate him. | Upset.

Does it make sense to tell yourself "B" above? Yes. He doesn't understand what happened and he's said hurtful things.
What can you tell yourself on such occasions in the future? It's probably best not to talk with him about it.

Clinical Note: "CPT session 3. Reviewed ABC sheets, identified stuck points. Assigned ABC sheets and trauma narrative for homework"

Worksheets combined with clinical notes can provide richer information.


15. CPT worksheet

• Rater scores each section for adherence (0-1)
• Assigns a competence rating for each column/section
• Previous research found high inter-rater reliability
  • Adherence k=.68-.98
  • Competence ICC=.63-.89
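The competence reliabilities here are intraclass correlations. A rough sketch of one common form, the two-way random-effects, single-rater, absolute-agreement ICC(2,1), computed from its ANOVA mean squares; the function and the session scores are illustrative assumptions, not the study's data or necessarily the ICC form the study used:

```python
def icc_2_1(ratings):
    """ICC(2,1): each row holds one target's (session's) scores, one per rater."""
    n, k = len(ratings), len(ratings[0])
    grand = sum(x for row in ratings for x in row) / (n * k)
    row_means = [sum(row) / k for row in ratings]
    col_means = [sum(row[j] for row in ratings) / n for j in range(k)]
    # Two-way ANOVA decomposition: targets (rows), raters (columns), residual
    ss_rows = k * sum((m - grand) ** 2 for m in row_means)
    ss_cols = n * sum((m - grand) ** 2 for m in col_means)
    ss_total = sum((x - grand) ** 2 for row in ratings for x in row)
    ms_rows = ss_rows / (n - 1)
    ms_cols = ss_cols / (k - 1)
    ms_error = (ss_total - ss_rows - ss_cols) / ((n - 1) * (k - 1))
    return (ms_rows - ms_error) / (
        ms_rows + (k - 1) * ms_error + k * (ms_cols - ms_error) / n
    )

# Hypothetical competence scores for six sessions from two raters
sessions = [[4, 4], [3, 2], [5, 5], [2, 2], [4, 3], [1, 1]]
print(round(icc_2_1(sessions), 2))  # → 0.92
```

Unlike a Pearson correlation between the two raters, this absolute-agreement ICC is penalized when one rater scores systematically higher than the other.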


16. Results

17. Results: Observer

• Adherence m=.85, sd=.25 (0-1 scale)
• Competence m=2.98, sd=1.13 (0-6 scale)
• Feasibility
  • 60-75 minutes per rating
  • 40 clinicians turned in 485 sessions
• Reliability: k=.87 (adherence), ICC=.78 (competence)
• Treated as the "gold standard"


18. Results: Interview

• Rater agreement: simple to reach 95% agreement
• Feasibility:
  • 30 completed (75%)
  • One-hour interview (included other topics)
  • Coding is brief
• Less precise, as it encompasses a larger timeframe
• 37% rated as generally adherent (0-1 scale)
• Agreement with observer ratings
  • Adherence: r=.048, p=.57
  • Competence: rpb=.12, p=.16


19. Monthly self-report

• m=2.36, sd=.65 (1-4 scale)
• Feasibility: ~65% response rate (including dropouts)
• Received 56 that could be matched with a randomly selected rating
• Agreement with observer rating
  • Adherence: r=.42, p=.001
  • Competence: r=.13, p=.31
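The agreement figures on these results slides are Pearson correlations between each lower-burden measure and the matched observer rating. A small self-contained sketch; the function name and the paired scores are made up for illustration:

```python
import math

def pearson_r(x, y):
    """Pearson correlation between two equal-length score lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical pairs: monthly self-reported adherence vs. observed adherence
self_report = [2, 3, 1, 3, 2, 0, 3, 2]
observer = [0.8, 0.9, 0.5, 1.0, 0.7, 0.4, 0.8, 0.9]
print(round(pearson_r(self_report, observer), 2))
```

The point-biserial rpb reported for the interview ratings is this same formula applied with one dichotomous variable (e.g., generally adherent = 1, not = 0).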


20. Clinical note checklist (self-report)

• m=.73, sd=.30 (73% of session elements)
• Feasibility: depends on system
  • We requested one per month because it wasn't embedded
• Received 42 that could be matched with randomly selected observer ratings
• Agreement with observer ratings
  • Adherence: r=.87, p<.001
  • Competence: r=.77, p=.003


21. Worksheet Ratings

• Adherence m=.15, sd=.05 (0-1 scale)
• Competence m=.25, sd=.17 (0-2 scale)
• Feasibility: depends on system of collection
  • Challenges in matching up with some sessions
  • Therapists posted worksheets for clinical challenges
  • 12 could be matched
• Correspondence with observer ratings:
  • Adherence: r=.08, p=.85
  • Competence: r=.21, p=.62


22. Discussion

23. Discussion

• Clinical notes appeared to be the most reliable proxies for observer ratings
• Monthly reports may be adequate under some circumstances
• Data on worksheets should be interpreted with caution
  • Previous research found high correlations with observer ratings
  • Low sample size
  • Data collection procedures need to be carefully considered
• Interviews should probably use a different scale


24. Future directions

• Larger datasets

• Prospective research

• Consider/refine strategies for data capture

• Examine associations with outcomes

25. Contact

[email protected]