Webinar 3: Reporting the
results of NCOP local
evaluation studies
13th February 2020
Anna Mountford-Zimdars and Joanne Moore - Evaluation Capability Team
Joanna Lilley – Wessex Inspiration Network
Jatinder Sandhu – CFE Research
Format
Presentation:
General remarks
National meta-review of evidence
How can we support each other?
Q and A session
Followed by Tweet chat:
#ncopevaluation
Audiences
Planning to communicate
Example from Shaping Futures
Key ingredients
Clarity on the research questions & key outcomes measured (and why they're important);
Appreciation of the type of evidence being collected;
Appreciation of the strength of the approach, and any limitations;
A coherent understanding of activity-outcome links (mechanisms involved in generating outcomes);
Evidence-based conclusions and recommendations (replicable? generalisable?)
Roles
OfS:
oversight and leadership;
incentivising doing evaluation;
requirements and guidance.
CFE Research/NCOP:
evaluating the impact of NCOP;
impact evidence focused on specific research questions.
Capacity Building Team:
Building capabilities in evaluation at the partnership level;
Aligning with national and local contexts;
Sharing evaluation in practice.
TASO:
causal and impact evidence;
synthesising and sharing this evidence;
raising capabilities to generate, interpret and use this evidence;
focus on identified priority areas.
Submitting to the national meta-review of evidence
• Quantitative and qualitative evidence:
– Internal report;
– Summary;
– Publications;
– Journal articles;
– Conference presentations/outputs.
• Focus on impact rather than process
• Submit at any time until June 2021 via [email protected].
• Three formal calls: March 2020, November 2020 and June 2021.
http://cfe.org.uk/app/uploads/Partnership-call-for-evidence-guidance-FINAL.docx
Yes | No
Coherent strategy | Disjointed activities
Clear aim of what activities seek to achieve | Aims developed after activity
Have a target as well as a control or comparison group | Using groups that are not comparable
Approach and activities underpinned by evidence from literature or other evaluations | No rationale for developing approach and activities
Select indicators of your impact | No concept of measuring success
Could use an experimental or quasi-experimental design | Selection bias in comparator groups
Shared understanding of processes involved | The model of change is not shared
Quantitative or qualitative data – or both, 'triangulation' is good! | Information not systematically collected
Think about selection bias and try to avoid it |
Reason for activity | Ad hoc activities
Pre/post data (minimum two points in time) | Only collect information once
Clear conception of why the changes you seek to make are important | No understanding of needs of target groups
Analysis competently undertaken | Data not related to the intervention
Programme reviews | No review or evaluation
Sharing of results and review of activity | Results not used to inform decisions
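The 'Yes' side of the table stresses having comparable target and comparison groups. A minimal sketch (not from the webinar materials) of one way to check baseline comparability; the dataset and its columns (prior_attainment, fsm_eligible) are invented for illustration:

```python
# Minimal sketch: checking whether a target and comparison group are
# comparable at baseline, a simple guard against the 'groups that are
# not comparable' problem flagged in the table above.
# All column names and values are hypothetical.
import pandas as pd

df = pd.DataFrame({
    "group": ["target"] * 4 + ["comparison"] * 4,
    "prior_attainment": [48, 55, 51, 60, 47, 54, 52, 58],
    "fsm_eligible": [1, 0, 1, 0, 1, 1, 0, 0],  # free school meals flag
})

# Compare baseline characteristics across the two groups; large gaps
# suggest selection bias in the comparator group.
print(df.groupby("group")[["prior_attainment", "fsm_eligible"]].mean())
```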
Making judgments about evidence
Type 1: Narrative; Type 2: Empirical enquiry (encompasses Type 1 and the following); Type 3: Causal claims (encompasses Type 2 and the following)
• Type 1) Narrative: Evidence in your results that delivery is coherent and working to support desired changes (or not), and evidence from the literature that the approach is valid
• Type 2) Empirical: Quantitative and/or qualitative evidence of a difference compared to without your intervention
• Type 3) Causal: Pre/post treatment relative to an appropriate control/comparison group who did not take part in the intervention
• Strength of evidence: Weaker; Average; Stronger
http://cfe.org.uk/app/uploads/Partnership-call-for-evidence-guidance-FINAL.docx
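Type 3 evidence hinges on comparing the pre/post change in the intervention group against the same change in a comparison group. A minimal sketch of that logic as a difference-in-differences calculation (not from the guidance; all figures are invented):

```python
# Minimal sketch: a Type 3-style estimate as a difference-in-differences
# on pre/post mean scores for an intervention group and a comparison
# group. All numbers below are made up for illustration.

treated_pre, treated_post = 52.0, 61.0         # intervention group means
comparison_pre, comparison_post = 53.0, 56.0   # comparison group means

treated_change = treated_post - treated_pre           # 9.0
comparison_change = comparison_post - comparison_pre  # 3.0

# Change attributable to the intervention, assuming the two groups
# would otherwise have changed alike ('parallel trends').
effect = treated_change - comparison_change  # 6.0
print(f"Estimated intervention effect: {effect:.1f} points")
```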
Strengthening Type 1 evidence
Strengthening Type 2 evidence
Strengthening Type 3 evidence
Addressing validity issues
Note on validity issues available at: https://outreachevaluationhub.org/mod/resource/view.php?id=81
Acknowledging limitations
Every evaluation has some limitations!
Research design limitations;
Impact limitations;
Implementation limitations;
Data or statistical limitations;
Context limitations.
Are there common flaws or serious problems in methodology or design?
Allow the reader to understand the context and relevance
Lessons learnt
Design – It may not always be possible to choose the strongest evaluation design
Data – To what extent are the data collection processes reliable and embedded? (NB: it can be useful to look at patterns of missing data – see the sketch after this list)
Implementation – Delivery is not always as envisaged
Other lessons?
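As flagged under 'Data' above, looking at patterns of missing data can show how reliable and embedded collection processes are. A minimal sketch, assuming a hypothetical DataFrame whose columns (school, pre_score, post_score) are invented:

```python
# Minimal sketch: inspecting missing-data patterns before interpreting
# results. The data are hypothetical.
import numpy as np
import pandas as pd

df = pd.DataFrame({
    "school": ["A", "A", "B", "B", "C", "C"],
    "pre_score": [50, 47, np.nan, 52, 49, np.nan],
    "post_score": [58, np.nan, 55, 60, np.nan, np.nan],
})

# Share of missing values per variable.
print(df.isna().mean())

# Is missingness clustered (e.g. whole schools not returning data)?
print(df.groupby("school")[["pre_score", "post_score"]]
        .apply(lambda g: g.isna().mean()))
```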
How can we work together? Our ideas…
Continuing to develop the guidance
Practice examples
Learning from successful evaluations
Peer reviewing processes, regional linkages
Other things?
Thanks for listening!
Questions…
Followed by Tweet chat:
#ncopevaluation