1
How does technology help you with the evidence?
Peter Morley
The University of Melbourne
Royal Melbourne Hospital
2
3
20 - 22 September • Bologna • Italy • New technologies in resuscitation • RESUSCITATION 2018
COI
• Chair Australian Resuscitation Council (ARC)
• Australian and New Zealand Committee On Resuscitation: delegate on International Liaison Committee on Resuscitation (ILCOR)
• Continuous Evidence Evaluation Working Group (ILCOR/AHA)
4
5
6
7
Tsafnat et al. Systematic Reviews 2014, 3:74
8
9
10
Summary
• So why use technology?
• How can technology help?
• Still need a human touch (with content expertise)
11
Summary
• So why use technology?
• How can technology help?
• Still need a human touch (with content expertise)
12
Surely we are doing a good job?
13
14
We have a robust process!
15
[Flowchart: ILCOR evidence evaluation process. Science Advisory Committee (SAC) and Information Specialists support the Resuscitation Science Domains (Pre Arrest, Post Arrest, CPR and Adjuncts, Drugs, Electricity, Ventilation, Systems of Care, Education, Emergency Care, Special Circumstances, Diagnostics, Prognostics, End of Life). Publication alerts and Information Specialist searches feed Knowledge Synthesis Units (KSUs, on contract and tied to domains) and Evidence Reviewers (EvRevs, assigned by Task Force Chairs, with mentees to grow global systematic-review capacity). The Domain Lead revisits the PICO wording (an essential step to improve efficiency and the Task Force link) and activates the systematic review, supported by independent SEERS. The draft Consensus on Science with Treatment Recommendations and knowledge gaps requires Task Force, Science Advisory Committee, and ILCOR Exec approval; the resulting CoSTR and knowledge gaps are posted on the ILCOR website (citable and linked to publication), published by contractual agreement with the KSU or SR expert, and carried forward into knowledge translation.]
16
Process for the ILCOR reviews
● Task forces select, prioritize, and refine questions (using PICO format)
● Task forces allocate level of importance to individual outcomes
● CEE allocates PICO questions to domain lead
● Domain Lead allocates PICO questions to KSU or Systematic Reviewer
● KSU/SR works with Task Force content experts to develop and fine-tune search strategies (for PubMed, Embase, and Cochrane)
● Public invited to comment on PICO question wording, as well as the proposed search strategies
● Revised search strategies used to search databases (PubMed, Embase, and Cochrane)
● The articles identified by the search are screened by the KSU/SR and Task Force content experts using inclusion and exclusion criteria
● KSU/SR and Task Force content experts agree on final list of studies to include
● KSU/SR and Task Force content experts agree on assessment of bias for individual studies
● KSU/SR and Task Force content experts create GRADE evidence profile tables
● Task Force content experts create draft consensus on science statements and treatment recommendations
● Public invited to comment on draft consensus on science and treatment recommendations
● Detailed iterative review by task force of consensus on science and treatment recommendations to create final version
● Peer review of final CoSTR document
17
What’s wrong with the current process?
The 12 simulated reviews included an average of 3,831 titles/abstracts (range: 1,565-6,368) and 20 studies (6-42). The average review completion time was 463 days (range: 289-629) (881 person-hours [range: 243-1,752]). The average person-hours per activity were study selection 26%, data collection 24%, report preparation 23%, and meta-analysis 17%.
18
19
3,831 titles/abstracts
average review completion time was 463 days
881 person-hours
20
21
22
We identified 77 trials (28,636 patients) assessing 47 treatments with 54 comparisons, and 29 systematic reviews (13 published after 2013). From 2009 to 2015, the evidence covered by existing systematic reviews was consistently incomplete: 45% to 70% of trials, 30% to 58% of patients, 40% to 66% of treatments, and 38% to 71% of comparisons were missing. In the cumulative networks of randomized evidence, 10% to 17% of treatment comparisons were partially covered by systematic reviews and 55% to 85% were partially or not covered.
23
24
We know how to answer questions!
25
26
But even the GRADE group suggests we need action!
Guideline developers should advance methods to streamline systematic reviews or develop rapid reviews while maintaining quality and transparency.
27
28
Summary
• So why use technology?
• How can technology help?
• Still need a human touch (with content expertise)
29
Process for the ILCOR reviews
● Task forces select, prioritize, and refine questions (using PICO format)
● Task forces allocate level of importance to individual outcomes
● CEE allocates PICO questions to domain lead
● Domain Lead allocates PICO questions to KSU or Systematic Reviewer
● KSU/SR works with Task Force content experts to develop and fine-tune search strategies (for PubMed, Embase, and Cochrane)
● Public invited to comment on PICO question wording, as well as the proposed search strategies
● Revised search strategies used to search databases (PubMed, Embase, and Cochrane)
● The articles identified by the search are screened by the KSU/SR and Task Force content experts using inclusion and exclusion criteria
● KSU/SR and Task Force content experts agree on final list of studies to include
● KSU/SR and Task Force content experts agree on assessment of bias for individual studies
● KSU/SR and Task Force content experts create GRADE evidence profile tables
● Task Force content experts create draft consensus on science statements and treatment recommendations
● Public invited to comment on draft consensus on science and treatment recommendations
● Detailed iterative review by task force of consensus on science and treatment recommendations to create final version
● Peer review of final CoSTR document
30
Processes to discuss
● develop and fine-tune search strategies
● search databases (PubMed, Embase, and Cochrane)
● articles identified by the search are screened using inclusion and exclusion criteria
● agree on final list of studies to include
● assessment of bias for individual studies
● create GRADE evidence profile tables
31
Processes to discuss
● develop and fine-tune search strategies
● search databases (PubMed, Embase, and Cochrane)
● articles identified by the search are screened using inclusion and exclusion criteria
● agree on final list of studies to include
● assessment of bias for individual studies
● create GRADE evidence profile tables
32
Refining your search
33
Using PubMed and PubMed tools
34
Clinical queries
35
Echinacea and clinical queries
36
MeSH and saving alerts
37
38
Sign in or register
39
Commence your search, e.g. using MeSH
40
Cardiac arrest is matched to the MeSH term “Heart Arrest”: check the box, then “Add to search builder”
41
Cardiopulmonary resuscitation: select “OR”, then “Add to search builder”, then “Search PubMed”
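As a rough sketch, the same boolean MeSH search can also be run programmatically against PubMed via NCBI's E-utilities esearch endpoint. The query string mirrors the search built above; the use of Python with the requests library, and the retmax value, are illustrative assumptions rather than part of the ILCOR workflow.

```python
# Sketch: run the MeSH search ("Heart Arrest" OR "Cardiopulmonary Resuscitation")
# against PubMed via the NCBI E-utilities esearch endpoint.
import requests

ESEARCH_URL = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

query = '"Heart Arrest"[Mesh] OR "Cardiopulmonary Resuscitation"[Mesh]'

params = {
    "db": "pubmed",
    "term": query,
    "retmode": "json",
    "retmax": 20,  # number of PMIDs to return per call (illustrative)
}

response = requests.get(ESEARCH_URL, params=params, timeout=30)
response.raise_for_status()
result = response.json()["esearchresult"]

print("Total records matching the search:", result["count"])
print("First PMIDs returned:", result["idlist"])
```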
42
When search is complete, select “Create alert”
43
Enter a name, select the options you want, then save. You can review saved searches in My NCBI
44
45
Further information
• Simple instruction video: https://youtu.be/dSZ05X9wM3c
• NCBI video: https://www.youtube.com/watch?v=kXY5P4C_2l4
46
47
Processes to discuss
● develop and fine-tune search strategies
● search databases (PubMed, Embase, and Cochrane)
● articles identified by the search are screened using inclusion and exclusion criteria
● agree on final list of studies to include
● assessment of bias for individual studies
● create GRADE evidence profile tables
48
49
50
51
And full text
52
53
University or Hospital library services
54
55
56
“Related citations” and “Cited by”
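For context, the “Related citations” and “Cited by” links shown in PubMed can also be retrieved programmatically through the E-utilities elink endpoint. A minimal sketch, assuming Python with requests, a placeholder PMID, and the standard pubmed_pubmed (similar articles) and pubmed_pubmed_citedin (cited in PMC) link names:

```python
# Sketch: fetch "related citations" and "cited by" PMIDs for one record via elink.
import requests

ELINK_URL = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/elink.fcgi"

def linked_pmids(pmid, linkname):
    """Return PMIDs linked to `pmid` through the given elink link name."""
    params = {
        "dbfrom": "pubmed",
        "db": "pubmed",
        "id": pmid,
        "linkname": linkname,
        "retmode": "json",
    }
    data = requests.get(ELINK_URL, params=params, timeout=30).json()
    linksetdbs = data["linksets"][0].get("linksetdbs", [])
    return linksetdbs[0]["links"] if linksetdbs else []

pmid = "12345678"  # placeholder PMID for illustration only
print("Related citations:", linked_pmids(pmid, "pubmed_pubmed")[:10])
print("Cited by (PMC):", linked_pmids(pmid, "pubmed_pubmed_citedin")[:10])
```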
57
58
59
Saving to EndNote, and full text
60
61
62
63
64
65
66
67
68
And you can search multiple databases at once, e.g. Ovid
69
Processes to discuss
● develop and fine-tune search strategies
● search databases (PubMed, Embase, and Cochrane)
● articles identified by the search are screened using inclusion and exclusion criteria
● agree on final list of studies to include
● assessment of bias for individual studies
● create GRADE evidence profile tables
70
71
Screening of titles and abstracts
72
73
Searching for text
soft-margin polynomial Support Vector Machine (SVM)
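A minimal illustration of the screening idea, assuming scikit-learn and a toy set of labelled abstracts (real screening tools are trained on the much larger sets of decisions reviewers have already made): a soft-margin polynomial SVM is fitted on TF-IDF features, and unscreened abstracts are ranked by their distance from the decision boundary so reviewers can look at the most likely includes first.

```python
# Sketch: title/abstract screening with a soft-margin polynomial SVM.
# Abstracts, labels, and hyperparameters are toy placeholders.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

# 1 = relevant to the PICO question (keep for full-text review), 0 = exclude
train_abstracts = [
    "Randomised trial of chest compression depth in out-of-hospital cardiac arrest",
    "Adrenaline versus placebo for out-of-hospital cardiac arrest: a randomised trial",
    "Prevalence of hypertension in a community cohort",
    "Survey of nurse staffing levels in rural hospitals",
]
train_labels = [1, 1, 0, 0]

# Soft margin is controlled by C; the polynomial kernel degree here is 2.
screener = make_pipeline(
    TfidfVectorizer(),
    SVC(kernel="poly", degree=2, C=1.0),
)
screener.fit(train_abstracts, train_labels)

new_abstracts = [
    "Effect of mechanical CPR devices on survival after cardiac arrest",
    "Dietary salt intake and blood pressure in adolescents",
]
# Rank unscreened abstracts by signed distance from the decision boundary.
scores = screener.decision_function(new_abstracts)
for abstract, score in sorted(zip(new_abstracts, scores), key=lambda x: -x[1]):
    print(f"{score:+.2f}  {abstract}")
```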
74
Processes to discuss
● develop and fine-tune search strategies
● search databases (PubMed, Embase, and Cochrane)
● articles identified by the search are screened using inclusion and exclusion criteria
● agree on final list of studies to include
● assessment of bias for individual studies
● create GRADE evidence profile tables
75
76
With our new sophisticated technological processes we don't need multiple opinions!
77
78
Processes to discuss
● develop and fine-tune search strategies
● search databases (PubMed, Embase, and Cochrane)
● articles identified by the search are screened using inclusion and exclusion criteria
● agree on final list of studies to include
● assessment of bias for individual studies
● create GRADE evidence profile tables
79
80
Risk of bias table: RCTs
RCT bias assessment
Study | Year | Design | Total Patients | Population | Industry Funding | Allocation: Generation | Allocation: Concealment | Blinding: Participants | Blinding: Assessors | Outcome: Complete | Outcome: Selective | Other Bias
Jones | 2002 | RCT | 152 | OHCA | Partly | Low | Low | High | Low | Low | Low | Unclear
Stevens | 2002 | RCT | 36 | OHCA | No | High | High | High | Low | Low | Low | Unclear
Laurence | 2005 | RCT | 74 | OHCA | No | Low | Low | High | High | Low | Low | Unclear
Zhang | 2005 | RCT | 188 | OHCA | Yes | High | High | High | High | Low | High | High
Lopez | 2012 | RCT | 34 | OHCA | No | Low | Low | High | Low | Low | Low | Unclear
Simons | 2013 | RCT | 202 | OHCA | No | Low | Low | High | Low | Low | Low | Low
81
Risk of bias assessment
82
• We found that sentences can be successfully ranked by relevance with area under the receiver operating characteristic (ROC) curve (AUC) > 0.98.
• Articles can be ranked by risk of bias with AUC > 0.72.
• Conclusions: We show that text mining can be used to assist risk-of-bias assessments
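To make the idea concrete, here is a toy sketch of ranking sentences by relevance to a single bias domain and reporting an AUC, assuming scikit-learn; the sentences, labels, and model choice are placeholders, not the method used in the cited study.

```python
# Sketch: rank sentences by relevance to one risk-of-bias domain
# (random sequence generation) and report the AUC on the toy data.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.pipeline import make_pipeline

sentences = [
    "Patients were randomised using a computer-generated allocation sequence.",
    "Allocation was concealed using sealed opaque envelopes.",
    "The primary outcome was survival to hospital discharge.",
    "Mean age of participants was 63 years.",
    "Randomisation was stratified by study site.",
    "Baseline characteristics were similar between groups.",
]
# 1 = relevant to the sequence-generation domain, 0 = not relevant
labels = [1, 0, 0, 0, 1, 0]

model = make_pipeline(CountVectorizer(), LogisticRegression())
model.fit(sentences, labels)

# Probability that each sentence supports the bias judgement; a reviewer could
# be shown only the top-ranked sentences instead of reading the full text.
relevance = model.predict_proba(sentences)[:, 1]
print("In-sample AUC (toy data only):", roc_auc_score(labels, relevance))
```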
83
Processes to discuss
● develop and fine-tune search strategies
● search databases (PubMed, Embase, and Cochrane)
● articles identified by the search are screened using inclusion and exclusion criteria
● agree on final list of studies to include
● assessment of bias for individual studies
● create GRADE evidence profile tables
84
85
And our evidence-to-decision framework
86
87
Anything else???
88
Writing up !?!
An add-on programme (RevManHAL) which helps auto-generate the abstract, results and discussion sections of RevMan-generated reviews in multiple languages.
89
Processes to discuss
● develop and fine-tune search strategies
● search databases (PubMed, Embase, and Cochrane)
● articles identified by the search are screened using inclusion and exclusion criteria
● agree on final list of studies to include
● assessment of bias for individual studies
● create GRADE evidence profile tables
90
So where to from now?
91
Attempts at standardization
92
Systematic review of automation practices
Out of a total of 1190 unique citations that met our search criteria, we found 26 published reports describing automatic extraction of at least one of more than 52 potential data elements used in systematic reviews.
We found no unified information extraction framework tailored to the systematic review process, and published reports focused on a limited (1-7) number of data elements.
93
So where are we up to?
• modification of SR processes involves restricting, truncating and/or bypassing one or more SR steps, which may risk introducing bias to the review findings.
• still a dearth of research examining the actual impact of methodological modifications and comparing the findings between RRs and SRs.
94
95
Summary
• So why use technology?
• How can technology help?
• Still need a human touch (with content expertise)
96
97
98
Remember when . . .
99
These data suggest that use of human albumin in critically ill patients should be urgently reviewed and that it should not be used outside the context of rigorously conducted, randomised controlled trials. Overall, the risk of death in patients treated with albumin was 6% (95% confidence interval 3% to 9%) higher than in patients not given albumin. NNH = 16 !!!
100
No content experts!!!
101
102
103
Summary
• So why use technology? It can help substantially!
• How can technology help? It can automate a number of processes (search, retrieve, analyse, incorporate info)
• Still need a human touch (with content expertise): because that is so important
104
105