REG-EAACI Taskforce Report


  1. REG-EAACI Taskforce Report
     REG Summit 2016, Lyon, France, 16 April
     Session: Influencing Guideline Development: the REG/EAACI Taskforce Reports
     Time: 11.45am-12.45pm
     Presenters: Nicolas Roche (Hôpital de l'Hôtel-Dieu, Paris) & Jon Campbell (Skaggs School of Pharmacy, Denver, Colorado)
     On behalf of: Nikos Papadopoulos, Leif Bjermer, Guy Brusselle, Alison Chisholm, Jerry Krishnan, Zoe Mitchel, David Price, Mike Thomas, Eric van Ganse, Maarten van den Berge; helped by Sarah Acaster and Katy Gallop
  2. Taskforce Members
     Leads: Nicolas Roche & Jon Campbell
     Members: David Price, Mike Thomas, Eric van Ganse, George Christoff, Guy Brusselle, Jennifer Quint, Jerry Krishnan, Leif Bjermer, Nikos Papadopoulos, Maarten Van Den Berge
  3. Background
     - RCTs are not sufficient to provide holistic evidence
     - Real-life studies are subject to many sources of bias
  4. Where do observational studies fit in?
     - (Almost) everybody agrees on:
       o Pitfalls of RCTs
       o Need for real-life data
     - Guideline developers are often reluctant to include real-life data
       o Quality issues
       o Need to help readers
     - Quality assessment:
       o Tools required
       o Remaining difficulties in quality assessment (need to help reviewers)
       o Need to improve reporting
  5. SETTING AND DEFINING STANDARDS
     Creating a level playing field to help raise (and guide assessment of) the quality of real-life research
     ATS 2013
  6. Goal
     Create a level playing field and solid foundations for future research that will:
     o Standardize the field
     o Enable benchmarking
     o Assist in assessing the quality of real-life data (including their potential value to clinical practice guidelines)
  7. Framework for integrating evidence
     Conceptual framework of therapeutic research, linking the various types of studies based on ecology of care and population characteristics. Typical positions of the most common study designs are shown, but each can be moved in any direction depending on its specificities.
     Roche N et al. Lancet Respir Med. 2013;1:e29-30.
  8. GRADE classifications: observational studies vs RCTs
     Initial quality rating: RCTs = High; observational studies = Low
     Factors decreasing quality (-): risk of bias, inconsistency, indirectness, imprecision, publication bias
     Factors increasing quality (+): large effect, dose-response, influential residual confounders
     Final rating: High / Moderate / Low / Very Low
     Guyatt et al. PATS 2012 (2007 ATS/ERS Workshop)
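The up/down-rating scheme on this slide can be sketched as a toy model. Note that this is only an illustration: the four-level scale and start ratings come from the slide, but the one-step-per-factor arithmetic and the function name are assumptions; real GRADE ratings are judgement-based, not mechanical.

```python
# Toy sketch of GRADE-style rating (illustrative only; real GRADE is
# a structured judgement, not a point count).

SCALE = ["Very Low", "Low", "Moderate", "High"]
START = {"RCT": "High", "Observational": "Low"}  # initial ratings from the slide

def grade_rating(source, decreasing=(), increasing=()):
    """Start from the design-based rating, move one step down per quality
    concern (e.g. risk of bias) and one step up per strengthening factor
    (e.g. large effect), clamped to the four-level scale."""
    level = SCALE.index(START[source])
    level -= len(decreasing)
    level += len(increasing)
    level = max(0, min(level, len(SCALE) - 1))  # keep within the scale
    return SCALE[level]

print(grade_rating("RCT", decreasing=["risk of bias"]))            # Moderate
print(grade_rating("Observational", increasing=["large effect"]))  # Moderate
```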
  9. Stakeholders
     Researchers; database designers/promoters; guideline developers; policy makers; reviewers (journals, projects); readers
  10. Aims & Objectives
      - Conduct a systematic critical review of the real-life asthma literature published between 2004 and 2013
        o Restricted to comparative effectiveness research
      - Describe the quality of currently available real-life research in asthma
      - Highlight studies worthy of possible integration into asthma-related guidelines and policy decisions
      - Recommend quality targets for the future
        o And topics to address in future observational CER studies
  11. Strategy
      - Agree on / build a quality assessment tool
      - Define a search strategy and perform a review of the literature
      - Apply the quality assessment tool to the retrieved articles
      - Synthesise, discuss and conclude on the quality and potential influence of the current comparative effectiveness literature on future guidelines, and/or the need for additional studies
  12. Taskforce Timelines
      2014-2015
      - Task 1: literature search to identify asthma real-life research articles (chair: N Roche)
      - Task 2: construction of a dedicated quality assessment tool (chair: J Campbell)
      2015-2016
      - Task 3: quality assessment of identified asthma real-life research articles
      - Task 4: report
      - Task 5: disseminate
  13. 2014-2015
      Task 1 (literature search):
      - Formal targeted literature search
      - Poll among Taskforce members and REG members
      - Limit retrieved papers to the four top-priority PICOT questions
      Task 2 (quality assessment tool):
      - Combination of available tools
      - Discussion and final version of the quality assessment tool
  14. 2015-2016
      Task 3 (quality assessment):
      - Quality assessment of articles identified from the literature search
      - Conducted by members of the Taskforce and REG-EAACI network reviewers
      - Results of the quality assessment used to determine which papers could complement RCT results and inform guidelines
      Task 4 (dissemination):
      - Results to be presented at the EAACI meeting
      - Publication (submission ~Q3 2016; journals: Allergy and CTA):
        o A review article (detailed results of the quality assessment)
        o A position paper (informing guidelines)
  15. QUALITY ASSESSMENT TOOL DEVELOPMENT
      Jon Campbell: Skaggs School of Pharmacy, Denver, Colorado
  16. Development Phases
      - Phase I: literature review
      - Phase II: initial tool creation
      - Phase III: Taskforce review and pilot
      - Phase IV: larger pilot & tool finalization/minor modifications
      - Phase V: tool finalization for use; development of an online tool
  17. Phase I: Literature Review
      Many study assessment tools exist:
      - STROBE Statement: checklist of items that should be included in reports of observational studies
      - Quality standards for real-world research: list of quality criteria for observational database comparative studies (Roche et al)
      - Report of the ISPOR Task Force on retrospective databases: a checklist for retrospective database studies
      - GRACE Checklist: quality of observational cohort studies for decision-making support
      - ENCePP Checklist: checklist for study protocols for pharmacoepidemiology
      - Standards in the conduct of registry studies for patient-centered outcomes research (PCORI): review of existing guidelines and literature to develop methodological standards
  18. Phase I: Literature Review
      Purpose of most existing tools:
      o Primarily to standardize best practice and reporting of observational / comparative effectiveness research studies
      Purpose of the REG-EAACI Taskforce tool:
      o A decision aid to assess whether or not a study provides evidence that could inform future guidelines (yes or no). If yes, the tool's criteria can help describe any particular strengths or limitations of the evidence.
  19. Phase I: Literature Review
      - A synthesis of existing tools was presented visually in tabular form
        o Overlap was assessed across tools
      - The Taskforce decided to focus on merging two existing tools:
        o Roche and colleagues
        o ISPOR task force
      - The ISPOR task force tool had already incorporated many of the existing tools in its development
  20. Quality criteria for observational database comparative studies
      Roche et al. Ann Am Thorac Soc, Feb 2014
  21. ISPOR Task Force
      Berger et al. Value in Health 2014
      4 relevance & 28 credibility (yes/no) questions (weaknesses and fatal flaws identified)
  22. Phase I: Roche (REG) and Berger (ISPOR)
      - Roche has 24 relevance and credibility questions
      - ISPOR starts with 4 relevance questions and then moves to 28 credibility questions
      - Much overlap between the two on domains and items
      - Both use yes/no items; ISPOR adds a "can't answer" option (NA, not reported, not enough info, not enough training to answer)
      - ISPOR introduces the concept of fatal flaws and weaknesses
  23. Phase II: Initial Tool Creation
      - Synthesis of pre-existing quality recommendations to develop a first-draft Taskforce Quality Assessment Tool
      - Recognition that asthma specificity is not necessary; a generic tool provides greater utility
  24. Table: Checklist combining the ISPOR and Roche et al. assessments
      (Red = derived from Roche et al.; Green = derived from ISPOR)
      Scoring: each item is answered Yes/No; Yes = 1 point, No = 0.
      Maximum raw score per domain = number of items; adjusted score = #Yes / #Items (as a percentage).

      Background/Relevance (4 items; maximum raw score = 4 pts)
      1. Clear underlying hypotheses and specific research questions
      2. Relevant population
      3. Relevant interventions and outcomes are included
      4. Applicable context (setting/practice pattern)

      Design (8 items; maximum raw score = 8 pts)
      1. Observational comparative effectiveness database study with a priori hypotheses and goals?
      2. (Independent steering committee involved in) a priori definition of study methodology?
      3. Evidence of an a priori protocol, review of analyses, statistical analysis plan, and interpretation of results?
      4. Comparison groups concurrent or justified?
      5. Was a study design used to minimize or account for confounding?
      6. Comparison groups selected to be sufficiently similar to each other (e.g. either by restriction or recruitment based on the same indications for treatment)?
      7. Sources, criteria and methods for selecting participants appropriate to address the study questions/hypotheses?
      8. Registration in a public repository with a commitment to publish results

      Data/Database (3 items; maximum raw score = 3 pts)
      1. High-quality databases that are sufficient to support the study
      2. Was exposure defined and measured in a valid way?
      3. Primary outcomes defined and measured in a valid way?

      Outcomes (6 items; maximum raw score = 6 pts)
      1. Clearly defined primary and secondary outcomes chosen a priori
      2. The use of proxy and composite measures is justified and explained
      3. Validity of proxy measures has been checked
      4. Length of observation: sufficient follow-up duration to reliably assess outcomes of interest and long-term treatment effects?
      5. Patients: well-described inclusion and exclusion criteria, reflecting target patients' characteristics in the real world
      6. Sample size: calculated based on clear a priori hypotheses regarding the occurrence of outcomes of interest and the target effect of the studied treatment versus comparator?
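The per-domain scoring rule on this slide (Yes = 1 point, No = 0; adjusted score = #Yes / #Items) can be sketched as follows. The function name and the example answers are illustrative, not part of the Taskforce tool itself.

```python
# Minimal sketch of the checklist's domain scoring: each domain is a list of
# yes/no answers; the raw score counts the "Yes" items and the adjusted score
# expresses it as a percentage of the domain's items.

def score_domain(answers):
    """answers: list of booleans (True = Yes). Returns (raw score, adjusted %)."""
    raw = sum(answers)                      # Yes = 1 point, No = 0
    adjusted = 100.0 * raw / len(answers)   # #Yes / #Items as a percentage
    return raw, adjusted

# Hypothetical answers for two of the domains above:
domains = {
    "Background/Relevance": [True, True, True, False],  # 4 items
    "Data/Database":        [True, True, True],         # 3 items
}

for name, answers in domains.items():
    raw, adj = score_domain(answers)
    print(f"{name}: raw {raw}/{len(answers)}, adjusted {adj:.0f}%")
```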