Collaborative Database and Computational Models for Tuberculosis Drug Discovery


  • 1. Collaborative Database and Computational Models for Tuberculosis Drug Discovery Sean Ekins Collaborations in Chemistry, Fuquay Varina, NC. Collaborative Drug Discovery, Burlingame, CA. Department of Pharmacology, University of Medicine & Dentistry of New Jersey-Robert Wood Johnson Medical School, Piscataway, NJ. School of Pharmacy, Department of Pharmaceutical Sciences, University of Maryland, Baltimore, MD.

2. In the long history of human kind (and animal kind, too) those who have learned to collaborate and improvise most effectively have prevailed. Charles Darwin 3. Outline

  • Introduction
  • Collaborative Drug Discovery
  • TB Collaborations and Drug Discovery Research
  • Open ADME Models
  • Repurposing FDA approved drugs
  • The Future: Mobile Apps for Drug Discovery

4. Some Definitions. Open innovation: a paradigm that assumes that firms can and should use external ideas as well as internal ideas, and internal and external paths to market, as the firms look to advance their technology (Chesbrough, H.W. (2003). Open Innovation: The New Imperative for Creating and Profiting from Technology. Boston: Harvard Business School Press, p. xxiv). Collaborative innovation: a strategy in which groups partner to create a product and drive the efficient allocation of R&D resources; by collaborating with outsiders, including customers, vendors and even competitors, a company is able to import lower-cost, higher-quality ideas from the best sources in the world. Open source: while open source and open innovation might conflict on patent issues, they are not mutually exclusive, as participating companies can donate their patents to an independent organization, put them in a common pool or grant unlimited license use to anybody; hence some open source initiatives can merge the two concepts. 5. How to do it better? What can we do with software to facilitate it? The future is more collaborative. We have tools but need integration

  • Groups involved traverse the spectrum from pharma, academia, not-for-profit and government
  • More free, open technologies to enable biomedical research
  • Precompetitive organizations, consortia, etc.

A starting point for collaboration: a core root of the current inefficiencies in drug discovery is organizations' and individuals' barriers to collaborating effectively. Bunin & Ekins DDT 16: 643-645, 2011 6. Collaboration is everywhere: major collaborative grants in the EU (Framework, IMI); NIH moving in the same direction? Cross-continent collaboration: CROs in China, India etc., pharmas in the US/Europe; more industry-academia collaboration, "not invented here" a thing of the past; more effort to go after rare and neglected diseases. Globalization and connectivity of scientists will be key. The current pace of change in pharma may not be enough; we need to rethink how we use all technologies & resources. 7. Hardware is getting smaller: room size (1930s), desktop (1980s), laptop (1990s), netbook, phone, watch (2000s); not to scale and not equivalent computing power, illustrates mobility. 8. Free tools are proliferating: models and software becoming more accessible (free, precompetitive efforts, collaboration). 9. Typical Lab: The Data Explosion Problem & Collaborations. DDT Feb 2009 10. Collaborative Drug Discovery Platform

    • CDD Vault: secure web-based place for private data (private by default)
    • CDD Collaborate: selectively share subsets of data
    • CDD Public: public data sets; over 3 million compounds, with molecular properties, similarity and substructure searching, data plotting etc. (see the sketch after this list)
      • will host datasets from companies, foundations etc.
      • vendor libraries (Asinex, TimTec, ChemBridge)
    • Unique to CDD: simultaneously query your private data, collaborators' data, and public data; easy GUI
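
As a rough illustration of the kind of substructure searching such a public data set supports, the hedged RDKit sketch below runs a SMARTS query over a tiny invented compound list; it is not CDD's implementation, and the molecules and query are placeholders.

```python
# Hedged sketch (not CDD's implementation): substructure searching over a
# tiny, invented compound collection with RDKit.
from rdkit import Chem

library = {  # placeholder "public data set": name -> SMILES
    "isoniazid": "NNC(=O)c1ccncc1",
    "pyrazinamide": "NC(=O)c1cnccn1",
    "ethambutol": "CC[C@@H](CO)NCCN[C@@H](CC)CO",
}
mols = {name: Chem.MolFromSmiles(smi) for name, smi in library.items()}

# Substructure query: every library member containing a carboxamide group
query = Chem.MolFromSmarts("C(=O)N")
hits = [name for name, mol in mols.items() if mol.HasSubstructMatch(query)]
print("substructure hits:", hits)  # isoniazid and pyrazinamide match
```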

www.collaborativedrug.com 11. 12. CDD: Single Click to Key Functionality 13. CDD: Mining across projects and datasets 14.

  • Tuberculosis kills 1.6-1.7 million people/yr (~1 every 8 seconds)
  • One third of the world's population infected
  • Multi-drug resistance in 4.3% of cases
  • Extensively drug-resistant TB: increasing incidence
  • No new drugs in over 40 yrs
  • Drug-drug interactions and co-morbidity with HIV
  • Collaboration between groups is rare
  • These groups may work on existing or new targets
  • Use of computational methods with TB is rare
  • Literature TB data (SAR) is not well collated
  • Funded by the Bill and Melinda Gates Foundation

Applying CDD to Build a disease community for TB 15. ~20 public datasets for TB, including Novartis data on TB hits; >300,000 compounds; patents, papers; annotated by CDD; open for anyone to browse: http://www.collaborativedrug.com/register Molecules with activity against TB 16. More Medicines for Tuberculosis: CDD is a partner on a 5 year project supporting >20 labs and providing cheminformatics support www.mm4tb.org 17. Fitting into the drug discovery process. Ekins et al, Trends in Microbiology, 19: 65-74, 2011 18. Searching for TB molecular mimics; collaboration: modeling (CDD), biology (Johns Hopkins), chemistry (Texas A&M). Lamichhane G, et al. MBio, 2: e00301-10, 2011 19. Simple descriptor analysis on >300,000 compounds tested vs TB; values are mean (SD), actives defined as ≥90% inhibition at 10 uM, inactives <90% inhibition at 10 uM:

| Dataset | Class (N) | MWT | logP | HBD | HBA | RO5 violations | Atom count | PSA | RBN |
| MLSMR | Active (4096) | 357.10 (84.70) | 3.58 (1.39) | 1.16 (0.93) | 4.89 (1.94) | 0.20 (0.48) | 42.99 (12.70) | 83.46 (34.31) | 4.85 (2.43) |
| MLSMR | Inactive (216367) | 350.15 (77.98)** | 2.82 (1.44)** | 1.14 (0.88) | 4.86 (1.77) | 0.09 (0.31)** | 43.38 (10.73) | 85.06 (32.08)* | 4.91 (2.35) |
| TAACF-NIAID CB2 | Active (1702) | 349.58 (63.82) | 4.04 (1.02) | 0.98 (0.84) | 4.18 (1.66) | 0.19 (0.40) | 41.88 (9.44) | 70.28 (29.55) | 4.76 (1.99) |
| TAACF-NIAID CB2 | Inactive (100931) | 352.59 (70.87) | 3.38 (1.36)** | 1.11 (0.82)** | 4.24 (1.58) | 0.12 (0.34)** | 42.43 (8.94)* | 77.75 (30.17)** | 4.72 (1.99) |

20. Bayesian classification models for TB (good vs bad; active = compounds with MIC < 5 uM). Laplacian-corrected Bayesian classifier models were generated using FCFP-6 and simple descriptors; 2 models, 220,000 and >2,000 compounds. Ekins et al., Mol BioSyst, 6: 840-851, 2010 21. Bayesian classification, dose response (good vs bad). Ekins et al., Mol BioSyst, 6: 840-851, 2010 22. Bayesian machine learning: Bayesian classification is a simple probabilistic classification model based on Bayes' theorem, p(h|d) = p(d|h) p(h) / p(d), where h is the hypothesis or model, d is the observed data, p(h) is the prior belief (probability of hypothesis h before observing any data), p(d) is the data evidence (marginal probability of the data), p(d|h) is the likelihood (probability of data d if hypothesis h is true), and p(h|d) is the posterior probability (probability of hypothesis h being true given the observed data d). A weight is calculated for each feature using a Laplacian-adjusted probability estimate to account for the different sampling frequencies of different features; the weights are summed to provide a probability estimate (see the sketch below). Ekins, Williams and Xu, Drug Metab Dispos, 38: 2302-2308, 2010 23. Bayesian classification TB models (leave out 50% x 100):

| Dataset (number of molecules) | Internal ROC score | External ROC score | Concordance | Specificity | Sensitivity |
| MLSMR all single point screen (N = 220463) | 0.86 ± 0 | 0.86 ± 0 | 78.56 ± 1.86 | 78.59 ± 1.94 | 77.13 ± 2.26 |
| MLSMR dose response set (N = 2273) | 0.75 ± 0.01 | 0.73 ± 0.01 | 66.85 ± 4.06 | 67.21 ± 7.05 | 65.47 ± 7.96 |

24. Additional test sets: FDA drugs (21 hits in 2108 cpds), Novartis data (34 hits in 248 cpds) and the 100K library (1702 hits in >100K cpds). Suggests models can predict data from the same and independent labs; initial enrichment enables screening few compounds to find actives. Ekins and Freundlich, Pharm Res, 28: 1859-1869, 2011; Ekins et al., Mol BioSyst, 6: 840-851, 2010
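
The Laplacian-corrected Bayesian classifier with FCFP-6 fingerprints described above can be approximated with open tools. The hedged sketch below trains a Bernoulli naive Bayes model (alpha=1 gives Laplace smoothing, analogous in spirit to the Laplacian correction) on RDKit Morgan feature fingerprints of radius 3, a common open stand-in for FCFP-6; the SMILES, activity values and labels are invented, not the MLSMR or TAACF data.

```python
# Hedged sketch: a naive Bayes TB activity classifier on FCFP-6-like
# fingerprints, loosely mirroring (not reproducing) the Laplacian-corrected
# Bayesian models described above. All data are invented.
import numpy as np
from rdkit import Chem, DataStructs
from rdkit.Chem import AllChem
from sklearn.naive_bayes import BernoulliNB
from sklearn.metrics import roc_auc_score

# Hypothetical screening records: (SMILES, % inhibition at 10 uM)
records = [
    ("NNC(=O)c1ccncc1", 99.0),          # isoniazid, a known TB drug
    ("NC(=O)c1cnccn1", 95.0),           # pyrazinamide
    ("CC(=O)Oc1ccccc1C(=O)O", 5.0),     # aspirin, assumed inactive
    ("CN1CCC[C@H]1c1cccnc1", 12.0),     # nicotine, assumed inactive
    ("Cc1ccc(S(N)(=O)=O)cc1", 30.0),    # toluenesulfonamide, assumed inactive
    ("O=C(O)c1ccccc1O", 8.0),           # salicylic acid, assumed inactive
]

def fcfp6_like(smiles, n_bits=2048):
    """Morgan fingerprint, radius 3, feature invariants (a rough FCFP-6 stand-in)."""
    mol = Chem.MolFromSmiles(smiles)
    fp = AllChem.GetMorganFingerprintAsBitVect(mol, 3, nBits=n_bits, useFeatures=True)
    arr = np.zeros((n_bits,))
    DataStructs.ConvertToNumpyArray(fp, arr)
    return arr

X = np.array([fcfp6_like(smi) for smi, _ in records])
y = np.array([1 if inhib >= 90.0 else 0 for _, inhib in records])  # 90% cutoff

# alpha=1.0 is Laplace smoothing, analogous to the Laplacian correction
model = BernoulliNB(alpha=1.0).fit(X, y)
scores = model.predict_proba(X)[:, 1]
print("training ROC AUC:", roc_auc_score(y, scores))  # trivially high on toy data
```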

25. Models with SRI kinase library data

  • Bayesian models generated with kinase data [1] (blind testing of previous models showed 3-4 fold enrichment)
  • Models were built as described previously [2]
  • 1. Single point screening data: cutoff for activity, % inhibition at 10 uM ≥ 90%
  • 2. IC50 data: cutoff for active, ≤ 5 uM
  • 3. IC90 data: cutoff for active, ≤ 10 uM and Vero cell selectivity index ≥ 10 (see the sketch after this list)
  • [1] Reynolds RC, et al. Tuberculosis (Edinburgh, Scotland) 2011, in press.
  • [2] Ekins S, et al. Mol BioSystems 2010; 6: 840-51.
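
A hedged sketch of applying these cutoffs to a hypothetical screening table follows; the column names and values are invented, not the SRI or CDD data.

```python
# Hedged sketch: apply the activity cutoffs listed above to a hypothetical
# screening table (column names and values are invented, not the SRI/CDD data).
import pandas as pd

df = pd.DataFrame({
    "compound":       ["cpd-1", "cpd-2", "cpd-3", "cpd-4"],
    "pct_inhib_10uM": [97.0, 45.0, 92.0, 99.5],
    "ic90_uM":        [2.0, 30.0, 8.0, 1.5],
    "vero_cc50_uM":   [50.0, 10.0, 20.0, 80.0],
})

# 1. Single-point label: >= 90% inhibition at 10 uM
df["active_single_point"] = df["pct_inhib_10uM"] >= 90.0

# 3. IC90 label: IC90 <= 10 uM AND Vero selectivity index (CC50 / IC90) >= 10
df["selectivity_index"] = df["vero_cc50_uM"] / df["ic90_uM"]
df["active_ic90_si"] = (df["ic90_uM"] <= 10.0) & (df["selectivity_index"] >= 10.0)

print(df[["compound", "active_single_point", "active_ic90_si"]])
```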

26. Models with SRI kinase library data; refining data with cytotoxicity. ROC XV AUC: Model 1 (N = 23797) = 0.89; Model 2 (N = 1248) = 0.72; Model 3 (N = 1248) = 0.77. Leave out 50% x 100 (see the sketch below). Adding cytotoxicity data improves models.

| Dataset (number of molecules) | External ROC score | Internal ROC score | Concordance | Specificity | Sensitivity |
| Model 1 (N = 23797) | 0.87 ± 0 | 0.88 ± 0 | 76.77 ± 2.14 | 76.49 ± 2.41 | 81.7 ± 2.96 |
| Model 2 (N = 1248) | 0.65 ± 0.01 | 0.70 ± 0.01 | 61.58 ± 1.56 | 61.85 ± 8.45 | 61.30 ± 8.24 |
| Model 3 (N = 1248) | 0.74 ± 0.02 | 0.75 ± 0.02 | 68.67 ± 6.88 | 69.28 ± 9.84 | 64.84 ± 12.11 |

27. Original TB models: refining data with cytotoxicity. ROC XV AUC: single point = 0.88; dose response = 0.78; dose response + cytotoxicity = 0.86. Leave out 50% x 100. Ekins et al., Mol BioSyst, 6: 840-851, 2010

| Dataset (number of molecules) | External ROC score | Internal ROC score | Concordance | Specificity | Sensitivity |
| MLSMR all single point screen (N = 220463) | 0.86 ± 0 | 0.86 ± 0 | 78.56 ± 1.86 | 78.59 ± 1.94 | 77.13 ± 2.26 |
| MLSMR dose response set (N = 2273) | 0.73 ± 0.01 | 0.75 ± 0.01 | 66.85 ± 4.06 | 67.21 ± 7.05 | 65.47 ± 7.96 |
| NEW: dose response and cytotoxicity (N = 2273) | 0.82 ± 0.02 | 0.84 ± 0.02 | 82.61 ± 4.68 | 83.91 ± 5.48 | 65.99 ± 7.47 |
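
The "leave out 50% x 100" validation used above repeatedly splits the data in half, trains on one half and scores the other, then averages the ROC AUC. A hedged sketch on synthetic data, with a Bernoulli naive Bayes standing in for the published Bayesian models:

```python
# Hedged sketch of "leave out 50% x 100" validation: repeat a stratified 50/50
# split 100 times and report mean +/- SD of the ROC AUC. Synthetic binary
# fingerprints stand in for the real screening data.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import BernoulliNB
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n, n_bits = 2000, 256
X = rng.integers(0, 2, size=(n, n_bits))                          # fake fingerprints
y = (X[:, :8].sum(axis=1) + rng.normal(0, 1, n) > 4).astype(int)  # fake labels

aucs = []
for seed in range(100):
    X_tr, X_te, y_tr, y_te = train_test_split(
        X, y, test_size=0.5, stratify=y, random_state=seed)
    clf = BernoulliNB(alpha=1.0).fit(X_tr, y_tr)
    aucs.append(roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1]))

print(f"leave-out-50% x 100 ROC AUC: {np.mean(aucs):.2f} +/- {np.std(aucs):.2f}")
```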

28. Phase I STTR-NIAID funded collaboration with Stanford Research International

  • Combining cheminformatics methods and pathway analysis
  • Identified essential TB targets that had not been exploited
  • Used resources available to both groups to identify targets and molecules that mimic substrates
  • Computationally searched >80,000 molecules and tested 23 compounds in vitro (3 picked as inactives), leading to 2 proposed as mimics of D-fructose 1,6-bisphosphate (MIC of 20 and 40 ug/ml)
  • Proof of concept took <6 months; submitted a Phase II STTR and a manuscript
  • Still need to test vs the target to verify the compounds hit the suggested target

Ekins et al, Trends in Microbiology, Feb 2011 29. http://www.slideshare.net/ekinssean Ekins S and Williams AJ, MedChemComm, 1: 325-330, 2010. Analysis of malaria and TB data 30. TB compound libraries and filter failures: filtering using SMARTS filters to remove thiol reactives, false positives etc., at the University of New Mexico (http://pasilla.health.unm.edu/tomcat/biocomp/smartsfilter). Ekins et al., Mol Biosyst, 6: 2316-2324, 2010 31. Antimalarial compound libraries and filter failures (% failure). Ekins and Williams, Drug Disc Today, 15: 812-815, 2010 32. Correlation between the number of SMARTS filter failures and the number of Lipinski violations for different types of rule sets with the FDA drug set from CDD (N = 2804); suggests the number of Lipinski violations may also be an indicator of undesirable chemical features that result in reactivity (see the sketch below). Filter correlations with Rule of 5. Ekins and Freundlich, Pharm Res, 28: 1859-1869, 2011. 33. Summary: computational models based on whole-cell TB data could improve the efficiency of screening; collaborations get us to interesting compounds quickly; availability of datasets enables analyses that could suggest simple rules; a high proportion of compounds fail the Abbott filters for reactivity when compared to drugs and antimalarials; understanding the chemical properties and characteristics of compounds = better compounds for lead optimization. 34. Could all pharmas share their data as models with each other? Increasing data & model access. Ekins and Williams, Lab On A Chip, 10: 13-22, 2010.
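
A hedged illustration of SMARTS-based reactivity filtering and Lipinski counting follows; the two SMARTS patterns and molecules are simplified placeholders, not the University of New Mexico or Abbott filter sets.

```python
# Hedged sketch: flag compounds failing toy reactivity SMARTS filters (not the
# actual UNM/Abbott filter sets) and count Lipinski Rule-of-5 violations.
from rdkit import Chem
from rdkit.Chem import Descriptors

reactive_smarts = {
    "thiol": "[SX2H]",                 # free thiol
    "acyl_halide": "C(=O)[Cl,Br,I]",   # acyl halide
}
filters = {name: Chem.MolFromSmarts(s) for name, s in reactive_smarts.items()}

def filter_failures(mol):
    """Names of the toy reactivity filters this molecule fails."""
    return [name for name, patt in filters.items() if mol.HasSubstructMatch(patt)]

def lipinski_violations(mol):
    """Number of Rule-of-5 violations (MWT, logP, HBD, HBA)."""
    return sum([
        Descriptors.MolWt(mol) > 500,
        Descriptors.MolLogP(mol) > 5,
        Descriptors.NumHDonors(mol) > 5,
        Descriptors.NumHAcceptors(mol) > 10,
    ])

for smi in ["SCCO", "CC(=O)Cl", "CC(=O)Oc1ccccc1C(=O)O"]:  # toy molecules
    mol = Chem.MolFromSmiles(smi)
    print(smi, filter_failures(mol), lipinski_violations(mol))
```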

35. Bridging the Gap

  • Challenge: there is limited access to the ADME/Tox data and models needed for R&D
  • How could a company share data but keep the structures proprietary?
  • Sharing models means both parties use costly software
  • What about open source tools?
  • Pfizer had never considered this, so we proposed a study and Rishi Gupta generated models

36.

  • What can be developed with very large training and test sets?
  • HLM: training 50,000, testing 25,000 molecules
  • HLM (larger set): training 194,000 and testing 39,000
  • MDCK: training 25,000, testing 25,000
  • MDR: training 25,000, testing 18,400
  • Open molecular descriptors/models vs commercial descriptors

Gupta RR, et al., Drug Metab Dispos, 38: 2083-2090, 2010. Open source tools for modeling 37. Massive human liver microsomal stability model: PCA of training (red) and test (blue) compounds shows overlap in chemistry space. Gupta RR, et al., Drug Metab Dispos, 38: 2083-2090, 2010
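
A hedged sketch of that chemistry-space comparison: compute simple descriptors for training and test compounds, project them onto two principal components, and look for overlap. The molecules are placeholders, not the Pfizer sets.

```python
# Hedged sketch: project training and test compounds onto two principal
# components of simple descriptors to eyeball chemistry-space overlap.
# The molecules are placeholders, not the Pfizer sets.
import numpy as np
from rdkit import Chem
from rdkit.Chem import Descriptors
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA

def simple_descriptors(smiles):
    mol = Chem.MolFromSmiles(smiles)
    return [
        Descriptors.MolWt(mol), Descriptors.MolLogP(mol),
        Descriptors.NumHDonors(mol), Descriptors.NumHAcceptors(mol),
        Descriptors.TPSA(mol), Descriptors.NumRotatableBonds(mol),
    ]

train_smiles = ["CCO", "CC(=O)Oc1ccccc1C(=O)O", "c1ccccc1O", "CCN(CC)CC"]
test_smiles = ["CCCO", "CC(=O)Nc1ccc(O)cc1"]

X = np.array([simple_descriptors(s) for s in train_smiles + test_smiles])
Z = PCA(n_components=2).fit_transform(StandardScaler().fit_transform(X))

n_train = len(train_smiles)
print("training PC coordinates:\n", Z[:n_train])
print("test PC coordinates:\n", Z[n_train:])
# Plotting the two groups (e.g. red vs blue) gives the kind of overlap view
# described on the slide.
```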

HLM models with MOE2D and SMARTS keys and with CDK and SMARTS keys (see the sketch after this table):

| | Model 1 | Model 2 |
| Descriptors | 818 | 578 |
| Training set compounds | 193,930 | 193,650 |
| Cross-validation compounds | 38,786 | 38,730 |
| Training R² | 0.77 | 0.79 |
| 20% test set R² | 0.69 | 0.69 |
| Blind data set (2310 compounds) R² | 0.53 | 0.53 |
| Blind data set RMSE | 0.367 | 0.367 |
| Continuous binned to categorical: kappa | 0.42 | 0.40 |
| Sensitivity | 0.24 | 0.16 |
| Specificity | 0.987 | 0.99 |
| PPV | 0.823 | 0.80 |
| Time (sec/compound) | 0.303 | 0.252 |
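
The categorical statistics in the table (kappa, sensitivity, specificity, PPV) follow from binning a continuous prediction into classes; a hedged sketch with an invented threshold and data:

```python
# Hedged sketch: bin a continuous prediction into classes and compute kappa,
# sensitivity, specificity and PPV (threshold and data are invented).
import numpy as np
from sklearn.metrics import cohen_kappa_score, confusion_matrix

y_true_cont = np.array([0.9, 0.2, 0.7, 0.1, 0.8, 0.3, 0.6, 0.05])  # observed values
y_pred_cont = np.array([0.8, 0.3, 0.4, 0.2, 0.9, 0.1, 0.7, 0.30])  # model predictions

threshold = 0.5  # hypothetical class cutoff
y_true = (y_true_cont >= threshold).astype(int)
y_pred = (y_pred_cont >= threshold).astype(int)

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
print("kappa      :", round(cohen_kappa_score(y_true, y_pred), 2))
print("sensitivity:", round(tp / (tp + fn), 2))  # true positive rate
print("specificity:", round(tn / (tn + fp), 2))  # true negative rate
print("PPV        :", round(tp / (tp + fp), 2))  # positive predictive value
```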

38. RRCK permeability and MDR. Open descriptor results are almost identical to commercial descriptors, across many datasets and quantitative and qualitative data; smaller solubility datasets give similar results; provides confidence that open models could be viable. MDCK training 25,000, testing 25,000; MDR training 25,000, testing 18,400. Gupta RR, et al., Drug Metab Dispos, 38: 2083-2090, 2010

| C5.0 model descriptors | RRCK permeability: kappa / sensitivity / specificity / PPV | MDR: kappa / sensitivity / specificity / PPV |
| CDK and SMARTS keys | 0.50 / 0.62 / 0.94 / 0.68 | 0.65 / 0.86 / 0.78 / 0.84 |
| MOE2D and SMARTS keys (baseline) | 0.53 / 0.64 / 0.94 / 0.72 | 0.67 / 0.86 / 0.80 / 0.85 |
| CDK descriptors | 0.47 / 0.59 / 0.93 / 0.67 | 0.62 / 0.85 / 0.77 / 0.83 |

39. Model coverage of chemistry space: combining models may give greater coverage of ADME/Tox chemistry space and improve predictions? (Companies shown: Lundbeck, Pfizer, Merck, GSK, Novartis, Lilly, BMS, Allergan, Bayer, AZ, Roche, BI, Merck KGaA.)

40. Next steps

  • ADME/Tox data crosses diseases
  • Potential to share models selectively with collaborators, e.g. academics, neglected disease researchers
  • We used the proof of concept to submit an SBIR, "Biocomputation across distributed private datasets to enhance drug discovery"
  • Develop a prototype for sharing models securely; collaborate to show how combining data for TB etc. could improve models
  • Phase II: develop a commercial product that leverages CDD

41. Bunin & Ekins DDT 16: 643-645, 2011. A complex ecosystem of collaborations: a new business model. (Diagram: collaborators inside companies, academia, foundations and government each contribute molecules, models, data and IP through collaborative platform/s, with shared IP at the center.) 42. Finding Promiscuous Old Drugs for New Uses

  • Research published in the last six years: 34 studies screened libraries of FDA-approved drugs against various whole-cell or target assays
  • Each study found 1 or more compounds with a suggested new bioactivity
  • 13 drugs were active against more than one additional disease in vitro

43. Finding Promiscuous Old Drugs for New Uses

  • 109 molecules were identified by screening in vitro
  • Statistically more hydrophobic (logP) and higher MWT than orphan-designated products with at least one marketing approval for a common disease indication or one marketing approval for a rare disease, from the FDA's rare disease research database
  • Created structure-searchable databases in CDD

44. 2D similarity search with a hit from screening (see the sketch below); export the database and use it for 3D searching with a pharmacophore or other model; suggest approved drugs for testing (may also indicate other uses if a drug is present in more than one database); suggest in silico hits for in vitro screening. Key databases of structures and bioactivity data: FDA drugs database. Repurpose FDA drugs in silico. Ekins S, Williams AJ, Krasowski MD and Freundlich JS, Drug Disc Today, 16: 298-310, 2011 45. Crowdsourcing project: Off the Shelf R&D. All pharmas have assets on the shelf that reached the clinic; get the crowd to help in repurposing/repositioning these assets. How can software help? Create communities to test; provide informatics tools that are accessible to the crowd to enlarge the user base; data storage on the cloud, integration with public data; the crowd becomes virtual pharma, CROs and the customer for enabling services
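
A hedged sketch of the 2D similarity step: rank a placeholder FDA-drug collection against a screening hit by Tanimoto similarity on Morgan fingerprints and flag close matches for testing. The molecules and the 0.7 cutoff are invented, not the CDD FDA drugs database.

```python
# Hedged sketch: rank a placeholder "FDA drugs" collection against a screening
# hit by 2D Tanimoto similarity; molecules and the 0.7 cutoff are invented.
from rdkit import Chem, DataStructs
from rdkit.Chem import AllChem

fda_drugs = {  # tiny stand-in for a structure-searchable FDA drug database
    "isoniazid": "NNC(=O)c1ccncc1",
    "pyrazinamide": "NC(=O)c1cnccn1",
    "acetaminophen": "CC(=O)Nc1ccc(O)cc1",
    "aspirin": "CC(=O)Oc1ccccc1C(=O)O",
}

def fp(smiles):
    return AllChem.GetMorganFingerprintAsBitVect(
        Chem.MolFromSmiles(smiles), 2, nBits=2048)

hit_fp = fp("NNC(=O)c1ccc(C)nc1")  # hypothetical whole-cell screening hit

ranked = sorted(
    ((name, DataStructs.TanimotoSimilarity(hit_fp, fp(smi)))
     for name, smi in fda_drugs.items()),
    key=lambda pair: pair[1], reverse=True)

for name, sim in ranked:
    flag = "suggest for in vitro testing" if sim >= 0.7 else ""
    print(f"{name:15s} {sim:.2f} {flag}")
```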

46. Tools for Open Science

  • Blogs
  • Wikis
  • Databases
  • Journals
  • What about Twitter and Facebook: could these be used for social collaboration and science?

47. 2020: A Drug Discovery Odyssey. Could our pharma R&D look like this? Massive collaboration networks, software enabled. We are in Generation App. Crowdsourcing will have a role in R&D. Drug discovery possible by anyone with app access. Ekins & Williams, Pharm Res, 27: 393-395, 2010. 48.

  • Make science more accessible => communication
  • Mobile: take a phone into the field/lab and do science more readily than on a laptop
  • GREEN: energy-efficient computing
  • MolSync + DropBox + MMDS = share molecules as SDF files on the cloud = collaborate (see the sketch after this list)
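
A hedged RDKit sketch of the "share molecules as SDF files" step: write structures plus a property field to an SDF that could then be dropped into a shared cloud folder. The file name, compound names and property tag are invented.

```python
# Hedged sketch: write molecules plus a property field to an SDF file that
# could be shared via a cloud folder; names, property tag and path are invented.
from rdkit import Chem

molecules = {
    "isoniazid": ("NNC(=O)c1ccncc1", 99.0),
    "pyrazinamide": ("NC(=O)c1cnccn1", 95.0),
}

writer = Chem.SDWriter("tb_actives.sdf")  # hypothetical shared file
for name, (smiles, pct_inhib) in molecules.items():
    mol = Chem.MolFromSmiles(smiles)
    mol.SetProp("_Name", name)
    mol.SetProp("pct_inhibition_10uM", str(pct_inhib))  # custom SD tag
    writer.write(mol)
writer.close()

# Reading it back (e.g. on a collaborator's machine):
for mol in Chem.SDMolSupplier("tb_actives.sdf"):
    print(mol.GetProp("_Name"), mol.GetProp("pct_inhibition_10uM"))
```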

Mobile Apps for Drug Discovery. Williams et al., DDT, 16: 928-939, 2011 49. www.scimobileapps.com How do you find scientific mobile apps? Development of wikis to track developments in tools. 50. Acknowledgments

  • Antony J. Williams (RSC)
  • Rishi Gupta, Eric Gifford, Ted Liston, Chris Waller (Pfizer)
  • Joel Freundlich (Texas A&M), Gyanu Lamichhane (Johns Hopkins)
  • Carolyn Talcott, Malabika Sarker, Peter Madrid, Sidharth Chopra (SRI International)
  • MM4TB colleagues
  • Chris Lipinski
  • Takushi Kaneko (TB Alliance)
  • Nicko Goncharoff (SureChem)
  • Matthew D. Krasowski (University of Iowa)
  • Alex Clark (Molecular Materials Informatics, Inc)
  • Accelrys
  • CDD: Barry Bunin
  • Funding: BMGF, NIAID
  • Slideshare: http://www.slideshare.net/ekinssean
  • Twitter: collabchem
  • Blog: http://www.collabchem.com/
  • Website: http://www.collaborations.com/CHEMISTRY.HTM

51. Novartis aerobic and anaerobic TB hits. Anaerobic compounds showed statistically different and higher mean descriptor property values compared with the aerobic hits (e.g. molecular weight, logP, hydrogen bond donors, hydrogen bond acceptors, polar surface area and rotatable bond number). The mean molecular properties for the Novartis compounds are in a similar range to the MLSMR and TAACF-NIAID CB2 hits. Ekins and Freundlich, Pharm Res, 28: 1859-1869, 2011.
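
A hedged sketch of this kind of property comparison: compute simple descriptors for two hit sets and compare their means with Welch's t-test. The SMILES below are placeholders, not the Novartis compounds.

```python
# Hedged sketch: compare mean descriptor values between two hit sets with
# Welch's t-test; the SMILES are placeholders, not the Novartis compounds.
import numpy as np
from rdkit import Chem
from rdkit.Chem import Descriptors
from scipy import stats

aerobic_smiles = ["NNC(=O)c1ccncc1", "NC(=O)c1cnccn1", "Cc1ccc(S(N)(=O)=O)cc1"]
anaerobic_smiles = ["CC(=O)Oc1ccccc1C(=O)O", "O=C(O)c1ccccc1O", "CC(=O)Nc1ccc(O)cc1"]

descriptor_fns = {
    "MWT": Descriptors.MolWt,
    "logP": Descriptors.MolLogP,
    "HBD": Descriptors.NumHDonors,
    "HBA": Descriptors.NumHAcceptors,
    "PSA": Descriptors.TPSA,
    "RBN": Descriptors.NumRotatableBonds,
}

def values(smiles_list, fn):
    return np.array([fn(Chem.MolFromSmiles(s)) for s in smiles_list])

for name, fn in descriptor_fns.items():
    a, b = values(aerobic_smiles, fn), values(anaerobic_smiles, fn)
    t, p = stats.ttest_ind(a, b, equal_var=False)  # Welch's t-test
    print(f"{name:4s} aerobic mean {a.mean():7.2f}  anaerobic mean {b.mean():7.2f}  p = {p:.2f}")
```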