2010 Best Practices Competition: Personal and Translational Medicine

Pg 2: BioFortis, Inc. / Merck & Co., Inc.
  Biomarker Data Integration

Pg 14: IO Informatics, Inc. / PROOF / iCAPTURE Centre of Excellence
  Semantic Data Integration, Knowledge Building and Sharing Applied to Biomarker Discovery and Patient Screening for Pre‐symptomatic Heart, Lung or Kidney Failure in Transplantation Medicine

Pg 33: Kettering Health Network
  Dayton Health Konnect (Individual Health Record System)

Pg 36: King Saud University
  A novel algorithm with software support for comparison of gene signatures

Pg 40: LabWare, Inc. / Merck & Co., Inc.
  Clinical Assay and Specimen Initiative

Pg 48: Orion Health / Lahey Clinic
  Medical Applications Portal Installation

Pg 54: Surgical Department, Pamela Youde Nethersole Eastern Hospital
  3D Vision for Surgical Robot

Pg 64: Randox Laboratories
  Determination of a Diagnostic Classifier for transitional cell cancer of the bladder (TCCB) by evaluation of bladder cancer biomarkers in urine and blood of 200 individuals with variable bladder pathologies

Pg 71: Mayo Clinic
  Towards personalized medicine: limiting ventilator‐induced lung injury through individual electronic medical records surveillance
250 First Avenue, Suite 300, Needham, MA 02494 | phone: 781‐972‐5400 | fax: 781‐972‐5425
Published Resources for the Life Sciences
Bio‐IT World 2010 Best Practices Awards
ENTRY FORM

1. Nominating Organization (Fill this out only if you are nominating a group other than your own.)
A. Nominating Organization Organization name: BioFortis, Inc. Address: 9017 Red Branch Rd, Columbia, MD 21045
B. Nominating Contact Person Name: Steve Chen Title: Director of Marketing Tel: (443) 980 8620 Email: [email protected]
2. User Organization (Organization at which the solution was deployed/applied)
A. User Organization Organization name: Merck & CO., Inc. Address: PO BOX 4, WP37C‐301, West Point, PA 19486
B. User Organization Contact Person Name: Guochun Xie Title: Team Leader Tel: (215) 652 4404 Email: [email protected]
3. Project
Project Title: Biomarker Data Integration Name: Guochun Xie Title: Team Leader Tel: (215) 652 4404 Email: [email protected] Team members – name(s), title(s) and company (optional):
‐ Ingrid Akerblom, Executive Director, Merck
‐ Carolyn Buser‐Doepner, OHCS (Oncology Health Care Solution) Program Team Lead, Merck
‐ Mei Hong, Project Leader, Merck
‐ Neil Jessen, Program Manager, Merck
‐ Martin Leach, Executive Director, Merck
‐ Gary Mallow, Director, Merck
‐ Michael Nebozhyn, Data Analyst, Merck
‐ Usha Reddy, Project Leader/Business Analyst, Merck
‐ Brenda Yanak, IT Strategy Leader, Merck
‐ Frank Zhang, Solution Manager, Merck
‐ The BioFortis team
4. Category in which entry is being submitted (1 category per entry, highlight your choice)
Basic Research & Biological Research: Disease pathway research, applied and basic research Drug Discovery & Development: Compound‐focused research, drug safety Clinical Trials & Research: Trial design, eCTD Translational Medicine: Feedback loops, predictive technologies Personalized Medicine: Responders/non‐responders, biomarkers IT & Informatics: LIMS, High Performance Computing, storage, data visualization, imaging technologies Knowledge Management: Data mining, idea/expertise mining, text mining, collaboration, resource optimization
Health‐IT: ePrescribing, RHIOs, EMR/PHR Manufacturing & Bioprocessing: Mass production, continuous manufacturing
(Bio‐IT World reserves the right to re‐categorize submissions based on submission content or in the event that a category is refined.)
5. Description of project (4 FIGURES MAXIMUM):
A. ABSTRACT/SUMMARY of the project and results (150 words max.)
The five‐year Merck‐Moffitt Cancer Center translational research collaboration endeavors to build a molecularly and clinically annotated human tumor database that includes longitudinal clinical data collection and storage. A 2008 Bio‐IT World Best Practices award was presented for the project's first phase, the Biomarker Information Pipeline, which enabled the flow of clinical data from Moffitt to Merck's Clinical Data Repository (CDR). Access to such large and complex data sets relied on specialized IT groups. Phase two of the project, Biomarker Data Integration, provides data in a researcher‐accessible, user‐friendly, integrated data exploration environment (the Biomarker Data Exploration Environment). Phase two democratizes data access for Merck oncology researchers, decreases the time and cost required to access the data, and encourages hypothesis‐driven data exploration. The best practice established is providing researchers with a data usability environment where they can conduct hypothesis‐driven data exploration without having to rely on data managers or other IT professionals.
B. INTRODUCTION/background/objectives
In a 2006 Science article, Dr. Stephen Friend, then Senior Vice President and Franchise Head, Oncology, at Merck, and Dr. William Dalton, CEO of the H. Lee Moffitt Cancer Center in Tampa, Florida, described how emerging genomic technologies can generate new biomarkers that predict how individual cancer patients will respond to various treatments. This revolutionary goal of more personalized cancer care must be driven by Translational Medicine: a continuous cycle of discovery, research, and delivery that depends on the ability to seamlessly capture, integrate, and use data from molecular profiles and clinical information. Development of a translational database can be divided into several steps: (i) clinical data gathering (the responsibility of M2GEN, a wholly‐owned subsidiary of Moffitt responsible for the Merck‐Moffitt collaboration), (ii) data transfer from M2GEN to Merck, (iii) data storage in the Biomarker Data Exploration Environment (BioFortis/Labmatrix), and (iv) linkage of clinical and molecular data. The description in this best practice nomination focuses on (iii), with some reference to (ii) and (iv). The issues of data gathering (i) are outside the scope of this nomination.

Incorporation of the Moffitt clinical data into Merck was divided into two phases: data transfer between Moffitt and Merck, and data dissemination within Merck. In the first phase, the Biomarker Information Pipeline (BIP) project (2007‐2008), the team designed and implemented an automated data flow and integration system that piped clinical data (pathology, baseline diagnosis, and longitudinal treatment and outcome data) from Moffitt to Merck's Clinical Data Repository (CDR). The entire data set is complex, with many domains, and involves weekly updates that modify existing data and add new data, such as new subjects and pathology information.
While the BIP project is an important enabling technology, it is not a complete solution to the challenges faced in the full translational medicine lifecycle. A significant bottleneck prevented Merck from fully
realizing the value of the Moffitt data investment: researchers were unable to directly access the data in an easy, exploratory environment. Several limitations of the CDR restricted the utility of the Moffitt data: (i) access to the Moffitt data in the CDR was only possible via SDD (SAS Drug Development) and SDTM (Study Data Tabulation Model); (ii) certain activities required knowledge of SQL, data transformation, and pivoting; (iii) data were stored in multiple domains whose relationships were not immediately obvious; (iv) data sets transferred to Merck were not all discrete and were sometimes saved as text within a domain, making queries very difficult; and (v) gene expression information gathered from the biospecimens was not stored in the CDR. In short, the data are complex, stored in different places, and require specialized database querying knowledge to access. Since most researchers lack this knowledge, accessing the data is a problem for them. The typical solution is for researchers to use technically facile intermediaries to query data on their behalf. In analyzing this situation, however, it became clear that this process not only required additional people to support the researchers; it was time‐inefficient, prevented ad hoc data exploration, and thus did not allow Merck to fully exploit the value of the data set.

Data exploration: the next big challenge

As shown in the diagram below, acquiring data is only the start of the typical data workflow in pharmaceutical research. The first and last steps, acquisition and statistical analysis, are generally well handled by medium to large organizations. Phase one of the Merck‐Moffitt collaboration, the Biomarker Information Pipeline, was established to address the data acquisition step. But the exploration step is not as well optimized.
In canvassing researchers at Merck and other research organizations, we found that the exploration step is a major time investment, does not meet researchers' needs, and is a bottleneck between acquisition and analysis.
Exploratory queries range from simple to complex. A complex query may involve identifying a group of samples by iterative criteria parameterization (for example, changing age, gender, or test results) so that the data set can then be sent for statistical analysis. Exploration results can lead to the identification of samples for further analysis, or a query may be as simple as "show me everything we know about this list of subjects". The following are examples of complex exploratory queries:
• How many patients in the various data sources in my enterprise have been diagnosed with a given type of cancer, are on a certain medication, and had a certain therapy and outcome? From these patients, how many primary and metastatic tumor samples do we have?
Do we have any matching primary and metastatic samples? Of these samples, what gene expression and genotyping data are available? What happens if I change the age profile from 40‐60 to 50‐60?
• What samples have a given biomarker profile (e.g. ER/PR/Her2)? • What do we know about a list of samples? • What information do I have from other studies that can complement data from my study?
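To make the gap concrete, the first query above can be sketched in a few lines of code. This is purely illustrative: the records, field names, and criteria below are invented for this sketch, not Merck's actual schema; the point is that the query logic itself is simple once the data are integrated and accessible, and that re-parameterizing a criterion (such as the age window) should be a one-line change, not a new data request.

```python
# Hypothetical, simplified records; real clinical schemas differ.
patients = [
    {"patient_id": 1, "age": 45, "diagnosis": "NSCLC", "medication": "drug_A", "outcome": "response"},
    {"patient_id": 2, "age": 55, "diagnosis": "NSCLC", "medication": "drug_A", "outcome": "response"},
    {"patient_id": 3, "age": 62, "diagnosis": "breast", "medication": "drug_B", "outcome": "progression"},
    {"patient_id": 4, "age": 58, "diagnosis": "NSCLC", "medication": "drug_A", "outcome": "response"},
]
samples = [
    {"sample_id": 10, "patient_id": 1, "tumor_type": "primary"},
    {"sample_id": 11, "patient_id": 1, "tumor_type": "metastatic"},
    {"sample_id": 12, "patient_id": 2, "tumor_type": "primary"},
    {"sample_id": 14, "patient_id": 4, "tumor_type": "metastatic"},
]

def cohort(age_lo, age_hi):
    """Patients matching diagnosis, medication, outcome, and an age window.

    The age window is the kind of criterion researchers re-parameterize
    iteratively (e.g. 40-60, then 50-60)."""
    return {
        p["patient_id"] for p in patients
        if p["diagnosis"] == "NSCLC"
        and p["medication"] == "drug_A"
        and p["outcome"] == "response"
        and age_lo <= p["age"] <= age_hi
    }

def sample_counts(patient_ids):
    """Count primary/metastatic samples for a cohort and find patients
    contributing a matched primary + metastatic pair."""
    by_patient = {}
    counts = {"primary": 0, "metastatic": 0}
    for s in samples:
        if s["patient_id"] in patient_ids:
            counts[s["tumor_type"]] += 1
            by_patient.setdefault(s["patient_id"], set()).add(s["tumor_type"])
    matched = sorted(pid for pid, kinds in by_patient.items() if len(kinds) == 2)
    return counts, matched

counts_40_60, matched_40_60 = sample_counts(cohort(40, 60))
counts_50_60, matched_50_60 = sample_counts(cohort(50, 60))  # re-parameterized query
```

Expressed this way, changing the age profile from 40‐60 to 50‐60 is a single edited argument, which is exactly the kind of iteration the traditional request-and-wait workflow makes prohibitively slow.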
Hypothesis‐driven data exploration is a highly dynamic process. Exploratory paths change frequently, sometimes converging, sometimes diverging, and often resulting in dead ends. Only a small subset of exploratory results ends up being formally analyzed to derive quantitative insights (statistics, modeling, etc.). Because of this dynamic nature of data exploration, it is critical that the researchers who generate hypotheses, the domain experts, can directly explore the available data space. In reality, however, this is often not the case. For most pharmaceutical researchers to receive answers to questions such as those listed above, they have to go through the traditional workflow shown below:
The above workflow usually requires several days or weeks of preparation by IT and data managers, who must piece together multiple data sources in various formats scattered across several data repositories. In such an environment, researchers are limited to describing a strict, narrow set of data to be compiled by data managers, because of the tremendous effort required for data retrieval and analysis. Sometimes researchers must write justifications for data requests so that the data management group can prioritize the deluge of such requests. A key requirement for effective data exploration is the ability to rapidly and creatively ask, and receive answers to, hypothesis‐driven queries. By exploring the data in this way, researchers themselves, without technical intermediaries, can quickly home in on the questions that the data can answer and save weeks of turnaround time. Furthermore, as researchers become more self‐sufficient in exploring and analyzing the data, the bioinformatics/IT groups can focus on higher‐value, domain‐specific tasks. The diagram below shows graphically how the workflow has started to change at Merck.
The objective of the Biomarker Data Integration project is to realize significant value & insights from data captured in the large Merck‐Moffitt collaboration (~20,000 tumor specimens collected over 5 years with at least 5 year longitudinal clinical data), specifically:
• Allow non‐SDD, non‐SQL access to Moffitt clinical data and corresponding molecular data
• Reduce the cycle time from the formulation of a scientific question to the availability of query results
• Enable end users in the Biostatistics and Research Decision Sciences, Clinical Oncology, Epidemiology, and Molecular Profiling departments to perform analyses that integrate clinical trial data (treatment, outcome) with gene expression data, clinical pathology report data, and other 'omics' data, and allow these end users to run exploratory queries efficiently without depending on SAS programmers to retrieve data sets
• Increase scientific productivity by reducing or removing the technological barrier that prevents researchers from fully utilizing available data sets in the Merck enterprise
C. RESULTS (highlight major R&D/IT tools deployed; innovative uses of technology)

We discovered early on that the CDR (Clinical Data Repository) was not the optimal environment for data exploration; this is not a surprising finding, since the CDR's original purpose was to support clinical trial data reporting rather than exploratory research on standard‐of‐care treatment data. The Merck Biomarker Data Exploration Environment is based on the Labmatrix platform (BioFortis, Columbia, MD). This environment functions as the "Clinical Data Repository for Research". Compared to the original CDR, several aspects of the system are emphasized to support biomarker research:
• agile ‐ research needs change quickly; systems supporting research must evolve with research needs
• exploratory ‐ explore data in ad hoc or even unexpected ways according to the scientific inquiry
• direct ‐ the system must be easy enough for researchers and clinicians to use by themselves
• open ‐ the exploratory environment must be able to integrate with other existing systems, including but not limited to the CDR

By making several key improvements in data integration, data cleaning & standardization, and usability, this data exploration environment introduces a new, repeatable paradigm for how a clinical biomarker team can explore and consume data more efficiently and effectively.

Data integration
During the pilot phase of the data exploration environment project, we tested both a data warehouse approach (with data replication) and a data federation approach. We believe the best practice is a hybrid of the two. In an ideal world, all data sources that need to be explored would be federated into the
exploration environment, so as to reduce data replication and the associated cost of maintaining system interfaces and data consistency between systems. In reality, though, not all existing data sources are appropriate for federation. For example, if an existing system is not organized for ad hoc data exploration, it must first be restructured and optimized into a more suitable and user‐friendly format.

Data standardization & cleaning

We approached data standardization at two levels, global and local. At the global level, we mapped several of the Moffitt clinical data domains (such as pathology test names) against the SNOMED vocabulary. We soon learned that many of the terms in actual clinical data sets are not programmatically mappable to standard terms. To remedy this, we are implementing a tool that allows end users to construct their own "local" standardized mappings over a smaller set of concepts relevant to their needs. This approach provides users with terms that are meaningful to their team without changing the data terms for everyone. Thereafter, both global and local standardization can be applied in query expansion, allowing inquiries to be made in a more semantically meaningful fashion without necessarily changing the original data elements.

One of the key lessons learned is the importance of quickly getting real data in front of the researchers to identify any additional data areas that need to be cleaned and standardized. Only a limited amount of cleaning and standardization can be accomplished by programmers and bioinformaticians alone. The hard problems often can only be identified by researchers using the data. However, these users are usually unable, and often unwilling, to help with data cleaning and standardization for its own sake. Instead, we find the best practice is to encourage and enable researchers to explore data in a hypothesis‐driven fashion as early as possible.
With intuitive access to existing data ‐ even if the data are less than perfect ‐ researchers can explore meaningful use cases and derive important insights. During this process, we observed that optimizing the data structure and data display requires a strong partnership between the bioinformaticians/IT personnel who structure the data and provide vocabulary mappings and the researchers who explore it.
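The local-mapping idea can be sketched in a few lines. The concepts, synonym sets, and records below are invented illustrations (the real global mapping targets SNOMED): a team-level synonym table expands a standardized query term into every raw term it covers, so one query matches inconsistently named source fields while the underlying data stay unchanged.

```python
# Illustrative team-level ("local") vocabulary mapping; the concept names
# and raw terms are hypothetical examples, not the actual Merck/SNOMED tables.
LOCAL_MAP = {
    # standardized concept -> raw terms as they appear in the clinical feed
    "estrogen receptor status": {"ER", "ER status", "estrogen recep."},
    "Ki-67 proliferation index": {"Ki67", "Ki-67", "MIB-1"},
}

def expand_query_term(term):
    """Expand a standardized concept into the set of raw data terms it
    covers, without modifying the original data elements."""
    return {term} | LOCAL_MAP.get(term, set())

def search(records, term):
    """Return records whose test name matches the expanded term set."""
    wanted = expand_query_term(term)
    return [r for r in records if r["test_name"] in wanted]

records = [
    {"test_name": "Ki67", "value": "19%"},
    {"test_name": "MIB-1", "value": "30%"},
    {"test_name": "ER", "value": "positive"},
]
hits = search(records, "Ki-67 proliferation index")  # matches "Ki67" and "MIB-1"
```

Because the expansion happens at query time, a team can maintain its own mapping table without changing the data terms for everyone, which is the design choice described above.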
Usability

Our canvassing of researchers identified the primary challenges of working with data: where are the data, how are data sets related, and how can search criteria be quickly created and edited to support ever‐changing needs? Each of these challenges, and more, was addressed in the Merck Biomarker Data Exploration Environment. Researchers do not need to know which data source contains their data; they simply search with a data term and the system finds the data source for them. Instead of foreign keys and other database terms, the data are organized around familiar concepts, such as patients/subjects and biosamples, that clearly show how they are related. Researchers express their exploration process in a familiar, pathway‐like diagrammatic format. The resulting query diagram is often referred to as a self‐documenting "mind map" that models the "divide‐and‐conquer" approach to problem solving. As an example, the following diagram illustrates the different approaches, before and after the data exploration environment, to return information on participants playing certain roles in a given version
of a specific clinical trial protocol. This query crosses multiple data sources but can still be composed and modified, by people with no SQL knowledge, in less than 10 minutes in the data exploration environment.
Within this exploratory environment, researchers do not have to understand how data sets are organized and interrelated, or the particular availability of any given type of data. All that is required is a specific question. From the question, the researcher is guided through an exploratory process consisting of three simple steps:
1. Use the familiar Google‐like search paradigm to pinpoint data availability and add data sets to the canvas.
2. Drag and drop data sets onto one another to associate multiple lines of inquiry.
3. Select from a list of "Plain English Criteria" to specify the exact form of data association.
Researchers can save their queries, share them, modify them, create shortcuts to frequently run queries, and even save queries that can be parameterized. Results can be exported in multiple formats (such as Excel, tab‐delimited text, and Word) and then used for downstream statistical analysis. The query diagram shown above also makes it easy for any researcher to review a query and understand how it was constructed. The end result is an environment that allows researchers to interact with their data easily, intuitively, and holistically, independent of IT professionals.

Resource optimization

Requirements for managing and exploring clinical biomarker research data are very different from those for running clinical trials. Clinical trial data sets tend to be well controlled and are usually in discrete, structured formats; clinical biomarker research data sets, on the other hand, exist in all forms, including many non‐structured formats. By storing clinical data for biomarker research purposes in Labmatrix instead of a system designed for clinical trials, Merck has eliminated the need to (i) perform complex SDTM mapping, (ii) create multiple SDTM+ dummy field codes due to lack of data field standardization, and (iii) always involve multiple Merck resource departments (e.g., the SDTM, Vocabulary, Coding, and Statistics teams) when data and data structure modifications are required. The Merck Biomarker Data Integration team becomes leaner and more agile, with fewer unnecessary restrictions from regulatory concerns and less inter‐departmental coordination overhead.

D. ROI achieved or expected (200 words max.):
Time and cost savings

We estimate a direct annual cost savings of at least 2 to 3 FTEs by eliminating various resource needs for managing data in the CDR. Additional indirect savings result from (i) enabling researchers to retrieve data in real time instead of waiting for days or weeks, and (ii) optimizing the researchers' experience through centralized, direct data access, instead of either learning the CDR, SDD, and SDTM themselves or waiting in line for help from bioinformaticians/IT personnel familiar with these systems.

Scientific insights

Researchers are able to ask more questions, combine and view data in novel ways, and ask questions that were impossible to answer before. The scientific value can be immeasurable.
Cultural change

We observe a cultural change catalyzed by researchers' ability to directly explore large‐scale data sets in a hypothesis‐driven fashion. Before generating new data, they are now more likely to ask: do we already have data for this question? Before they resort to phone calls and emails to find out "who has
the data", they go to the central location to look for data. This results in more appropriate knowledge management and a greater focus on mining and analysis than on sheer data generation.
E. CONCLUSIONS/implications for the field.

The Biomarker Data Integration project addressed several underserved areas observed both within Merck and at other biopharma companies. With the availability of a centralized data exploration system that researchers can use directly, without having to rely on programming support, together with the ability to interact holistically with complex clinical and ‐omics data streams, Merck scientists have derived significant ROI. Moreover, highly skilled IT professionals are increasingly relieved of the daily barrage of ad hoc requests for data, allowing them to focus on supporting data integration and other infrastructural needs that can benefit multiple projects, rather than the needs of individual scientists.
We have now applied the data exploration environment to other areas outside the Merck‐Moffitt collaboration. We have learned that better support for hypothesis‐driven data exploration is a common need and an area that researchers across many departments at Merck are struggling with. Many of the best practices described in this nomination are applicable (and are already being applied) to other therapeutic areas and to other steps in the drug discovery/development workflow (such as in vivo and in vitro studies).
6. REFERENCES/testimonials/supporting internal documents (If necessary; 5 pages max.)
Case Study

In support of one of Merck's PI3K pathway inhibitor clinical programs, clinical researchers were seeking information on Ki‐67 status, a marker of tumor cell proliferation, to determine whether Ki‐67 status can be used as an early predictor of response in specific hormone‐receptor sub‐populations of breast cancer. The literature, published clinical trial data, and the Merck‐Moffitt Total Cancer Care database were interrogated to obtain epidemiology information on the amount and prevalence of Ki‐67 staining across receptor sub‐populations of breast cancer. Specifically, scientists asked the following set of questions within our developing database:
i. What is the distribution of Ki‐67 staining as a function of ER (estrogen receptor) status?
ii. What is the frequency of high Ki‐67 (>13.25%) staining in ER+ Her2‐ breast cancer patients?
iii. What is the prevalence of high Ki‐67 in ER+ patients who have failed two prior endocrine and one prior chemotherapy treatment(s)?
iv. Are samples available from patients with ER+ Her2‐ receptor status for obtaining additional Ki‐67 biomarker data?

In the current data feeds from M2GEN, Ki‐67 data entries are stored in a non‐discrete, text format. Examples include "non‐amplified; Ki‐67 30% unfavorable", "Ki67 ‐ 19%", and "Ki‐67 50 percent, unfavorable". M2GEN is in
the process of abstracting this unstructured data into discrete formats. In the meantime, however, the non‐discrete text format made it very difficult for anyone other than database experts to search for these data within the existing databases at M2GEN and within the clinical data repository (CDR) at Merck.
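The kind of abstraction described here can be approximated with a simple pattern match. The sketch below is our illustration, not M2GEN's actual pipeline: it pulls a numeric Ki‐67 percentage out of free‐text entries shaped like the examples quoted above.

```python
import re

# Matches variants such as "Ki-67 30%", "Ki67 - 19%", "Ki-67 50 percent".
# A hypothetical pattern for illustration; real abstraction pipelines handle
# many more variants and validate against the source record.
KI67_RE = re.compile(r"ki.?67\D*?(\d+(?:\.\d+)?)\s*(?:%|percent)", re.IGNORECASE)

def extract_ki67(text):
    """Return the Ki-67 percentage found in a free-text entry, or None."""
    m = KI67_RE.search(text)
    return float(m.group(1)) if m else None

examples = [
    "non-amplified; Ki-67 30% unfavorable",
    "Ki67 - 19%",
    "Ki-67 50 percent, unfavorable",
]
values = [extract_ki67(t) for t in examples]  # [30.0, 19.0, 50.0]
```

Once extracted into a discrete numeric field, queries such as "high Ki‐67 (>13.25%)" become a simple comparison rather than a text‐mining exercise.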
However, the flexibility of the Google‐like search in Labmatrix allowed Merck researchers to obtain answers to questions i, ii, and iv above. Question iii cannot be addressed in the database until additional longitudinal data are abstracted and uploaded into Labmatrix. In brief, the database currently includes information from ~1,700 standard‐of‐care‐treated breast cancer patients, of which 2.5% have recorded Ki‐67 data. In this set of 46 patients, 35 were ER+ and 11 were ER‐. Within the ER+ population, 17 patients had a Ki‐67 score of <13.25% and 18 patients had a Ki‐67 score of >13.25%. Looking forward, the vocabulary and results data for the Ki‐67 assay are being standardized and will be recorded as discrete elements when available.
In addition to obtaining relevant clinical data, this database search also allowed selection of the most relevant samples for assay development. While Ki‐67 is a standard clinical immunohistochemistry assay, its prospective use in cancer clinical trials as a predictive biomarker is less common. In order to assess assay performance, not just in fresh biopsies but also in archival samples, Merck researchers used the database information obtained in these queries to select 40 archival formalin‐fixed, paraffin‐embedded tumor specimens from randomly selected patients with invasive ER+/Her2‐ disease. This sample set is now being used to determine assay performance, sample quality, and archival sample availability.
‐ Carolyn Buser‐Doepner, OHCS (Oncology Health Care Solution) Program Team Lead, MRL

Testimonials

"With the data in Labmatrix and limited guidance in how to form the query, I was able to get what I needed in ten minutes, compared to 5 hours without Labmatrix."
‐ Mark Morris, Molecular Profiling Program Manager, MRL

"I have been trying to get pathology data for the last couple of weeks, by emailing and calling people. What we did in the last 15 minutes with the Biomarker Data Exploration Environment got me the exact data set that I was looking for. It is very easy."
‐ Razvan Cristescu, Sr. Research Scientist, MP Data Analysis, MRL

"My data sets used to be kept in files and folders. We were only able to look at data for one study at a time. The Biomarker Data Exploration Environment allowed me to easily compare and mine data across multiple studies for the first time."
‐ Chi‐Sung Chiu, Program Team Member, Musculo‐Skeletal, MRL
Publications
Dalton, William S., and Stephen H. Friend. 2006. "Cancer Biomarkers—An Invitation to the Table." Science 312 (5777): 1165‐1168.
Bio‐IT World 2010 Best Practices Awards
1. Nominating Organization (Fill this out only if you are nominating a group other than your own.)
A. Nominating Organization Organization name: IO Informatics, Inc. Address: 2550 9th Street, Suite 114, Berkeley, CA 94710‐2552, U.S.A.
B. Nominating Contact Person Name: Erich A. Gombocz Title: VP, CSO Tel: +1 (510) 705‐8470 Email: egombocz@io‐informatics.com
2. User Organization (Organization at which the solution was deployed/applied)
A. User Organization Organization name: PROOF / iCAPTURE Centre of Excellence Address: University of British Columbia – St. Paul’s Hospital, 1081 Burrard Street, Vancouver, British Columbia V6Z 1Y6, Canada
B. User Organization Contact Person Name: Raymond T. Ng Title: Chief Informatics Officer, PROOF Centre of Excellence Tel: +1 (604) 822‐2394 Email: [email protected]
3. Project
Project Title: "Semantic Data Integration, Knowledge Building and Sharing Applied to Biomarker Discovery and Patient Screening for Pre‐symptomatic Heart, Lung or Kidney Failure in Transplantation Medicine" Team Leader Name: Bruce M. McManus, MD, PhD, FRSC, FCAHS Title: Director, PROOF Centre of Excellence for Commercialization and Research;
Director, The James Hogg iCAPTURE Centre for Cardiovascular and Pulmonary Research; Director, Providence Heart + Lung Institute at St. Paul’s Hospital
Tel: +1 (604) 806‐8586 Email: [email protected] Team members – name(s), title(s) and company (optional):
4. Category in which entry is being submitted (1 category per entry, highlight your choice)
Basic Research & Biological Research: Disease pathway research, applied and basic research Drug Discovery & Development: Compound‐focused research, drug safety Clinical Trials & Research: Trial design, eCTD Translational Medicine: Feedback loops, predictive technologies Personalized Medicine: Responders/non‐responders, biomarkers IT & Informatics: LIMS, High Performance Computing, storage, data visualization, imaging technologies Knowledge Management: Data mining, idea/expertise mining, text mining, collaboration, resource optimization
Health‐IT: ePrescribing, RHIOs, EMR/PHR Manufacturing & Bioprocessing: Mass production, continuous manufacturing
(Bio‐IT World reserves the right to re‐categorize submissions based on submission content or in the event that a category is refined.)
5. Description of project (4 FIGURES MAXIMUM):

A. ABSTRACT/SUMMARY of the project and results (150 words max.)
The PROOF Centre of Excellence focuses on heart, lung, and kidney failure, growing epidemics in Canada and around the world. Its objective is to develop, commercialize, and implement non‐invasive tests for the prevention, diagnosis, prediction, management, and treatment of diseases associated with organ failure. Biomarker‐based tests help tailor individual treatments, advancing towards "personalized health care". This requires meaningful functional integration of multiple experimental data, such as gene and protein expression and tissue biopsy findings, in the context of the biological system, both to help understand the complex processes involved in organ failure and to qualify and assist in the validation of putative biomarkers by extending the network with curated public reference data. The knowledge built is applied in the form of "Applied Semantic Knowledgebases" (ASK) containing arrays of models for acute rejection and non‐rejection, providing confident decision support when screening patients for pre‐symptomatic organ failure, thereby minimizing or avoiding invasive and expensive biopsies and improving patients' quality of life.
B. INTRODUCTION/background/objectives
The PROOF Centre of Excellence focuses on programs for the development of biomarkers for heart, lung and kidney failure, as well as new blood assays for genes, proteins and metabolites. These non‐invasive tests will help tailor individual treatments, advancing towards “personalized health care”. By creating a hub that embraces industry, academia, policy makers, clinicians and patients with wide‐ranging expertise, the PROOF Centre can speed up development of these biomarkers and apply them sooner. On the informatics side, this requires a new paradigm for data integration and knowledge building to provide the foundation for a systems approach to better understand the complex biological functions involved in organ failure, and to provide the necessary tools for confident discovery, selection, qualification and validation of easily accessible biomarkers. Treatment related to organ failure, including everything from open heart surgery and kidney dialysis to organ transplants, is a significant cost to the Canadian and global healthcare systems, accounting for more than 30 percent of system resources. Cardiovascular disease (heart disease, diseases of the blood vessels, and stroke) accounts for the death of more Canadians than any other disease. Every 7 minutes in Canada, someone dies from heart disease or stroke. Globally, cardiovascular diseases are the number one cause of death and are projected to remain so. An estimated 17.5 million people died from cardiovascular disease in 2005, representing 30% of all global deaths. In 2006, 33,832 Canadians were on renal replacement therapy, and this number is expected to double over the next 10 years. Each day, an average
of 14 Canadians learns that their kidneys have failed. Of the 4,195 Canadians on the waiting list for a transplant as of December 31, 2007, 2,963 (71%) were waiting for a kidney.
PROOF’s experimental results in proteomics, metabolomics, microarrays, genotyping and phenotyping needed to be integrated and analyzed in context with an annotated specimen bio‐library and patient information to provide a basis for understanding and applying state‐based combinatorial multi‐marker patterns. The result of this project will be commercializable screens for indications of pre‐symptomatic heart, lung or kidney failure. The project’s implementation has been carried out in several phases:
• A planning and research & development phase, with refinements of the semantic toolsets applied;
• A core server installation of Sentient Data Manager, Sentient Web Query and Process Manager in a limited user testing phase with internal validation (alpha and beta deployment), and the building of ASK arrays for controls;
• A site‐wide full production rollout of the entire Sentient Suite (Sentient Data Manager, Sentient Web Query, Process Manager, Image Interactor, multiple Import Assistants, ASK) with IQ/OQ, training, and secure, compliant access for principal investigators, clinicians and advanced bioinformatics users;
• Application of ASK arrays for patient screening, with weighting of biomarker relevancy and scoring of hit‐to‐fit between patient results and the model.
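The hit‐to‐fit scoring with marker weighting can be pictured with a small sketch. This is a hypothetical illustration, not IO Informatics’ actual algorithm: the marker names, ranges and weights are invented, and the score simply sums weighted deviations from each marker’s expected range, so that a lower score means a better fit to the model.

```python
def hit_to_fit_score(patient, model):
    """Sum weighted deviations of a patient's values from model ranges.

    A lower score indicates a better fit to the model (0 means every
    marker falls inside its expected range).
    """
    score = 0.0
    for marker, (low, high, weight) in model.items():
        value = patient.get(marker)
        if value is None:          # missing marker: penalize by full weight
            score += weight
        elif value < low:          # below range: weighted distance to range
            score += weight * (low - value) / (high - low)
        elif value > high:         # above range: weighted distance to range
            score += weight * (value - high) / (high - low)
    return score

# Illustrative acute-rejection model: marker -> (low, high, weight)
model = {
    "GeneA": (2.0, 4.0, 1.0),
    "GeneB": (0.5, 1.5, 0.5),
    "ProteinX": (10.0, 20.0, 2.0),
}

patient_fit = {"GeneA": 3.1, "GeneB": 1.0, "ProteinX": 15.0}
patient_off = {"GeneA": 6.0, "GeneB": 1.0, "ProteinX": 15.0}

print(hit_to_fit_score(patient_fit, model))  # 0.0 (inside all ranges)
print(hit_to_fit_score(patient_off, model))  # 1.0 (GeneA one range-width high)
```

Relation weights such as those shown as line thickness in the network views could feed a scheme of this general shape, with heavier weights making deviations in the most relevant markers count more.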
As the project involved integration of experimental and clinical data, secure access, audit trails and regulatory compliance were essential in the implementation, setup and configuration of all systems. Privacy concerns and HIPAA compliance needed to be attended to in great detail to ensure that data access and sharing are governed according to all applicable regulations for disclosure of Protected Health Information (PHI). Conceptually, the project was divided into a planning phase, semantic toolset refinements, data access, process management, data integration, the construction of ASK arrays and the building of an applied semantic knowledgebase for decision support. A site‐wide rollout after on‐site user training moved the system from project to production in 2010. The following section describes in detail the results for each phase, the tools deployed, the innovative application of IO Informatics technology and the completion of the deployment.
C. RESULTS (highlight major R&D/IT tools deployed; innovative uses of technology).
The data integration project at PROOF involved all components of IO Informatics’ Sentient Suite in a staged and deliberate implementation process, designed to provide immediate benefits early on and continued adoption through the most common use scenarios for data access and sharing across resources.
Phase I: Ontology building, network visualization
The first phase of the project involved the deployment of the Sentient Knowledge Explorer to Mark Wilkinson’s group, and the creation of a plug‐in API for Knowledge Explorer integration with SADI to be used for the CardioShare initiative.
Result: The Sentient “Knowledge Explorer” enables the identification, characterization and visualization of experimental networks; analysis and integration with published data for understanding; merging and harmonizing of ontologies; synonym mapping for classes and relationships via thesauri; and the creation and qualification of systems‐oriented multi‐source semantic biomarker patterns. These results can be visualized and captured in a SPARQL screening and inference framework to be applied under a knowledge base framework. The API provides access for data transfer, import and visualization via plug‐ins and has been successfully used in SADI for the CardioShare initiative. Deployment completed in January 2009 (KE + SADI)
Phase II: Unified data access to internal databases
This phase comprised the identification of existing data sources at PROOF and their mapping to relational data stores. These sources contained clinical information (blood tests, patient sample annotations, demographics), microarrays (gene expression and genotyping), and imaging (organ biopsies).
Result: The Sentient “Web Query” provides a federated database search and collaboration interface. Through secure browser‐based access it facilitates advanced yet easy searching and secure sharing of data and results, and efficient interaction with analytical resources and applications. At PROOF, the “Web Query” has been extremely useful for identifying samples and for searching and traversing experimental results from a common web interface. Deployed at PROOF March 2009 (WQ)
Phase III: Process management, workflows
During this phase, the Sentient “Process Manager” was installed on the target server system, and administrators were trained in its use. During that time, use cases for process management at PROOF were solicited and defined, specifically for multi‐step pipelines in sample processing. These steps involve patient cohort selection, sample tracking and handling, generation of experimental data from different experimental platforms, quality control checks and statistical evaluation.
Result: The Sentient “Process Manager” provides web‐based task management for multi‐stage experimental procedures and high‐level management of projects via dashboard views of crucial steps in pipelines of experimental and clinical data handling. At PROOF, the “Process Manager” plays an essential role in sample management across multiple analytical steps, tracking the processing of aliquots according to workflow criteria. Deployed at PROOF July 2009 (PM)
Phase IV: Semantic data integration
For cross‐source, cross‐ontology data integration, integration mappings were created in the Sentient “Knowledge Explorer” during this phase of the project. For ambiguity reduction, the IPI (International Protein Index at EBI/EMBL) thesaurus and GPSDP gene‐symbol synonym mapping were applied, and PIs were trained in integration mapping.
Benefits: Extensible ontology classes for genes and proteins; platform‐specific probe sets; a relationship view of a single patient’s biomarker data; parameter‐ and user‐defined relation weighting for use in scoring algorithms
Result: The Sentient “Knowledge Explorer” enables the identification, characterization and visualization of experimental networks, analysis and integration with published data for understanding, and the creation and qualification of systems‐oriented multi‐source semantic biomarker patterns. These results can be visualized and captured in a SPARQL screening and inference framework to be applied under a knowledge base framework, creating state‐based multi‐modal semantic patterns for protein interactions in the treatment of organ transplant patients. The “Knowledge Explorer” is very important to research in qualifying correlations between statistically and algorithmically defined relationships, facilitating hypothesis testing for biologists, who can quickly run SPARQL queries to verify marker selection mechanistically and from a functional‐biology angle; a very important functionality for PROOF. Deployed at PROOF July 2009 (KE)
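As a toy illustration of the synonym‐mapping step used for ambiguity reduction: the sketch below normalizes gene symbols against a small hand‐made thesaurus. The alias table is invented for illustration; the actual deployment used the IPI thesaurus and GPSDP synonym sets rather than anything this simple.

```python
# Illustrative thesaurus: canonical symbol -> set of known aliases.
THESAURUS = {
    "TNF": {"TNFA", "TNF-alpha", "cachectin"},
    "IL6": {"IL-6", "interleukin 6", "IFNB2"},
}

# Invert to a lookup table: any alias (case-insensitive) -> canonical symbol.
_ALIAS_TO_CANONICAL = {
    alias.lower(): canonical
    for canonical, aliases in THESAURUS.items()
    for alias in aliases | {canonical}
}

def normalize_symbol(symbol):
    """Map a symbol from any source to its canonical form, if known;
    unknown symbols pass through unchanged."""
    return _ALIAS_TO_CANONICAL.get(symbol.lower(), symbol)

print(normalize_symbol("TNF-alpha"))  # TNF
print(normalize_symbol("IFNB2"))      # IL6
print(normalize_symbol("BRCA1"))      # BRCA1 (not in thesaurus)
```

Normalizing symbols before integration is what lets measurements of the same gene from different platforms land on a single ontology node instead of fragmenting across aliases.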
Fig.1: Genomic and proteomic biomarker response of Controls (Sub‐Network)
• Ontology extension with public resources (left part): Merging of Pathway ontology (OBO) and GOslim (OBO) to map proteins to functions
• User‐definable weighting for individual marker response scoring (center graph): relationship line thickness indicates weight
• Color‐coded relations (listed in lower left) help identify relationships in the biological network
Fig.2: Experimental biomarker network for Acute Rejection patients, enhanced with public resources
• Dynamically generated ontology (upper left), icon‐identified genes and proteins (center graph)
• Entity relationships and URI details (bottom panel), color‐coded relations (right bottom)
• Relevant references from OMIM and NCBI Protein included for biological functions and mechanistic understanding (the right lower corner of the main graph area shows a control patient)
• Enables biologists to review the relationship network from a mechanistic perspective
Phase V: Building of ASK arrays
During this phase, IO Informatics obtained genomic and proteomic heart‐failure biomarker data and created basic SPARQL queries that match combinatorial biomarker patterns within applicable ranges. We also obtained additional biomarker statistics and created queries that score a subject’s fit to the model, for testing on control patients.
Process to create a SPARQL query directly from the graph: select the genomic and proteomic markers for a specific patient; set the confidence range of expression values for each biomarker; and make the patient a variable to find all matching patients.
Process to build ASK arrays: explore relationships; use an extensible ontology; capture combinatorial multi‐marker classifiers; hide network complexity unless needed; apply the pattern to controls for model validation; and then use ASK for patient screening to predict acute rejection risk.
Benefits: Scientists do not need to know anything about SPARQL; users can pick and choose the subgraph with the relationships they are interested in exploring in their dataset; the query is automatically generated in the background; and it can be saved, reloaded and applied to new datasets as a model for predictive screening.
Result: An “Applied Semantic Knowledgebase (ASK)” makes it possible to reduce biological network complexity through graphical (SPARQL) queries. Arrays of semantic patterns can be used to screen for profiles characteristic of potential heart, lung and kidney failures, and to optimize treatment to avoid them. Deployed at PROOF August 2009 (ASK)
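To make the “query generated in the background” idea concrete, here is a rough sketch of how a set of selected markers with confidence ranges could be turned into SPARQL text: one triple pattern plus one FILTER per marker, with the patient left as a variable. The predicate URIs (ex:hasMeasurement, ex:marker, ex:value) are invented for illustration and do not reflect the actual Sentient ontology.

```python
def build_sparql(markers):
    """markers: {name: (low, high)} -> SPARQL finding all matching patients."""
    triples, filters = [], []
    for i, (name, (low, high)) in enumerate(sorted(markers.items())):
        var = f"?v{i}"
        # One measurement pattern per selected marker (blank-node shorthand).
        triples.append(
            f'  ?patient ex:hasMeasurement [ ex:marker "{name}" ; ex:value {var} ] .'
        )
        # One confidence-range filter per marker.
        filters.append(f"  FILTER ({var} >= {low} && {var} <= {high})")
    return (
        "PREFIX ex: <http://example.org/biomarker#>\n"
        "SELECT ?patient WHERE {\n"
        + "\n".join(triples) + "\n"
        + "\n".join(filters) + "\n}"
    )

query = build_sparql({"GeneA": (2.0, 4.0), "ProteinX": (10.0, 20.0)})
print(query)
```

Because the query is plain text, it can be saved with the model and re-run against new datasets, which is essentially what applying an ASK pattern to an unknown patient population amounts to.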
Fig.3: Model building using graphical SPARQL query generation in Sentient Knowledge Explorer
• Graphical SPARQL query obtained directly via point‐&‐click from the graph
• Combinatorial marker sets (in this example: cyan = genes, green = proteins)
• Ranges and weights for each potential marker for testing and iterative model refinement
Benefits: Semantic data integration puts multi‐modal experimental and public data in context and enables discovery of unknown or hidden relationships; exploration of the biomarker network allows multi‐marker classifiers to be captured; query patterns are derived directly from the graph without requiring any SPARQL or query‐language knowledge; the confidence of the model and the ranges for individual markers can be iteratively refined; and ASK patterns are applied directly to validation and screening of unknown patient populations.
Fig.4: Web‐based ASK (“Applied Semantic Knowledgebase”) arrays for organ failure screening
• Securely accessible via web browser
• Arrays of profiles for Acute Rejection and Non‐Rejection patients
• Result scoring of each patient’s fit to the applied model for confident decision‐making (example: combinatorial markers for 2 Acute Rejection patients, using a high‐confidence array consisting of 5 genomic and 4 proteomic markers; the lowest score indicates the best fit to the model)
Phase VI: Data capture & imaging integration, extended ASK usage
During this phase, the Sentient software was configured for automated import of data from a variety of data types and experimental spreadsheets. Exceptional attention was paid to ensuring compliant data separation during import, and proper privacy and access handling based on user privileges and content. The integration of Aperio’s Digital Biopsy System was initiated to extend the combinatorial experimental datasets (from genomics, proteomics and metabolomics) and the clinical patient data with imaging endpoints. Aperio images are typically very large (500 KB–2 GB per image), so the generation of a reasonably sized loss‐free compressed reference image was required to allow fast web access to images for collaboration. “ASK” arrays have been applied to controls and are used to identify patients suitable or less suitable for validation studies (patient stratification). Once biomarker tests have been approved and deployed, “ASK” will be extremely useful for real‐time patient screening.
Result: The Sentient “Import Assistant” (in interaction with the central server‐based “Data Manager”) automates the capture of data and metadata; the Sentient “Image Interactor” provides fast‐loading reference images accessible via “WebQuery”; and the “WebQuery” enables central searching of image metadata and image analysis results. Started December 2009, ongoing (IA, II)
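As a much‐simplified stand‐in for the reference‐image generation described above (a production pipeline would use a dedicated imaging library and a lossless codec, not Python lists), reducing a large grayscale image to a smaller web preview by block averaging could look like this; the image data is invented.

```python
def downsample(pixels, factor):
    """Block-average a 2D grayscale image (list of lists) by `factor`.

    Each factor x factor block collapses to its mean value; trailing rows
    and columns that do not fill a whole block are dropped.
    """
    h, w = len(pixels), len(pixels[0])
    out = []
    for r in range(0, h - h % factor, factor):
        row = []
        for c in range(0, w - w % factor, factor):
            block = [pixels[r + i][c + j]
                     for i in range(factor) for j in range(factor)]
            row.append(sum(block) / len(block))
        out.append(row)
    return out

img = [[0, 0, 100, 100],
       [0, 0, 100, 100],
       [50, 50, 200, 200],
       [50, 50, 200, 200]]
print(downsample(img, 2))  # [[0.0, 100.0], [50.0, 200.0]]
```

The real requirement in the text is sharper than this (the reference image must be loss‐free once compressed), but the size/speed trade‐off it addresses is the same.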
D. ROI achieved or expected (200 words max.): Cardiovascular disease (heart disease, diseases of the blood vessels and stroke) accounts for the deaths of more than 30% of the Western population. In 2006, there were 33,832 Canadians on renal replacement therapy, and this number is expected to double over the next 10 years. Tissue biopsies are currently the only way to monitor transplant patients for organ rejection. They are also an invasive but necessary tool for individually fine‐tuning the dosage of immunosuppressive drugs required by every transplant patient. Too small a dosage can result in organ rejection and potential organ failure; too much leaves patients susceptible to dangerous infections and cancer. Biopsies are costly, too: heart transplant patients undergo at least a dozen biopsies in the first year after transplant, at a cost of $5,000‐$10,000 each. The ability to predict and diagnose rejection of transplanted organs with a simple, inexpensive blood test significantly reduces the need for biopsies and the burden on the healthcare system.
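The biopsy figures above imply a substantial first‐year monitoring cost per heart transplant patient, which a quick back‐of‐the‐envelope calculation makes vivid. The blood‐test cost used below is purely an assumed placeholder for illustration, not a figure from the submission.

```python
# Figures from the text: at least a dozen biopsies in the first year,
# at $5,000-$10,000 each.
biopsies_per_year = 12
biopsy_cost_low, biopsy_cost_high = 5_000, 10_000

# Assumed placeholder cost for a simple blood test (not a quoted figure).
blood_test_cost = 50

biopsy_total_low = biopsies_per_year * biopsy_cost_low
biopsy_total_high = biopsies_per_year * biopsy_cost_high
blood_total = biopsies_per_year * blood_test_cost

print(f"Biopsy monitoring, first year: ${biopsy_total_low:,}-${biopsy_total_high:,}")
print(f"Blood tests at the same frequency: ${blood_total:,}")
```

Even before counting avoided complications, replacing a fraction of those biopsies with blood tests changes the per‐patient cost by roughly two orders of magnitude.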
E. CONCLUSIONS/implications for the field.
The project is the first of its kind to use site‐wide semantic data technology to meaningfully interconnect multi‐omic experimental data on biological responses with clinical observations and biopsy imaging in order to discover,
qualify and validate combinatorial biomarker panels and to use them to develop, commercialize and implement non‐invasive tests for the prevention, diagnosis, prediction, management and treatment of diseases associated with organ failure. Being able to access, share and apply the knowledge captured in the resulting semantic knowledgebase speeds up the development and application of these biomarkers. By using state‐of‐the‐art data technology to gain insight into the complex and interrelated biological mechanisms involved in organ failure, qualification and validation will be grounded in a better understanding of all available knowledge. This work will not only help improve the health and quality of life of patients, but will also dramatically decrease the financial burden on the health care system. The PROOF Centre’s “Biomarkers in Transplantation” initiative, which is now in its second phase, has identified biomarkers in the blood that will tell the clinical transplant team immediately, using a simple blood test, whether a transplanted organ is being rejected. With a network of partners, the PROOF Centre is testing these combinatorial gene and protein blood biomarker panels for their diagnostic reliability in the care of heart and kidney transplant patients. The next phase of this work will run for two years and will culminate with applications to Health Canada and the US FDA for use of the blood test in clinical care.
6. REFERENCES/testimonials/supporting internal documents (If necessary; 5 pages max.)
References:
(1) McManus B. Searching for better biomarkers of heart risk and disease. Current Cardiovascular Risk Reports 2008 Mar;2(2):79‐81 http://www.springerlink.com/content/1354r4387k7667u2/fulltext.pdf
(2) Barraclough KA, Landsberg DN, Shapiro RJ, Gill JS, Li G, Balshaw RF, Chailimpamontree W, Keown PA for the Biomarkers in Transplantation Team. A matched cohort pharmacoepidemiological analysis of steroid free immunosuppression in renal transplantation. Transplantation 2009 Mar;87(5):672‐680 http://journals.lww.com/transplantjournal/Abstract/2009/03150/A_Matched_Cohort_Pharmacoepidemiological_Analysis.9.aspx
(3) Chailimpamontree W, Dmitrienko S, Li G, Balshaw R, Magil A, Shapiro RJ, Landsberg D, Gill J, Keown PA for the Biomarkers in Transplantation Team. Probability, predictors, and prognosis of posttransplantation glomerulonephritis. Journal of the American Society of Nephrology 2009 Apr; 20(4):843‐851 http://jasn.asnjournals.org/cgi/content/full/20/4/843
(4) Lin D, Hollander Z, Meredith A, McManus B. Searching for “Omic” biomarkers. The Canadian Journal of Cardiology 2009 Jun; 25(A):9A‐14A
http://www.pulsus.com/journals/journalHome.jsp?sCurrPg=journal&jnlKy=1&fold=Home
(5) Lin D, Hollander Z, Ng RT, Imai C, Ignaszewski A, Balshaw R, Cohen Freue G, Wilson‐McManus JE, Qasimi P, Mui A, Triche T, McMaster R, Keown PA, McManus BM for the Biomarkers in Transplantation Team. Whole blood genomic biomarkers of acute heart allograft rejection. The Journal of Heart and Lung Transplantation 2009 Aug;28(9):927‐935 http://www.jhltonline.org/home
(6) Gunther OP, Balshaw RF, Scherer A, Hollander Z, Mui A, Triche TJ, Freue GC, Li G, Ng RT, Wilson‐McManus J, McMaster WR, McManus BM, Keown PA for the Biomarkers in Transplantation Team. Functional genomic analysis of peripheral blood during early acute renal allograft rejection. http://pt.wkhealth.com/pt/re/transplantation/abstract.00007890‐200910150‐00015.htm
(7) Gombocz EA Changing decision‐making in life sciences & personalized medicine: Applied Semantic Knowledgebases (ASK ®) at work!, Lecture at Planet Connect Life Science Symposium: Advances Towards Personalized Medicine, Claremont Resort and Spa, Berkeley, CA, November 19, 2009 http://lifescience.planetconnect.com/program/talkdetails?talk=a0G60000003B18k
(8) Stanley RA, Rhoades Z, Gombocz EA: Applied Semantic Knowledgebases (ASK): Changing how knowledge is built and applied in Life Sciences and personalized medicine. Poster at Adapt 2009 Accelerating Development & Advancing Personalized Therapy, Grand Hyatt Washington, Washington DC, September 22‐25, 2009 http://www.io‐informatics.com/news/pdfs/Adapt200909.pdf
(9) Gombocz EA, Rhoades Z: Predictive Toxicology: Applied Semantics with major implications towards safer drugs. Poster at SemTech 2009 Semantic Technology Conference, The Fairmont Hotel, San Jose, CA, June 14‐18, 2009 http://www.io‐informatics.com/news/pdfs/SemTech2009_poster20090602.pdf
Testimonials: (1) Raymond Ng – *attached as PDF (2) Mark Wilkinson – *attached as PDF
External articles:
(1) BioIT World Magazine, September‐October 2009 issue, September 15, 2009 IO Informatics’ Working Solution http://www.bio‐itworld.com/issues/2009/sep‐oct/IO‐informatics.html
(2) BiotechDaily, September 16, 2009 Informatics Software Application Challenges Researchers to “Ask for Knowledge from Data” http://biotechdaily.com/?option=com_article&Itemid=294725412&cat=Lab%20Technologies
Other supporting documents:
The National Center for Biomedical Ontology (NCBO) Seminar Series:
Gombocz E “Practical use with major implications: Applied Semantic Knowledgebase (ASK™) for Predictive Biology” August 5, 2009 http://www.bioontology.org/ASK
IO Informatics Press Releases:
February 23, 2009 IO Informatics adds new members to its Personalized Medicine Informatics Working Group
January 22, 2009 IO Informatics ‐ CardioSHARE to give advanced technology keynote talk at 2009 Conference on Semantics in Healthcare & Life Sciences ‐ (C‐SHALS)
November 12, 2008 IO Informatics announces formation of Personalized Medicine Working Group
Jan 6, 2010
Dear Selection Committee,
It is my pleasure to write a testimonial and recommendation for the IO-informatics Sentient Knowledge Explorer (KE) to be considered for the 'Best Practices Award'. It is a particular pleasure because the “Best Practices” aspect of the Knowledge Explorer is what excites me most about this tool!
My laboratory studies, among other things, standards and best-practices for bioinformatics data and service interoperability, and I sit on the advisory boards of several international standards projects in the Semantic Web space. My students and I are continually confounded by software and tools that do not utilize standards correctly, and/or do not implement them fully. The KE is the first visualization system we have found that truly interprets Semantic Web-based data “correctly”, utilizing all of the features that make SW-based technologies so potentially powerful! As an anecdote, in our early days using the KE, we used to assemble complex datasets with large amounts of metadata and annotations using the full breadth of the W3C Semantic Web standards. We used to make friendly bets about whether KE would be able to “properly” render the data, and almost without exception we were pleasantly surprised! As a result, KE has now become the de facto tool within which we examine our integrated datasets, and is so reliable that we now assume that there is a problem with the data itself if KE is unable to render it correctly! This truly speaks to KE's adherence to Best Practices at the technical level.
Beyond being standards-compliant, which in itself is rare for an emergent Semantic Web tool, the KE developers have spent extraordinary amounts of time determining end-user data query and exploration needs, and constructing visualizations that can accommodate those needs. I have participated in their focus-groups, where they once again follow Best Practices in Human-Computer-Interaction R&D by clearly mapping-out needs and requirements, generating use-cases, then iteratively modifying the interface until the end-users are comfortable with their ability to resolve these use-cases, and understand when they have found the answer. This focus on the non-technical end-user has allowed us to make KE the cornerstone data exploration technology for our entire institute, in particular for the high-dimensional and excruciatingly complex cardiovascular disease biomarker studies. KE's ability to consume and intuitively represent a wide variety of data-types - from images to quantitative data - and more importantly, display that data in ways that make the significant features immediately obvious to our biologist end-users, has allowed us to move to a completely new level of data analysis at the iCAPTURE Center. Of particular note are the KE's graphical query-building tools – an interface that our lab now believes is the most pragmatic way of bringing the full power of database queries into the hands of biologist end-users without requiring them to learn a query language. The built-in ontology management and thesaurus tools allow us to pursue our interests in data interoperability and integration without having to hard-code each resource-wrapper to compensate for its local ontology – again, the best practices in software engineering exhibited by KE ensure that this layer of semantics is distinct and separate from the data and the visualization layers.
Finally, our personal/scientific interactions with the IO Informatics team have led to novel research and development ideas, resulting in IO-informatics creating a “plug-ins” API for the KE system. Once again, the recent addition of an external API follows best practices in open software development, allowing end-users to extend and manipulate the KE interface in ways that the KE developers had not anticipated. Such productive and mutually beneficial partnerships between researchers and vendors are rare; we are thrilled to have found a company that has both the deep expertise and understanding necessary to design a product like the KE, as well as the raw scientific interest necessary to pursue new and exciting exploratory extensions of their own technology!
I highly recommend the Sentient Knowledge Explorer for the Best Practices Award!
Sincerely,
Mark [email protected] Professor, Medical GeneticsPI Bioinformatics, Heart + Lung Institute, iCAPTURE CenterSt. Paul's HospitalUniversity of British ColumbiaVancouver, BC, Canada
Bio‐IT World 2010 Best Practices Awards
Nominating Organization city: Kettering
Nominating Organization state: OH
Nominating Organization zip: 45429
User Organization name: Kettering Health Network
User Organization address: 3535 Southern Blvd.
User Organization city: Kettering
User Organization state: OH
User Organization zip: 45429
User Organization Contact Person: James Lewis III
User Organization Contact Person Title: Manager of the Individual Health Record System
User Organization Contact Person Phone: (937) 395‐8522
User Organization Contact Person Email: [email protected]
Project Title: Dayton Health Konnect (Individual Health Record System)
Entry Category: Health‐IT
Introduction: The Dayton Health Konnect Individual Health Record System (IHR) is a next‐generation information solution for health care organizations, providers, and patients. Although its roots are at Kettering Health Network, a health system consisting of five hospitals that serve the Dayton, OH metropolitan area, the unique and innovative architecture of, and demand for, this product has provided many opportunities for major expansion. The foundation of the IHR project is the partnership between healthcare technology solutions firm CentriHealth, which developed and owns the IHR technology, and Kettering Health Network. The result of this joint venture between vendor and health care organization is a state‐of‐the‐art information system that gives its end users and affiliated providers a health record personal to each individual that is fully sourced and pre‐populated from external clinical and financial source systems. Data received from various sources is transformed using a unique Health Ontology into useful clinical records. The access of these
records by users and physicians enhances the collaborative patient care process by coordinating care in real time among providers, individuals and healthcare organizations.
Results: Dayton Health Konnect is the name of the pilot individual health record system, which includes both a provider and a patient portal; it was initiated in October 2006 and put into production in March 2007. Since its inception, the IHR has grown steadily in both scale and user adoption. Current features include a comprehensive listing of Diagnosed Health Conditions, Medications, Health Care Assessments, Lab, and X‐ray results. Also accessible are dictated reports for both inpatient and ambulatory venues. Though phase I focused primarily on individual use of the IHR, phase II of the project aims to address physician adoption. In conjunction with the Anthem Health Plan and CentriHealth, Kettering Health Network plans on implementing Electronic Health Record functions of the IHR for physicians, including E‐Rx. CentriHealth’s IHR product has been certified by Surescripts for transmission and receipt of E‐Rx transactions. Because medication therapy is often a large component of an individual’s treatment, it is important to include these features in the IHR. Dayton Health Konnect plans to fully implement E‐Rx in 2010. Two independent studies of the pilot have verified that the IHR demonstrably drives patient behavior change at scale, resulting in better outcomes, better compliance, and appreciably reduced costs. Driven by Frank Perez, CEO of Kettering Health Network, the hospital system enacted a structured marketing program and has been able to register 3,200 employees, accounting for 70% of Kettering’s self‐insured health plan membership. Administration used many different methods, including incentives for IHR usage, to make the project a part of KHN’s corporate culture.
The KHN IHR project has served as an alpha site for CentriHealth’s product, which is expanding via Anthem to the Indianapolis market in 2010. Delaware‐based research group HealthCore Inc. conducted a study on data from the KHN pilot and discovered that IHR users with diabetes, hypertension and high cholesterol had significantly higher utilization rates of the system. The study concluded that KHN diabetics using the IHR had higher screening rates for A1c, renal function, and lipid tests, along with lower rates of ER visits, in comparison to their counterparts who did not use the system. Due to these statistics, Kettering Health Network has tied various internal wellness programs to usage of the IHR and the DaytonHealthKonnect website in the hope of promoting preventative care within its employee base. One of the successful ventures has been the rapid growth of the IHR Diabetes Program. Kettering Health Network has cut co‐pays in half on diabetic, cholesterol, and hypertension drugs for individuals who choose to participate in the program. Furthermore, education sessions and meetings with KHN dieticians are also covered by the hospital’s benefit plan for program participants. Compliance is measured by the achievement of acceptable quarterly A1c scores based on HEDIS guidelines. In only one year, this program has grown to over 100 members. This is just one of several outreach ventures that Dayton Health Konnect hopes to expand to the community in 2010. Working in tandem with Anthem/WellPoint, the culmination of this effort will enable Kettering Health Network to maintain and improve upon its status as a leading healthcare provider in the Southwestern Ohio region.
In the spring of 2009, Kettering Health Network signed a hosting agreement with healthcare insurance provider Anthem/WellPoint to add data on an additional 100,000 individuals within the region to Dayton Health Konnect. These new members consist of employees from a group of 150 pre‐determined local businesses that are Anthem‐insured. Anthem/WellPoint is in the process of evaluating metric data from this local “roll‐out,” with the potential for expansion to employers across the country. This phase of the IHR project is in its earliest stages, and Anthem and Kettering Health Network will collaborate with study groups to establish metric data detailing utilization, measurable wellness of registered members, and projected cost savings to the employers. This collaborative effort will also extend event coordination and promotion of the IHR within the local populace. ROI achieved: Conclusions: References:
Bio‐IT World 2010 Best Practices Awards Nominating Organization name: Nominating Organization address: Nominating Organization city: Riyadh Nominating Organization state: Riyadh Nominating Organization zip: 11451 Nominating Contact Person: Nominating Contact Person Title: Nominating Contact Person Phone: Nominating Contact Person Email: User Organization name: King Saud University User Organization address: College of Science, Bld 5, P.O. Box 2455 User Organization city: Riyadh, Saudi Arabia User Organization state: Riyadh User Organization zip: 11451 User Organization Contact Person: Dr. Haseeb Ahmad Khan User Organization Contact Person Title: Chair Professor User Organization Contact Person Phone: +966 1 4674712 User Organization Contact Person Email: [email protected] Project Title: A novel algorithm with software support for comparison of gene signatures Team Leaders name: Team Leaders title: Team Leaders Company: Team Leaders Contact Info: Team Members name: Team Members title: Team Members Company: Entry Category: Personalized Medicine Abstract Summary: Introduction: Gene expression profiling is an important component of personalized medicine that can aid physicians to better understand the cellular morphology, resistance to chemotherapy and overall clinical outcome of disease [1,2]. Such individualized treatment may significantly increase survival due to the optimization of the treatment procedure according to clinical pathogenesis. Ein‐Dor et al. [3] have pointed out that the gene lists sorted for the same clinical types of patients by different groups differed widely and possessed only a few genes in common. An explanation for this lack of overlap between predictive signatures from different studies with the same goal may be the presence of more predictive genes than are required to design an accurate predictor [4]. However, the microarray technique itself has been shown to be highly reproducible within and across two high‐volume laboratories [5].
Numerous statistical procedures including t‐test [6,7], analysis of variance [8], Pearson correlation [9], Wilcoxon signed‐rank test [10,11] and Mann Whitney U test [12,13] have been used for comparison of microarray data. However, the validity of various conventional statistical methods for two‐group comparison of
gene signatures was never evaluated using carefully selected data sets. A novel algorithm with software support is presented herein for a more realistic and comprehensive interpretation of gene signatures. Results: Computational method and theory of CalcHEPI The formula used for computation of the HEPI score is HEPI = Σ [(Ni × Sj) / Nt] × 100, where Ni is the number of genes with score Sj. The subscript ‘i’ may vary between 0 and the total number of genes in the signature, and ‘j’ may vary between 0 (minimum score) and 1 (maximum score). Nt is the total number of genes in the signature. First, all the ratios of expression data are categorized according to a logical scale to obtain the respective Ni and Sj values. The percent contributions of each set of genes (genes with the same expression score) are computed and then summed to obtain the HEPI score. The fold‐change strategy used in HEPI scores is robust, accurate and reproducible. Although the concept of fold‐change has been described in microarray experiments, it has never been utilized for collective interpretation of gene signatures. Technically, the ratio of the color intensity of each spot (probe) measures the relative expression of the corresponding gene under two different experimental conditions. In general, a gene is said to be differentially expressed if the ratio in absolute value of the expression levels between the control and treated groups exceeds certain thresholds. The most widely accepted expression ratios for up‐ and down‐regulated genes have been suggested as >1.5 and <0.5, respectively [11,17,18]. While adopting the same cut‐off values, additional sub‐grading has been proposed in this protocol. HEPI scores are simple to interpret, easy to compare and amenable to visual cross‐checking. Software design The CalcHEPI software has been developed on the Microsoft Excel platform owing to Excel’s flexibility, universal availability, and macro‐based automation.
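The HEPI computation described above can be sketched in Python. This is a minimal sketch under stated assumptions: the >1.5 and <0.5 fold‐change cut‐offs come from the text, but CalcHEPI's exact five sub‐grades per direction and their Sj weights are not reproduced here, so the bin edges below are illustrative assumptions.

```python
def expression_score(ratio):
    """Map a fold-change ratio to an expression score Sj in [0, 1].

    The 1.5/0.5 cut-offs for up-/down-regulation come from the text;
    the five sub-grade bins per direction are assumed for illustration.
    """
    if 0.5 <= ratio <= 1.5:
        return 0.0                      # norm-regulated
    if ratio < 0.5:                     # down-regulated: grade the reciprocal
        ratio = 1.0 / ratio
    for grade, edge in enumerate([2.0, 3.0, 5.0, 10.0], start=1):
        if ratio <= edge:
            return grade / 5.0          # sub-grades 1-4 give Sj = 0.2 .. 0.8
    return 1.0                          # extreme fold change: grade 5, Sj = 1

def hepi(ratios):
    """HEPI = sum [(Ni x Sj) / Nt] x 100, computed gene-by-gene.

    Summing Sj / Nt per gene is equivalent to grouping the Ni genes
    that share each score Sj, as in the formula above.
    """
    nt = len(ratios)
    return sum(expression_score(r) for r in ratios) / nt * 100.0
```

A signature with no differential expression scores 0, and one where every gene shows an extreme fold change scores 100, matching the percent scale described in the text.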
Moreover, the spreadsheet layout of Excel is perfectly suitable for storing and analyzing microarray data as well as for developing microarray analysis software. Data selection is controlled by an input box to allow users to select the paired expression values from anywhere in the worksheet (Fig. 1). The software then utilizes Excel’s worksheet formula function together with a macro subroutine to compute HEPI scores (Fig. 2). The percent contribution of norm‐regulated (green), down‐regulated (blue) and up‐regulated (red) genes is also shown as a color‐coded bar. The output of the software provides a comprehensive understanding of the results in terms of both qualitative (up‐ or down‐regulation) and quantitative (gradation in fold‐change) analysis of the gene signature, with a quick‐review indicator bar. The clarity and integrity of the report format are quite helpful for any cross‐evaluation. HEPI scores are valid for any size of array signature because they are calculated according to the percentage (not the number) of differentially expressed genes on a 10‐point scale (5 for up‐regulation and 5 for down‐regulation). Software validation The functional accuracy and reliability of the software have been validated using simulated and real gene signature data for two‐group comparisons. Six pairs of expression data were specifically designed to represent various degrees of similarity/difference (details not shown). Among them, the two groups in pair 4 are not significantly different, whereas the groups in pair 6 possess the maximum difference. All these 6 pairs were subjected to
nonparametric comparisons with Mann‐Whitney U test, Kolmogorov‐Smirnov test, Kruskal‐Wallis test, Wilcoxon signed‐rank test, Sign test, Friedman test and Kendall W test using SPSS (Version 10). The real expression data of published signatures including ovarian carcinoma [14], ulcerative colitis [15], leukemia [16] and adenocarcinoma [6] were also analyzed by the above nonparametric tests as well as CalcHEPI. The characteristics of these real signatures have been summarized in our earlier report [10]. ROI achieved: Conclusions: References: 1. Buckhaults P. Gene expression determinants of clinical outcome. Curr Opin Oncol 2006; 18: 57‐61. 2. Potti A, Dressman HK, Bild A, Riedel RF, Chan G, Sayer R, Cottrill H, et al. Genomic signatures to guide the use of chemotherapeutics. Nat Med 2006; 12: 1294‐300. 3. Ein‐Dor L, Zuk O, Domany E. Thousands of samples are needed to generate a robust gene list for predicting outcome in cancer. Proc Natl Acad Sci USA 2006; 103: 5923‐8. 4. Roepman P, Kemmeren P, Wessels LF, Slootweg PJ, Holstege FC. Multiple robust signatures for detecting lymph node metastasis in head and neck cancer. Cancer Res 2006; 66: 2361‐6. 5. Anderson K, Hess KR, Kapoor M, et al. Reproducibility of gene expression signature‐based predictions in replicate experiments. Clin Cancer Res 2006; 12: 1721‐7. 6. Notterman DA, Alon U, Sierk AJ, Levine AJ. Transcriptional gene expression profiles of colorectal adenoma, adenocarcinoma, and normal tissue examined by oligonucleotide arrays. Cancer Res 2001; 61: 3124‐30. 7. Tanaka TS, Jaradat SA, Lim MK, Kargul GJ, Wang X, et al. Genome‐wide expression profiling of mid‐gestation placenta and embryo using a 15000 mouse developmental cDNA microarray. Proc Natl Acad Sci USA 2000; 97: 9127‐32. 8. Bushel PR, Hamadeh HK, Bennett L, Green J, Ableson A, et al. Computational selection of distinct class‐ and subclass‐specific gene expression signatures. J Biomed Inform 2002; 35: 160‐70. 9.
Bouras T, Southey MC, Chang AC, Reddel RR, Willhite D, et al. Stanniocalcin 2 is an estrogen‐responsive gene coexpressed with the estrogen receptor in human breast cancer. Cancer Res 2002; 62: 1289‐95. 10. Khan HA. ArrayVigil: a methodology for statistical comparison of gene signatures using segregated‐one‐tailed (SOT) Wilcoxon’s signed‐rank test. J Mol Biol 2005; 345: 645‐9. 11. Khan HA. ArraySolver: an algorithm for colour‐coded graphical display and Wilcoxon signed‐rank statistics for comparing microarray gene expression data. Comp Func Genom 2004; 5: 39‐47. 12. Kihara C, Tsunoda T, Tanaka T, Yamana H, Furukawa Y, et al. Prediction of sensitivity of esophageal tumors to adjuvant chemotherapy by cDNA microarray analysis of gene‐expression profiles. Cancer Res 2001; 61: 6474‐9. 13. Rus V, Atamas SP, Shustova V, Luzina IG, Selaru F, et al. Expression
of cytokine‐ and chemokine‐related genes in peripheral blood mononuclear cells from lupus patients by cDNA array. Clin Immunol 2002; 102: 283‐90. 14. Wang K, Gan L, Jeffery E, Gayle M, Gown AM, et al. Monitoring gene expression profile changes in ovarian carcinomas using cDNA microarray. Gene 1999; 229: 101‐8. 15. Dooley TP, Curto EV, Reddy SP, Davis RL, Lambert GW, et al. Regulation of gene expression in inflammatory bowel disease and correlation with IBD drugs: screening by DNA microarrays. Inflamm Bowel Dis 2004; 10: 1‐14. 16. Golub TR, Slonim DK, Tamayo P, Huard C, Gaasenbeek M, et al. Molecular classification of cancer: class discovery and class prediction by gene expression monitoring. Science 1999; 286: 531‐7. 17. Bull JH, Ellison G, Patel A, et al. Identification of potential diagnostic markers of prostate cancer and prostatic intraepithelial neoplasia using cDNA microarray. Br J Cancer 2001; 84: 1512‐19. 18. Wang W, Marsh S, Cassidy J, McLeod HL. Pharmacogenomic dissection of resistance to thymidylate synthase inhibitors. Cancer Res 2001; 61: 5505‐10.
250 First Avenue, Suite 300, Needham, MA 02494 | phone: 781‐972‐5400 | fax: 781‐972‐5425
Published Resources for the Life Sciences
Bio‐IT World 2010 Best Practices Awards
1. Nominating Organization (Fill this out only if you are nominating a group other than your own.)
A. Nominating Organization Organization name: LabWare, Inc. Address: Three Mill Road, Wilmington, DE 19806
B. Nominating Contact Person Name: Steven Neri Title: Director of Business Development Tel: 302‐658‐8444 Email: [email protected]
2. User Organization (Organization at which the solution was deployed/applied)
A. User Organization Organization name: Merck & Co., Inc. Address: UG3D‐70, 351 N. Sumneytown Pike, North Wales, PA 19454‐2505
B. User Organization Contact Person Name: Robert Stelling Title: IT Program Manager Tel: 267‐305‐1693 Email: [email protected]
3. Project
Project Title: Clinical Assay and Specimen Initiative Team Leader Name: Robert Stelling Title: IT Program Manager Tel: 267‐305‐1693 Email: [email protected] Team members – name(s), title(s) and company (optional): Teresa Hesley, Business Process Owner, Merck Brenda Yanak, IT Account Manager, Merck Philip Stipcevich, Solution Manager, Merck Darrel Whyte, Managing Consultant, LabWare
4. Category in which entry is being submitted (1 category per entry, highlight your choice)
• Basic Research & Biological Research: Disease pathway research, applied and basic research
• Drug Discovery & Development: Compound‐focused research, drug safety
• Clinical Trials & Research: Trial design, eCTD
• Translational Medicine: Feedback loops, predictive technologies
• Personalized Medicine: Responders/non‐responders, biomarkers
• IT & Informatics: LIMS, High Performance Computing, storage, data visualization, imaging technologies
• Knowledge Management: Data mining, idea/expertise mining, text mining, collaboration, resource optimization
• Health‐IT: ePrescribing, RHIOs, EMR/PHR
• Manufacturing & Bioprocessing: Mass production, continuous manufacturing
(Bio‐IT World reserves the right to re‐categorize submissions based on submission or in the event that a category is refined.) 5. Description of project (4 FIGURES MAXIMUM):
A. ABSTRACT/SUMMARY of the project and results (150 words max.) Excellence in biomarker research and operations is critical as pharmaceuticals transition from one-drug-fits-all to targeted medicines. Inefficient clinical specimen handling significantly impacts progress and speed in biomarker research. Thus, Merck initiated CASI (Clinical Assay and Specimen Initiative) in late 2007 to improve clinical specimen management. After extensive analysis, planning, improved business processes, and development, the first release of a new CASI IT system is now in place. Integration of clinical trial, patient, consent, and specimen data improves the accessibility of clinical specimens. These benefits enhance research dependent on clinical specimens, including biomarker research and the development of personalized medicines. CASI reduces cycle times in late-stage clinical trials (particularly large trials with complex specimen handling requirements), early-stage clinical decisions, biomarker discovery, variable response identification, and target identification.
B. INTRODUCTION/background/objectives The ability to deliver innovative and differentiated products that answer unmet medical needs and improve patient outcomes is a cornerstone of Merck's vision. Developing drugs for sub-populations is also a strategic imperative. The Clinical Assay and Specimen Initiative (CASI) allows Merck to capitalize on technology advances in biomarker research in order to realize the vision of personalized medicine and targeted therapies.
CASI is an overarching, multi-year Merck initiative that spans R&D, leveraging specimens collected in clinical trials to feed basic research towards development of biomarkers and targeted drug therapies. In addition, CASI alleviates clinical trial bottlenecks due to inefficient specimen tracking, while improving our cost structure and resource utilization around specimen testing, processing and storage. These objectives are achieved via process improvements, new governance mechanisms, new information technology tools, and partnerships with key vendors that enhance our specimen management infrastructure.
CASI provides a foundation for the entire "life-cycle" of clinical specimen management, from planning for specimen collection through ultimate specimen destruction. Key benefits are measured via realization metrics that quantify cycle times, effort, and costs associated with specimen management activities. Specific benefits that CASI provides include:
• Reduced cycle times in tracking specimens and completing clinical trial primary testing and in locating and accessing specimens for post-primary testing
• Reduced costs for clinical assay testing and specimen storage
• Improved quality of specimens via consistent specimen collection, processing, and storage, thus improving laboratory assay data
• Consistent and documented compliance with HIPAA and other regulations governing specimen use, globally
• Informed use of valuable specimens aligned with therapeutic area priorities
C. RESULTS (highlight major R&D/IT tools deployed; innovative uses of technology).
The CASI (Clinical Assay and Specimen Initiative) program utilized Six Sigma methodology to redesign processes and develop tools to improve clinical specimen management at Merck. Merck is implementing CASI in a series of phased "Releases" that include new IT capabilities and business process improvements.
• Quick Wins are CASI process improvements, such as standard templates for patient consent and specimen collection, that were implemented in conjunction with the new Clinical Specimen Management Global Research and Development Procedure (GRDP) issued in 2Q 2009. These were employed without the new IT tool.
• Release 1 of the Clinical Specimen Management System (CSMS) was implemented November 2009 and focused on new late-phase trials that follow standard Merck late stage clinical trials processes. All types (e.g. oncology, vaccines, cardiovascular, etc.) of new Merck late stage clinical trials starting in 2010 leverage CASI processes and CSMS.
• Release 2 focuses on fully outsourced studies as well as external collaborations involving human specimens.
• Release 3 focuses on early phase clinical trials, such as Clinical Pharmacology and Experimental Medicine. Implementation of advanced specimen search capability to enable patient cohort browsing is also planned for Release 3.
• Later Releases may include selected cohorts of legacy specimens where justification exists based on scientific or business value of the specimens.
[Figure: CASI release roadmap. Quick Wins (CASI process improvements): 2Q09. Release 1 (Merck late stage clinical trials): 4Q09. Release 2 (fully outsourced clinical trials as well as external collaborations involving human specimens): 4Q2010. Release 3 (early phase clinical trials, such as for Clinical Pharmacology, and advanced search): 2011. Later Releases (access to the existing inventory of "legacy" specimens): >2011.]
Merck partnered with LabWare to configure the LabWare LIMS v6 application into CSMS. The role of the LabWare application within CASI is to support improved processes for Trial Setup, Primary Testing, and Post-Primary request or disposal of specimens. CSMS is integrated with a central lab, as well as with Merck systems for clinical trial management and clinical patient data. Merck and LabWare used an iterative design
and development approach to prototype, develop, pilot and deploy CSMS in approximately 14 months time from the start of the Merck and LabWare partnership. CSMS Release 1 capabilities include:
• LabWare LIMS configuration of the user interface for trial setup, consent information management, specimen tracking, basic search capabilities and governance of specimens with respect to consent limitations
• CTMS (Clinical Trial Management System) integration to reduce effort and defects in trial setup by reducing transcriptions and pulling information electronically from authoritative sources
• Central lab integrations including trial setup to facilitate statement of work (SoW) and specimen tracking
• CDR (Clinical Data Repository) integration to enable basic search capabilities using SDTM data standards
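The basic search that the CDR integration enables can be illustrated with a toy sketch. STUDYID and USUBJID are standard SDTM identifier variables; the remaining field names, the sample records, and the search helper are hypothetical illustrations, not the actual CSMS interface.

```python
# Hypothetical specimen records keyed by standard SDTM identifiers
# (STUDYID, USUBJID); the other fields are illustrative assumptions.
specimens = [
    {"STUDYID": "P001", "USUBJID": "P001-0001", "specimen_type": "serum",
     "consented_future_use": True},
    {"STUDYID": "P001", "USUBJID": "P001-0002", "specimen_type": "plasma",
     "consented_future_use": False},
    {"STUDYID": "P002", "USUBJID": "P002-0001", "specimen_type": "serum",
     "consented_future_use": True},
]

def find_specimens(records, **criteria):
    """Return records matching every supplied field=value criterion."""
    return [r for r in records if all(r.get(k) == v for k, v in criteria.items())]

# e.g. serum specimens available for future-use research:
hits = find_specimens(specimens, specimen_type="serum", consented_future_use=True)
```

Keying searches on standardized SDTM variables is what lets a single query span trials, which is the point of routing search through the CDR rather than per-trial systems.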
CSMS Release 1 benefits include:
• Improve SoW process with one Central Lab with less effort and less rework.
• Track consent agreements with IRB/ERCs/clinical sites (i.e., the limits of specimen use), as well as patient-level consents for specimen use (i.e., who agreed to participate), to reduce discrepancies, as well as the time and effort to request specimens for unplanned tests.
• Ensure compliance with consent agreements.
• Track shipment, receipt, chain of custody, issues, and requests for specimens for both main study testing and post-primary use of specimens.
• Increase operational efficiencies and reduce cycle times related to biobank inventory management.
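The consent-tracking benefits above can be illustrated with a minimal sketch of the kind of check such a system automates before releasing a specimen. The class names, consent categories, and approval rule here are hypothetical, not Merck's actual CSMS data model.

```python
from dataclasses import dataclass

@dataclass
class Specimen:
    specimen_id: str
    trial_id: str
    consented_uses: frozenset  # uses the subject agreed to, e.g. {"primary", "future_use"}

@dataclass
class SpecimenRequest:
    specimen_id: str
    intended_use: str          # e.g. "primary" or "future_use"

def approve_request(specimen: Specimen, request: SpecimenRequest) -> bool:
    """Approve a request only when the intended use falls within the
    subject's documented consent -- the kind of compliance check that
    would otherwise require manual review of consent paperwork."""
    return (request.specimen_id == specimen.specimen_id
            and request.intended_use in specimen.consented_uses)
```

A request for an unplanned post-primary test against a specimen whose subject consented only to primary testing would be rejected automatically, which is where the reduction in discrepancies and review effort comes from.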
[Figure: CSMS integration overview. The Clinical Specimen Management System (CSMS) comprises Trial Set-Up, Directory of Services, Specimen Tracking, and Request/Disposal functions, and is integrated with the Central Lab Vendor, the Clinical Trial Management System (CTMS), the Clinical Data Repository (CDR), and Electronic Data Capture (EDC). Key data flows:
• Initial trial information flows from CTMS to CSMS.
• The CRO xxx Directory of Services (DOS) is uploaded into CSMS and is regularly refreshed.
• Protocol information entered by the CRS is pushed back to CRO xxx.
• Clinical information obtained through the trial is also uploaded into CSMS.
• Clinical Research Specialists enter the test plan; Consent Negotiators document consent information; Curators manage the biobank inventory and requests.
• Regular electronic feeds from the central lab system to CSMS continually update specimen status.
• Specimen curators send specimen disposal requests to CRO xxx.]
D. ROI achieved or expected (200 words max.): The CASI initiative supports a long-term ROI. The primary measurable benefits will be compared to baseline values late in 2010, and periodically thereafter. CASI metrics include the following:
CASI objectives and associated metrics:
• Inventory Access and Quality (enhance access to stored specimens for biomarker and target discovery; improve the utility and management of the clinical specimen inventory):
‐ Proportion of genomic specimens obtained out of the total number of genomic specimens that could be obtained
‐ Proportion of subjects who sign Future Use Consents out of all possible consents for future-use studies
‐ Mean cycle time from specimen request submitted to curator to specimen received at testing site
‐ Proportion of usable specimens identified upon receipt for post-primary testing
‐ Proportion of non-value-add specimens (as identified by algorithm) in inventory (internal and external)
• Utilization (increase utilization of specimens for biomarker research to improve decision-making):
‐ Number of specimens retrieved for post-primary testing
‐ Number of Go/No Go decisions influenced by the use of biomarkers
• Operational:
‐ Median cycle time from specimen obtained to specimen queued for primary testing
‐ Average effort/time spent by scientific staff on specimen management activities as a percentage of total time
E. CONCLUSIONS/implications for the field.
To support its goal of delivering targeted medicines, Merck recognized the need to improve specimen management for clinical trials as well as for biomarker research. The CASI program leveraged Six Sigma methodologies, and an iterative design and development approach, to standardize on a single system and set of processes to improve R&D. The CSMS application uses standard capabilities of LabWare LIMS, configured with key lab and clinical application integrations, to provide various Merck stakeholders immediate visibility of operational, specimen, consent, and patient data. This is not the first time a biopharmaceutical company has configured a LIMS to manage a biobank. However, to our knowledge CASI is the first to implement a single system and set of processes with a scope that spans basic research and clinical trials and supports translational and personalized medicine efforts.
6. REFERENCES/testimonials/supporting internal documents (If necessary; 5 pages max.)
Bio‐IT World 2010 Best Practices Awards Nominating Organization name: Orion Health Nominating Organization address: 225 Santa Monica Boulevard, 10th Floor Nominating Organization city: Santa Monica Nominating Organization state: CA Nominating Organization zip: 90401 Nominating Contact Person: Heidi Olso Nominating Contact Person Title: Account Manager Nominating Contact Person Phone: 503‐684‐2800 Nominating Contact Person Email: [email protected] User Organization name: Lahey Clinic User Organization address: 41 Mall Road User Organization city: Burlington User Organization state: MA User Organization zip: 01805 User Organization Contact Person: Kelly Weinstein User Organization Contact Person Title: Director of Marketing, Orion Health User Organization Contact Person Phone: 310 526 4032 User Organization Contact Person Email: [email protected] Project Title: Medical Applications Portal Installation Team Leaders name: Nelson Gagnon Team Leaders title: CIO Team Leaders Company: Lahey Clinic Team Leaders Contact Info: 781‐744‐3444 Team Members name: Dr. Patrick Dempsey Team Members title: DR Team Members Company: Lahey Clinic Entry Category: Health‐IT Abstract Summary: Introduction: Lahey Clinic’s multidisciplinary approach gives patients access to preeminent physicians from virtually every medical specialty; these physicians cooperate to develop personalized treatment plans. Unfortunately, providing multidisciplinary treatment was more difficult than necessary because clinicians lacked an integrated view of patient information. Physicians at the Clinic had long used about a dozen systems to gather clinical data, including: • Laboratory system from Sunquest Information Systems, Inc. • Cardiology systems from MUSE® (GE) and Xcelera (Royal Philips Electronics) • Pathology system from WindowPath • Radiology PACs from FUJIFILM Medical Systems USA • Radiology Information System from GE • Digital dictation systems from Dictaphone/Nuance Communications, Inc. 
• Hospital admissions from MEDITECH (Medical Information Technology, Inc.) • Registration/scheduling from GE/IDX
Clinicians were forced to use different logins and passwords for each application. While much patient data was online, some data continued to be stored in paper records, making it hard for users to ascertain where to find information. In some cases, staff needed to physically transport records from one location to another. And while every physician was dictating notes, there was no easy way to retrieve them; the staff would have to print records every day for the physicians, which was a cumbersome process. An additional challenge was the inability to efficiently exchange information with referring physicians in a timely manner. Since the Boston area offers many superlative choices of specialists in all disciplines, Lahey Clinic needed to improve its processes for distributing information to referring physicians. Ultimately, it hoped to encourage a two‐way health information exchange with them. The clinic knew that a key requirement for effective change was to have a portal solution that had the ability to integrate data from best of breed applications across the various disciplines. Lahey had 15 systems with clinical information and no way to see all information in an easy fashion. Results: The clinic began looking for a Web portal to address these issues. After reviewing several choices, Lahey Clinic selected the Orion Health™ Concerto® Physician Portal, its Clinical Data Repository and the Orion Health™ Rhapsody® Integration Engine. Using the Concerto Portal, clinicians at Lahey Clinic can now search for patients by name, medical record number, hospital room, floor, appointment or under the category of ‘recent patients’. 
The portal gives the Clinic the flexibility to display clinical information, such as microbiology results, in the desired format, and once clinicians find the record for a patient, the portal enables them to see the following information: • Demographic data • Appointments • Hospital admissions • Referring providers • Chart notes about the patient, which can be sorted by date and author • Laboratory information • EKGs in both text and waveform views • Microbiology results • Color-coded abnormal results and radiology reports with embedded links to PACS images • Scanned paper documents, including documents that patients bring in from other institutions, signed consent forms for surgery, and audiology and ophthalmology reports, all of which are easily stored and retrieved. The new technology also enables delivery of a medication reconciliation solution as recommended by the Joint Commission on Accreditation of Healthcare Organizations (JCAHO). Healthcare staff can now record the medications that the patient was taking at home, the medications they are taking in the hospital and the medications that they should take following discharge from the hospital.
Changes to medications can now be documented, helping to avoid medication errors such as omissions and duplications. D. ROI achieved or expected (200 words max.): Today, Lahey Clinic uses the Concerto Portal to take data from its existing systems and display it in one central place, and it is a cost-effective tool for streamlining the collection of clinical data. Behind the scenes, Lahey uses data integration technology, including Orion Health Rhapsody, to access data from each individual system and store it in a common Clinical Data Repository. The portal acts as the user front end to the repository and to some individual IT systems within the hospital, such as Documentum, which stores scanned documents. The new system currently serves 5,800 registered users, with more than 1,000 unique users signing on each day. Lahey is now in the process of rolling out the system to non-Lahey providers, beginning with the groups that make the most frequent referrals, while restricting access to licensed providers and emergency room physicians. E. CONCLUSIONS/implications for the field. The new portal enables Lahey to efficiently exchange information with referring physicians, streamlining data for easier retrieval, which in turn is improving patient care. Other clinics and hospitals can easily see the benefits Lahey has achieved and consider them when planning a portal installation. With the global drive to create electronic health records, this is a best practice for any health organization to consider. References: Case study available. Here are the main points taken from the case study: Lahey Clinic, Burlington, Massachusetts, USA The Lahey Clinic is a physician-led, nonprofit multispecialty group practice with roughly 500 physicians located around the Boston area, with large sites in Burlington and Peabody, MA.
The Lahey Clinic, a renowned research center, offers patients access to clinical trials of new therapies for diseases including cancer, diabetes, heart disease and cataracts. Research programs encompass more than 200 clinical trial protocols as well as participation in numerous national and international studies. The Need For Better Access To Data The clinic’s multidisciplinary approach gives patients access to preeminent physicians from virtually every medical specialty; these physicians cooperate to develop personalized treatment plans. Unfortunately, providing multidisciplinary treatment was more difficult than necessary because clinicians lacked an integrated view of patient information. Physicians at the Clinic have long used about a dozen systems to gather clinical data, including: • Laboratory system from Sunquest Information Systems, Inc.
• Cardiology systems from MUSE® (GE) and Xcelera (Royal Philips Electronics) • Pathology system from WindowPath • Radiology PACs from FUJIFILM Medical Systems USA • Radiology Information System from GE • Digital dictation systems from Dictaphone/Nuance Communications, Inc. • Hospital admissions from MEDITECH (Medical Information Technology, Inc.) • Registration/scheduling from GE/IDX. Clinicians were forced to use different logins and passwords for each application. While much patient data was online, some data continued to be stored in paper records, making it hard for users to ascertain where to find information. In some cases, staff needed to physically transport records from one location to another. And while every physician was dictating notes, there was no easy way to retrieve them; the staff would have to print records every day for the physicians, which was a cumbersome process. An additional challenge was the inability to efficiently exchange information with referring physicians in a timely manner. Since the Boston area offers many superlative choices of specialists in all disciplines, Lahey Clinic needed to improve its processes for distributing information to referring physicians. Ultimately, it hoped to encourage a two‐way health information exchange with them. Selecting A Physician Portal The clinic began looking for a Web portal to address these issues. After reviewing several choices, Lahey Clinic selected the Orion Health™ Concerto® Physician Portal, its Clinical Data Repository and the Orion Health™ Rhapsody® Integration Engine. According to Peter K. Dempsey, MD, Vice Chair of the Department of Neurosurgery and Chief Medical Informatics Officer at Lahey Clinic, “We selected Orion Health for its experience with Health Information Exchanges.” The Concerto Portal also gave the Clinic the flexibility to display clinical information, such as microbiology results, in the desired format. 
Integrating Patient Information
Today, Lahey Clinic uses the Concerto Portal to take data from its existing systems and display it in one central place. Behind the scenes, Lahey uses data integration technology, including Orion Health Rhapsody, to access data from each individual system and store it in a common Clinical Data Repository. The portal acts as the user front end to the repository and to some individual IT systems within the hospital, such as Documentum, which stores scanned documents. Using the Concerto Portal, clinicians at Lahey Clinic can now search for patients by name, medical record number, hospital room, floor, appointment or under the category of recent patients. Once they find the record for the patient, the portal enables them to see the following information:
• Demographic data
• Appointments
• Hospital admissions
• Referring providers
• Chart notes about the patient, which can be sorted by date and author
• Laboratory information: EKGs in both text and waveform views, microbiology results, color-coded abnormal results, and radiology reports with embedded links to PACS images
• A list of diagnoses and procedures, to comply with Joint Commission on Accreditation of Healthcare Organizations (JCAHO) regulations
• Scanned paper documents, including documents that patients bring in from other institutions, signed consent forms for surgery, and audiology and ophthalmology reports
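Behind-the-scenes integration of this kind is typically driven by HL7 v2 feeds from each source system into the repository. As a rough illustration only (the message fields, table schema, and parsing logic below are hypothetical simplifications, not Lahey's or Orion Health's actual configuration), a feed handler might look like:

```python
import sqlite3

# Minimal, illustrative parser for a pipe-delimited HL7 v2 message.
# A real integration engine (e.g. Rhapsody) handles encoding rules,
# acknowledgements, and mapping far more robustly.
def parse_hl7(message: str) -> dict:
    segments = {}
    for line in message.strip().splitlines():
        fields = line.split("|")
        segments[fields[0]] = fields
    pid = segments["PID"]
    name = pid[5].split("^")  # family^given
    return {
        "mrn": pid[3],
        "family_name": name[0],
        "given_name": name[1],
        "dob": pid[7],
    }

def upsert_patient(conn, patient: dict) -> None:
    # Upsert into a simplified clinical-data-repository table.
    conn.execute(
        "INSERT OR REPLACE INTO patients (mrn, family_name, given_name, dob) "
        "VALUES (:mrn, :family_name, :given_name, :dob)",
        patient,
    )

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE patients "
    "(mrn TEXT PRIMARY KEY, family_name TEXT, given_name TEXT, dob TEXT)"
)

# Hypothetical ADT message from an admissions system.
adt = "MSH|^~\\&|MEDITECH|HOSPITAL\nPID|1||123456||SMITH^JOHN||19600101"
upsert_patient(conn, parse_hl7(adt))
row = conn.execute(
    "SELECT family_name, given_name FROM patients WHERE mrn='123456'"
).fetchone()
print(row)  # ('SMITH', 'JOHN')
```

The repository then serves all portal queries, so clinicians never touch the source systems directly.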
Patient Privacy
Although all Lahey providers can access all patient data, except for confidential psychiatric records, the Clinic restricts access for non-Lahey providers to patients with whom the provider has a "relationship." When a referring physician sends a patient to Lahey, that physician is added to a provider table in the Lahey system, which indicates which patients that provider is able to view online. Once non-Lahey providers enter the system, they see a home page that lists all of their patients who have had clinical activity at Lahey. For emergency room physicians, who don't have a prior relationship with the patient, the portal requires that they enter the patient's name and date of birth. These providers then need to fill in a Clinical Use screen that requires them to acknowledge that they are using the information to deliver care. Lahey can later audit which outside providers have accessed patient information from the system. This ensures that providers, both Lahey Clinic providers and other providers in the region, have access to a patient's medical record when they need it.

System Rollout And Community Access
To ensure that the system truly serves its users, Lahey Clinic relied on an advisory committee that included physicians, ancillary system owners, and information services staff to review the design, layout and clinical decisions. To make the system truly usable from day one, three years of converted data was made available to physicians during rollout. The Clinic trained users through departmental presentations as well as an online manual and an online video. Special training Web pages were made available to non-Lahey providers. According to Dr. Dempsey, "The application requires very little support. It's very intuitive and easy to use.
New users are able to learn the system quickly." The system currently serves 5,800 registered users, with more than 1,000 unique users signing on each day. Lahey is in the process of rolling out the system to non-Lahey providers, beginning with the groups that make the most frequent referrals, while restricting access to licensed providers and emergency room physicians.

Benefits
Lahey Clinic has achieved its goals with the Concerto Portal. As stated by Dr. Dempsey, "We have improved the distribution of data within the Clinic. Now users can easily log on and get the information they need without going to multiple different systems. We've strengthened relationships with outside providers. We also see the portal as an excellent stepping stone toward getting clinicians comfortable with the idea of using an EMR. We're currently in the process of installing a full-fledged EMR for both inpatient and clinical settings; the portal has allowed providers to use devices to access information and thus changed their behavior."

Next Steps
While the portal has been rolled out successfully within the clinic, the existing implementation is just the beginning. According to Dr. Dempsey, "When the EMR is available, the portal will be able to access information such as medications and allergies. We're also in ongoing discussions with other health care institutions and are encouraging them to create their own clinical data repositories and share their information with us to create a health information exchange. Ultimately, we would like to be able to display both Lahey data and non-Lahey data on the same screen, while allowing each site to retain its own data."
Dr. Dempsey stated, “We’d also like the system to proactively notify referring physicians and primary care providers by e‐mail when one of their patients has a clinical activity, such as an ER visit, hospital admission, lab result or consultation at the Clinic. We hope to be able to send an e‐mail with a link to the portal so they can actively follow the patient.”
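The relationship-based access model and audit trail described under Patient Privacy can be sketched in a few lines. The schema and logic below are illustrative assumptions, not the portal's actual implementation:

```python
import sqlite3
import datetime

# Hypothetical sketch: a provider table links outside providers to
# their patients, and every access is written to an audit log.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE provider_patient (provider_id TEXT, mrn TEXT);
CREATE TABLE audit_log (provider_id TEXT, mrn TEXT, reason TEXT, at TEXT);
""")

def can_view(provider_id: str, mrn: str) -> bool:
    # A non-Lahey provider may view only patients with a recorded relationship.
    row = conn.execute(
        "SELECT 1 FROM provider_patient WHERE provider_id=? AND mrn=?",
        (provider_id, mrn)).fetchone()
    return row is not None

def record_access(provider_id: str, mrn: str, reason: str) -> None:
    # Equivalent of the Clinical Use acknowledgement: log who saw what, and why.
    conn.execute("INSERT INTO audit_log VALUES (?, ?, ?, ?)",
                 (provider_id, mrn, reason,
                  datetime.datetime.now().isoformat()))

# A referring physician is linked when they send a patient to the clinic.
conn.execute("INSERT INTO provider_patient VALUES ('dr_jones', '123456')")

print(can_view("dr_jones", "123456"))   # True
print(can_view("dr_smith", "123456"))   # False: no relationship on file
record_access("dr_jones", "123456", "clinical use: delivering care")
```

Auditing then reduces to querying `audit_log`, which is how the clinic can later review which outside providers accessed which records.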
250 First Avenue, Suite 300, Needham, MA 02494 | phone: 781‐972‐5400 | fax: 781‐972‐5425
Published Resources for the Life Sciences
Bio‐IT World 2010 Best Practices Awards
1. Nominating Organization (Fill this out only if you are nominating a group other than your own.)
A. Nominating Organization Organization name: Minimal Access Surgery Training Centre (MASTC), Pamela Youde Nethersole Eastern
Hospital Address: MASTC, Pamela Youde Nethersole Eastern Hospital, Chai Wan, Hong Kong
B. Nominating Contact Person Name: Dr Carrison KS Tong Title: Medical Physicist Tel: 852‐25955906 Email: [email protected], [email protected]
2. User Organization (Organization at which the solution was deployed/applied)
A. User Organization Organization name: Surgical Department, Pamela Youde Nethersole Eastern Hospital Address: Surgical Department, Pamela Youde Nethersole Eastern Hospital, Chai Wan, Hong Kong
B. User Organization Contact Person Name: Professor Michael KW Li Title: Chief Of Service Tel: 852‐25956389 Email: [email protected]
3. Project
Project Title: 3D Vision for Surgical Robot Team Leader Name: Dr Carrison KS Tong Title: Medical Physicist Tel: 852‐25955906 Email: [email protected] Team members: Name: Dr K H Fung Title: Consultant Radiologist Name: Professor Michael KW Li Title: Chief Of Service, Department of Surgery
4. Category in which entry is being submitted (1 category per entry, highlight your choice)
Basic Research & Biological Research: Disease pathway research, applied and basic research
Drug Discovery & Development: Compound‐focused research, drug safety
Clinical Trials & Research: Trial design, eCTD
Translational Medicine: Feedback loops, predictive technologies
Personalized Medicine: Responders/non‐responders, biomarkers
IT & Informatics: LIMS, High Performance Computing, storage, data visualization, imaging technologies
Knowledge Management: Data mining, idea/expertise mining, text mining, collaboration, resource optimization
Health‐IT: ePrescribing, RHIOs, EMR/PHR
Manufacturing & Bioprocessing: Mass production, continuous manufacturing
(Bio‐IT World reserves the right to re‐categorize submissions based on submission or in the event that a category is refined.)
5. Description of project (4 FIGURES MAXIMUM):
A. ABSTRACT/SUMMARY of the project and results (150 words max.) A Stereoscopic Visualization System (SVS) was built to broadcast or telecast three-dimensional images from a surgical robot or a pair of endoscopic cameras to a 3D LCD monitor. The system allows assistant surgeons, nurses, trainers, and trainees in robotic surgery to view the operating field accurately, enabling them to give the surgeon quality support during operations. It can also give the entire surgical team a view of the operating field in laparoscopic surgery. The new 3D imaging system likewise enables effective training of surgeons in performing procedures with a surgical robot. In summary, the system demonstrates a method of transmitting three-dimensional images from a surgical robot through a computer and displaying the same 3D image on a 3D LCD in real time.
B. INTRODUCTION/background/objectives
Robotic surgery and laparoscopic surgery are increasingly popular due to their many benefits over traditional open surgery, including quicker recovery time, less pain, and less scarring. A robotic surgery system extends the benefits of laparoscopic surgery through its ability to reach deep and hidden areas, enabling much more efficient and complex operations to be performed at a distance. Such a system, for example the da Vinci® Surgical System ("dVSS") from Intuitive Surgical, Inc., typically consists of three main components: an operation console, a patient-side cart, and an image processing unit. The surgeon provides input through a manipulator on the operation console which, in turn, controls the robotic arms on the patient-side cart to perform the necessary motions at the patient. The movement of the robotic arms is captured by a stereoscopic dual-camera endoscopic system mounted on one of the arms and sent to the operation console through the image processing unit. With its 3D vision, this design allows high-precision surgery for complicated procedures. A 2D LCD on the image processing unit supports the assistant surgeons and nurses during the surgery, but with this design the whole surgical team cannot share the same 3D view of the surgical field. If an extra 3D console is used, the setup will cost at least US$130,000 more.
Figure 1. Existing dVSS layout: the surgeon at the operation console remotely controls the movement of the robotic arms on the patient-side cart; the image processing unit provides an immersive 3D display to the surgeon, while the assistant surgeon, trainees and nurses share a 2D LCD display.
Pamela Youde Nethersole Eastern Hospital (PYNEH) has developed a new Stereoscopic Visualization System (SVS) to support the robotic surgical service in operation theatre, lecture room and conference room environments. The system provides users with real-time, high-definition (1920 x 1080 pixels in 3D) stereoscopic viewing in the clinical environment of the operation theatre, using 3D LCD monitors and a new stereoscopic depth control function. It allows the entire operating team, including assistant surgeons, nurses, and trainees, to appreciate the 3D effect that only the operator can see in existing surgical robots, making the operation more efficient and accurate. The system also allows 3D images to be displayed at remote sites inside and outside the operation theatre, such as lecture rooms, for remote consultation, teaching, and training purposes. Furthermore, the system supports the recording and replaying of stereoscopic video, which is important for training.
C. RESULTS (highlight major R&D/IT tools deployed; innovative uses of technology). The methodology we developed for real-time 3D display shows a stereoscopic pair of video input signals from the left and right cameras so that the left and right signals occupy the odd and even rows of pixels of a 3D LCD whose surface is coated with a layer of alternating clockwise and anti-clockwise circularly polarizing material. A viewer wearing a pair of passive polarized glasses, with one lens circularly polarized clockwise and the other anti-clockwise, sees the 3D video in real time. However, the purchased 3D LCD came with no driver or library for displaying a stereo video signal, only a test program for displaying a stereoscopic pair of bitmap images. A two-step method was therefore developed for the real-time display of the
video on the 3D LCD monitor. Since different 3D LCD manufacturers use different techniques to display 3D images, the first step is a generic method to measure the stereoscopic sub-pixel maps, as shown in Figure 2. By inputting a pair of black and white images representing the left and right video inputs into a computer connected to the 3D LCD monitor, a special pattern can be displayed on the monitor using the test program. On a horizontal-interlacing 3D LCD, alternating black and white horizontal lines are obtained. By taking a screen capture of this pattern, a 24-bit colour bitmap of the pattern at 1920 x 1080 pixels can be obtained; this bitmap is a combination of three 8-bit red, green, and blue sub-pixel maps. Since the black image representing the left video input yields sub-pixel values (0, 0, 0) and the white image representing the right video input yields sub-pixel values (255, 255, 255), these maps, called "Stereoscopic Sub-pixel Maps", describe the arrangement of the pixels and sub-pixels for stereoscopic display on that type of 3D LCD.
Figure 2. Measuring the stereoscopic sub-pixel maps: a test pattern combining image L and image R is displayed on the 3D LCD monitor. On a horizontal-interlacing (HI) display, alternating rows of L sub-pixels (values 0, 0, 0) and R sub-pixels (values 255, 255, 255) reveal the pixel arrangement.
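In code, this first step amounts to classifying each captured sub-pixel as belonging to the left (black, 0) or right (white, 255) input. The following NumPy sketch is illustrative only; the array shapes and the simulated HI pattern are assumptions, not the actual SVS code:

```python
import numpy as np

def measure_subpixel_map(capture: np.ndarray) -> np.ndarray:
    """Classify each sub-pixel of the captured test pattern:
    True  = shows the RIGHT input (white test image, value 255)
    False = shows the LEFT input (black test image, value 0)."""
    return capture == 255

# Simulate the screen capture from a horizontal-interlacing (HI) 3D LCD:
# even rows display the left input (0), odd rows the right input (255).
capture = np.zeros((1080, 1920, 3), dtype=np.uint8)
capture[1::2] = 255

right_map = measure_subpixel_map(capture)
print(bool(right_map[0, 0, 0]), bool(right_map[1, 0, 0]))  # False True
```

Because the map is measured rather than hard-coded, the same procedure adapts to other sub-pixel layouts used by different 3D LCD manufacturers.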
The second step instantly converts the online video input signals from the robot for display on the 3D LCD. As shown in Figure 3, two images at 1920 x 1080 pixels are captured at a frame rate of 30 frames per second from the video inputs of the surgical robot. The computer reads the three sub-pixel values of each of the 1920 x 1080 pixels for each input. By comparing them with the Stereoscopic Sub-pixel Maps, a 3D image of the video inputs at 1920 x 1080 pixels can be composed and drawn on the 3D LCD monitor; 50% of each video input signal is discarded by this method. On the surface of the 3D LCD monitor, a layer of circularly polarizing material lets one row of pixels display the image of the left camera in the clockwise (or anti-clockwise) direction and the next row display the image of the right camera in the anti-clockwise (or clockwise) direction. The user, after wearing a pair of
circularly polarized glasses with one lens clockwise and the other anti-clockwise, sees a different image in each eye and builds a stereoscopic image of the robotic or laparoscopic surgery. This technique is called "Stereoscopic Sub-pixel Mapping" (SSM). Since the distance between the eyes varies from person to person, it affects each surgeon's and nurse's depth perception. For correct depth perception, a controllable pixel shift between the left- and right-eye images has been implemented in the SSM technique: by changing the pixel shift, users can adjust the depth rendered by the 3D LCD to suit their own perception.
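The second step and the depth-control pixel shift can be sketched as follows. This is an illustrative NumPy reconstruction under assumed shapes and an assumed HI layout, not the actual SVS implementation:

```python
import numpy as np

def stereoscopic_subpixel_map_hi(h: int, w: int) -> np.ndarray:
    # HI layout: even rows take the left image, odd rows the right.
    m = np.zeros((h, w, 3), dtype=bool)
    m[1::2] = True  # True = right input
    return m

def compose_3d_frame(left, right, right_map, pixel_shift=0):
    """Interleave left/right frames per the sub-pixel map.
    pixel_shift horizontally offsets the right image to tune the
    perceived depth (the SSM depth-control function)."""
    shifted = np.roll(right, pixel_shift, axis=1)
    return np.where(right_map, shifted, left)

h, w = 1080, 1920
left = np.full((h, w, 3), 10, dtype=np.uint8)    # dummy left-camera frame
right = np.full((h, w, 3), 200, dtype=np.uint8)  # dummy right-camera frame
frame = compose_3d_frame(left, right,
                         stereoscopic_subpixel_map_hi(h, w), pixel_shift=4)
print(frame[0, 0, 0], frame[1, 0, 0])  # 10 200
```

Note how each output row keeps only one camera's pixels, which is why 50% of each input signal is discarded.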
Figure 3. Real-time display: the left and right video inputs are combined by the computer using the stereoscopic sub-pixel maps for the HI 3D display; the polarizing filter on the monitor separates the alternating L and R rows, so that after wearing passive 3D stereo glasses the viewer sees a different image in each eye.
The layout of the operation theatre and lecture theatre for 3D robotic surgery support and training is shown in Figure 4. Two video input signals are obtained from the image processing unit of the surgical robot through two video splitters. Two images are captured from these signals by a computer with the Stereoscopic Visualization System (SVS) installed. In the SVS, the sub-pixels of each image are re-arranged using Stereoscopic Sub-pixel Mapping to create 3D images, which can then be displayed on the 3D LCD monitors in the operation theatre and lecture theatres.
Figure 4. Operation theatre and lecture theatre settings for 3D robotic surgery support and training: the video signals from the image processing unit (patient-side cart with three robotic arms and two endoscopic cameras; surgeon at the operating console) feed SVS computers driving 3D LCD monitors in both the operation theatre and the lecture theatre, where the assistant surgeon and audience view the operation.
The cost of developing the SVS was as follows:

No  Description                                    Cost (US$)
1   46-inch stereo LCD monitor                       9,200.00
2   1 set of computer with video capture devices     4,500.00
3   1 pair of passive stereo glasses                    40.00
    Total                                           13,740.00
D. ROI achieved or expected (200 words max.):
The SVS delivers high 3D image quality to the following specifications:
• High resolution (1920 x 540 pixels per eye)
• High brightness (500 cd/m2)
• Simultaneous support for both 2D and 3D modes
• High color fidelity
• Reliable and trouble-free operation
• Depth perception control
• No flickering and reduced eye strain
• Depth resolution of 0.3 mm
The advanced features of the SVS include:
• 3D broadcast ready
• Recordable and replayable in stereoscopic mode
• Adaptable to various 3D LCD hardware
• Low cost (US$13,740)
• Short development cycle (4.5 months of one physicist's time)
• Designed for long hours of viewing
• Small footprint suitable for the operation theatre
The high image quality and advanced features have contributed to improved precision, speed and safety of surgery. The time for some surgical procedures, such as suturing, can be reduced to 60% of the original. Since 15 August 2009, the system has been in production supporting the robotic surgery service, and more than 30 surgeries have been performed using it. The system has also been successfully demonstrated at two international conferences.
E. CONCLUSIONS/implications for the field. There is no doubt that robotic surgery will be one of the major life-saving tools of the future. Our stereoscopic visualization system can further enhance the precision, quality and safety of robotic surgery, and it provides a valuable tool for robotic surgery training. The development of the stereoscopic visualization technique marks a milestone in the history of robotic surgery.
6. REFERENCES/testimonials/supporting internal documents (If necessary; 5 pages max.)
A patent for the invention has been filed in Hong Kong, and the project won an Asia Pacific ICT Merit Award and a Hong Kong ICT Award in 2009, as shown in the following figures:
Figure 5
Figure 6
Figure 7
Bio‐IT World 2010 Best Practices Awards
1. Nominating Organization (Fill this out only if you are nominating a group other than your own.)
A. Nominating Organization Organization name: Address: B. Nominating Contact Person Name: Title: Tel: Email:
2. User Organization (Organization at which the solution was deployed/applied)
A. User Organization Organization name: Randox Laboratories Ltd. Address: Molecular Biology, 55 Diamond Road, Crumlin, County Antrim, Northern Ireland. BT29 4QY. United Kingdom
B. User Organization Contact Person Name: Mr. John V Lamont Title: Chief Scientist Tel: 0044 (2894) 451061 Email: [email protected]
3. Project
Project Title: Determination of a Diagnostic Classifier for transitional cell cancer of the bladder (TCCB) by evaluation of bladder cancer biomarkers in urine and blood of 200 individuals with variable bladder pathologies Team Leader Name: Dr. Mark W Ruddock Title: Team Leader Tel: 0044 (2894) 451061 Email: [email protected] Team members – name(s), title(s) and company (optional): Dr. Cherith N Reid, Molecular Biology Development Manager, Randox Laboratories Ltd. [email protected]
4. Category in which entry is being submitted (1 category per entry, highlight your choice)
Basic Research & Biological Research: Disease pathway research, applied and basic research
Drug Discovery & Development: Compound‐focused research, drug safety
Clinical Trials & Research: Trial design, eCTD
Translational Medicine: Feedback loops, predictive technologies
Personalized Medicine: Responders/non‐responders, biomarkers
IT & Informatics: LIMS, High Performance Computing, storage, data visualization, imaging technologies
Knowledge Management: Data mining, idea/expertise mining, text mining, collaboration, resource optimization
Health‐IT: ePrescribing, RHIOs, EMR/PHR
Manufacturing & Bioprocessing: Mass production, continuous manufacturing
(Bio‐IT World reserves the right to re‐categorize submissions based on submission or in the event that a category is refined.)
5. Description of project (4 FIGURES MAXIMUM):
A. ABSTRACT/SUMMARY of the project and results (150 words max.) Currently, the only definitive test for the diagnosis of transitional cell carcinoma of the bladder is cystoscopy, an expensive procedure that is both invasive and uncomfortable for the patient and is required biannually for the rest of the patient's life. To date, few FDA-approved diagnostic biomarker tests have been launched onto the market, and a drawback of those available is their lack of sensitivity and specificity, which has prevented their adoption in the clinical setting.
Using Randox Biochip Array Technology (BAT), we examined the urine and serum of 80 bladder cancer patients and 77 control patients in an attempt to identify biomarkers with a view to developing a diagnostic algorithm. Our study demonstrated that, although many of the 28 biomarkers assessed did not individually differentiate between the two groups, they contributed to a highly sensitive and specific multivariate algorithm built using binary logistic regression analyses.
B. INTRODUCTION/background/objectives Bladder cancer, on a cost per patient basis, is the most expensive cancer to manage from diagnosis
to death (1). This is because patients with a history of bladder cancer require regular surveillance
cystoscopy. Provisional estimates of the annual cost to the UK National Health Service (NHS) of treating bladder cancer are just short of 200 million pounds (329 million US dollars). Furthermore,
17% of these costs are due to investigations of patients with risk factors but who subsequently prove
not to have bladder cancer.
The usefulness of a diagnostic test depends on its sensitivity and specificity: sensitivity is the proportion of true positives correctly identified by the test, and specificity is the proportion of true negatives. There are two gold standards for the detection of urothelial cancers. The first, cytology, depends on the examination of urothelial cells in voided urine. The advantages of cytology are the ease of obtaining the specimen and its high specificity; its disadvantages are poor sensitivity, subjectivity and low cellular yield. Cytology is therefore normally combined with a second gold standard, flexible cystoscopy. Cystoscopy allows direct observation of the bladder and biopsy of suspicious regions, and achieves 95% diagnostic accuracy. Unfortunately, cystoscopy is extremely expensive. Its other disadvantages are an association with a 10% risk of urinary tract infection (UTI), discomfort for the patient, lack of upper tract visualisation, and failure to detect some small areas of carcinoma in situ.
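These two quantities are simple proportions, computed from the confusion-matrix counts. A minimal illustration (the counts below are hypothetical, not study data):

```python
def sensitivity_specificity(tp: int, fn: int, tn: int, fp: int):
    """Sensitivity = TP / (TP + FN); specificity = TN / (TN + FP)."""
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical example: a test flagging 72 of 80 cancers
# and clearing 70 of 77 controls.
sens, spec = sensitivity_specificity(tp=72, fn=8, tn=70, fp=7)
print(round(sens, 2), round(spec, 2))  # 0.9 0.91
```

A useful screening test must keep sensitivity high (few missed cancers) without sacrificing specificity (few false alarms), which is exactly the trade-off the cytology/cystoscopy combination tries to balance.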
To overcome the limitations of cytology and cystoscopy, there has been extensive research into
urine biomarkers. Urine provides an insight into the proteome excreted by the tumour itself and from
the circulation. No single biomarker or panel of biomarkers has yet achieved the levels of sensitivity
and specificity required to reduce the frequency of cystoscopy. Interpretation of the vast literature is difficult because there has not been a systematic approach and results from different studies are
difficult to compare. The high proportions of studies with poor study design, potential bias, overly
optimistic estimates, inadequate information and lack of reproducibility prompted guidelines for
diagnostic biomarker studies.
Over the last 10 years, many bladder cancer biomarkers including Bladder Tumour Antigen (BTA),
nuclear matrix protein 22 (NMP22), telomerase, fluorescence in situ hybridisation (FISH), urinary
bladder cancer (UBC) and fibrinogen degradation products (FDP) have been evaluated against urine
cytology. NMP22 and BTA have FDA approval as point of care assays. NMP22 is a promising
biomarker. Its major drawback is that it detects a nuclear matrix protein, which requires immediate
stabilisation in urine. BTA can be confounded by blood in the urine. Lokeshwar's group has explored
hyaluronic acid assays as both diagnostic and prognostic markers with impressive results.
Fragments of cytokeratin 8 and 18 have been used as surrogate markers of apoptosis and necrosis
and have been evaluated using a UBC ELISA. Many markers have low specificity and are positive in
large proportions of patients with urological pathologies other than bladder cancer and in patients
with urinary infections. New putative markers, such as survivin, burgeon in the urology literature. EGF has been shown to induce expression of the matrix metalloproteinase MMP9 in some bladder cancer cells, and MMP9 itself has been proposed as a bladder cancer marker. All novel markers should be benchmarked against the high specificity of urine cytology and the high sensitivity of telomerase.
There has been a move towards the use of combinations of biomarkers. Some advocate that
studies should be based on combinations of representative biomarkers from each of the key
molecular pathways involved in bladder carcinogenesis: neo-vascularisation and angiogenesis;
proliferation; chromosomal, genetic and epigenetic abnormalities; chemokines; growth factors and
tumour antigens. Others have preferred the concept of mapping the urine proteome in a manner
similar to the genome project.
In this study we adopted a systematic approach to measure 28 biochemical biomarkers and record
clinico-pathological variables in 157 patients presenting with haematuria of whom 80 had a final
diagnosis of bladder cancer and 77 were designated as controls.
Our study objectives were to:
(1) Compare protein biomarker levels in bladder cancer patients and controls;
(2) Create diagnostic classifiers for bladder cancer patients based on proven predictive
probability, i.e., clinical and demographic data only;
(3) Create classifiers based on biomarkers only;
(4) Determine whether a combination of proven predictive probability and biomarkers
significantly improved receiver operating characteristics (ROC) of proven predictive
probability;
(5) Determine whether correction for dehydration using protein, creatinine or osmolality
significantly improved the ROC of classifiers; and
(6) Calculate diagnostic accuracies of the classifiers in known confounding pathologies.
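The multivariate approach in objectives (2)–(4) rests on binary logistic regression: markers that are individually weak can still combine into a discriminative score. The sketch below illustrates the principle on synthetic data only; it is not the study's data or code. The cohort sizes mirror the 80/77 split and the 28-marker panel, and the fit is a plain gradient-descent logistic regression:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic cohort mirroring the study's shape: 80 cancers, 77 controls,
# 28 biomarkers. Each marker is shifted only slightly in the cancer
# group, so no single marker discriminates well on its own.
n_cancer, n_control, n_markers = 80, 77, 28
X = np.vstack([
    rng.normal(0.25, 1.0, (n_cancer, n_markers)),
    rng.normal(0.00, 1.0, (n_control, n_markers)),
])
y = np.r_[np.ones(n_cancer), np.zeros(n_control)]

# Binary logistic regression fitted by gradient descent.
w, b = np.zeros(n_markers), 0.0
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
    w -= 0.5 * (X.T @ (p - y)) / len(y)
    b -= 0.5 * np.mean(p - y)

scores = X @ w + b

def roc_auc(y_true, s):
    # AUC = probability that a random cancer case scores above a random control.
    pos, neg = s[y_true == 1], s[y_true == 0]
    return float(np.mean(pos[:, None] > neg[None, :]))

auc_combined = roc_auc(y, scores)
auc_single = roc_auc(y, X[:, 0])  # any single marker, for contrast
print(auc_single < 0.7 < auc_combined)
```

With real data one would validate out-of-sample (in-sample AUC is optimistic) and use an established statistics package rather than this hand-rolled fit; the point is only that the combined score dominates any single marker.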
C. RESULTS (highlight major R&D/IT tools deployed; innovative uses of technology).
R&D/IT tools deployed Randox Laboratories, an international diagnostics company, is unique in offering Biochip Array
Technology. Our facility is ISO accredited, and our testing services use both internal quality control and several external quality assessment schemes to assure the quality of our test results.
Randox’s commitment to research and development enables us to bring innovative tests to market.
Already the Evidence Investigator™ instrument utilises Randox’s revolutionary Biochip Array
Technology (BAT), which allows the simultaneous detection of multiple analytes from a single
sample volume as low as 25 µl. A single 9 x 9 mm biochip acts as the reaction vessel, replacing multiple cuvettes in this bench-top, semi-automated analyser. The easy-to-use, compact biochip-imaging platform comprises a super-cooled Charge Coupled Device (CCD) camera and customised
image processing software.
The Evidence Investigator™ has a wide test menu available including cardiac, cytokine, drugs of
abuse, cerebral, thyroid, adhesion molecules, tumour monitoring, fertility, Ranplex CRC DNA array,
growth promoter and anti-microbial arrays.
The state-of-the-art Evidence Investigator™ is user friendly, with Windows®-based software; it can be integrated with a LIMS and offers extensive QC data generation, comprehensive results requiring no manual manipulation, user-defined cut-off values and much more. Furthermore, our data processing was easily integrated into existing statistical analysis software, supervised by a biostatistician.
The Standards for Reporting of Diagnostic Accuracy (STARD) working group reviewed standards in
2003 and recommended that reports should include details about the patient population, data
collection, reference standard, statistical analyses and blinding. Where possible, this study has
adhered to STARD recommendations.
In conclusion, a rigorously controlled study, in combination with the Randox Biochip technology platform, leaves us strategically placed for the development of a multivariate algorithm.
D. ROI achieved or expected (200 words max.): On a return-on-investment basis, it is clear that, after full validation, the benefits to patients at risk of bladder cancer will outweigh the costs, with benefits to healthcare at a number of different levels.
A flexible cystoscopy requires a team of doctors, technicians, an anaesthetist, a surgeon, a nurse and a pathologist, plus an operating theatre; the cost of performing one is in the region of $1800. On a comparative
basis, the Biochip Array Test would not exceed $250 per patient test. For an individual patient test,
substituting a cystoscopy for Biochip Array Test represents a saving of approx. $1550 per test.
Current estimates are that up to 3.6 million cystoscopies for TCCB (Transitional Cell Cancer of the
bladder) are carried out each year in Europe and the USA. On this basis, the Biochip Array Test would represent a saving to healthcare of approximately 5.6 billion dollars per year. In addition to the
monetary saving, the biochip test is relatively non-invasive, requiring blood and urine samples taken
from the patient by a nurse. Results are available in a few hours; analyses performed using the
Randox Evidence Investigator™ analyser and a laboratory analyst. This represents a clear saving of
man hours
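The per-test and aggregate figures above follow from straightforward arithmetic; a short script makes the calculation explicit (the variable names are illustrative, and the costs are the approximate figures quoted above):

```python
# Approximate per-patient costs quoted in the text above (USD)
cystoscopy_cost = 1800   # flexible cystoscopy with full surgical team and theatre
biochip_cost = 250       # upper bound for the Biochip Array Test

# Saving from substituting the biochip test for a cystoscopy
saving_per_test = cystoscopy_cost - biochip_cost
print(saving_per_test)   # 1550

# Up to 3.6 million cystoscopies for TCCB per year in Europe and the USA
annual_tests = 3_600_000
annual_saving = saving_per_test * annual_tests
print(f"${annual_saving / 1e9:.2f} billion per year")  # $5.58 billion per year
```

The $5.58 billion result is consistent with the "approximately $5.6 billion" figure cited in the text.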
E. CONCLUSIONS/implications for the field:
Currently, patients who present with haematuria are scheduled for cystoscopy. However, the vast
majority of these cystoscopies are unnecessary, as most patients who present with
haematuria do not have bladder cancer. Furthermore, the procedure is both invasive and very costly. One
intervention that could potentially triage these patients is the use of bladder cancer biomarkers as a
screening tool. Unfortunately, the sensitivity and specificity of the tests currently available (e.g.
NMP22 and BTA) are limited, and their use in isolation is therefore unwarranted. However, as outlined
in our application, we have developed a bladder cancer biomarker algorithm for our Biochip Array
Platform with high sensitivity and specificity (area under the ROC curve > 0.92). Using this high-throughput
technology, hundreds of patients could be screened each hour, avoiding both unnecessary cystoscopies and
significant discomfort to patients. Furthermore, the surveillance of bladder cancer patients is very
costly: patients must be screened biannually for the rest of their lives, and as such
the costs to the NHS for the treatment of this disease are in excess of £200 million per
year. Furthermore, as outlined above, only 5 patients in every 100 who present with haematuria
have bladder cancer. The clinicians involved in this study have identified this as a major problem
and one of great concern, as their time would be better spent with the 5 patients who do have bladder
cancer. If a system were in place, i.e., a bladder cancer biomarker algorithm on the Biochip Array for
high-throughput screening, this would offer clinicians a tool for screening out low-risk haematuria patients
and allowing them more time to treat patients who do have bladder cancer. This is currently a major
obstacle, and addressing it would be of great benefit to both the clinicians and the NHS, which currently
funds these tests. Moreover, it would also reduce the number of these invasive procedures, which, like any
hospital procedure, carry a risk for the patient. The potential saving to the NHS and the freeing of
clinicians' time would have a very significant impact.
1. REFERENCES/testimonials/supporting internal documents (If necessary; 5 pages max.)
(1) Urinary Biomarkers in Bladder Cancer (2009) H.F. O’Kane, F. Abogunrin, B. Duggan, M. Ruddock, D. O’Rourke, K. Williamson. British J. Med. & Surg. Urology. Vol 6 (2): 255-256
(2) Algorithmic classifiers to diagnose bladder cancer (2009) K. Williamson, F. Abogunrin, M. Stevenson, J. O'Sullivan, B. Duggan, N. Anderson, D. O'Rourke, H. O'Kane, M. Ruddock, J. Lamont. Joint ECCO 15 - 34TH ESMO Multidisciplinary Congress BERLIN, 20 - 24 SEPTEMBER 2009. Abstract 48LBA.
(3) Sensitivities and Specificities of Biomarkers and Multivariate Diagnostic Classifiers for Transitional Cell Carcinoma of the Bladder (2009). Funso Abogunrin, Hugh F O'Kane, Mark W Ruddock, Michael Stevenson, Joe M O'Sullivan, Neil H Anderson, Declan O'Rourke, Cherith N Reid, Brian Duggan, John V Lamont, Ruth E Boyd, Peter Hamilton, Thiagarajan Nambirajan, Kate E Williamson (submitted to Journal of Clinical Oncology, 17th December 2009).
(4) Patent Filed – Bladder Cancer - Patent Application number 0916193.6.
Bio-IT World 2010 Best Practices Awards
Nominating Organization name: Mayo Clinic
Nominating Organization address: 200 First St SW, Rochester, MN 55905
Nominating Contact Person:
Nominating Contact Person Title:
Nominating Contact Person Phone:
Nominating Contact Person Email:
User Organization name: Mayo Clinic
User Organization address: 200 First St SW, Rochester, MN 55905
User Organization Contact Person: Vitaly Herasevich, MD, PhD
User Organization Contact Person Title: Assistant Professor of Medicine
User Organization Contact Person Phone: 507-255-4055
User Organization Contact Person Email: [email protected]
Project Title: Towards personalized medicine: limiting ventilator-induced lung injury through individual electronic medical records surveillance
Team Leader's name: Ognjen Gajic, MD, MSc
Team Leader's title: Associate Professor of Medicine
Team Leader's Company: Mayo Clinic
Team Leader's Contact Info: [email protected]
Team Members name:
Team Members title:
Team Members Company:
Entry Category: Personalized Medicine
Abstract Summary:
Introduction: Acute lung injury (ALI) and its more severe form, acute respiratory distress syndrome (ARDS), with a prevalence of 7% of ICU admissions and an in-hospital mortality rate of 40-50%, are serious public health problems. Most patients with ALI/ARDS require mechanical ventilation as a life-support intervention. The efficacy of mechanical ventilation is hampered by significant
iatrogenic complications, most importantly ventilator-induced lung injury (VILI). Ventilation with a high tidal volume (Vt) of 12-15 mL/kg predicted body weight (PBW) was used for many years as the standard of care for patients with ALI/ARDS. A large National Institutes of Health ARDS Network clinical trial confirmed the superiority of lung-protective ventilation, reporting a lower mortality rate (31 versus 40 percent) in ALI/ARDS patients ventilated with a Vt of 6 mL/kg PBW compared to a Vt of 12 mL/kg PBW. While lung-protective mechanical ventilation is currently the only intervention shown to improve survival in patients with ALI/ARDS, its implementation in clinical practice is often delayed or inconsistently applied. An important contributing factor has been a failure by bedside providers to recognize the syndrome. Education and feedback alone have had limited success in preventing VILI and facilitating lung-protective ventilation. To enhance the safety of mechanically ventilated patients, ALI electronic surveillance was combined with continuous ventilator monitoring to provide near-real-time automated decision support and minimize the potential for VILI (the "VILI sniffer"). The purpose of the present study was to assess the accuracy of the VILI sniffer in detecting VILI risk, its effect on provider behavior and satisfaction, and the adherence to protective mechanical ventilation after implementation.
Results: Of 111 alerts sent during the study period, the expert co-investigator confirmed 77 cases of VILI risk (positive predictive value of 69%). The number of pages sent decreased during the 12-month intervention period from a median of 22 in the first month to 6 in the last. An appropriate intervention followed 52% of true VILI alerts. Exposure to potentially injurious ventilation decreased after the intervention from 40.6±74.6 hours to 26.9±77.3 hours (P=0.004).
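The sniffer's actual rule set is not detailed here. As a rough illustration only, a minimal sketch of such surveillance logic might pair the ARDSNet predicted-body-weight formula with a tidal-volume threshold; the 8 mL/kg cut-off and the function names below are assumptions for illustration, not the study's actual implementation:

```python
def predicted_body_weight(height_cm: float, male: bool) -> float:
    """ARDSNet/Devine-style predicted body weight (kg) from height."""
    base = 50.0 if male else 45.5
    return base + 0.91 * (height_cm - 152.4)

def vili_risk_alert(tidal_volume_ml: float, height_cm: float, male: bool,
                    threshold_ml_per_kg: float = 8.0) -> bool:
    """Flag potentially injurious ventilation when Vt per kg PBW exceeds
    the threshold. The 8 mL/kg default is an assumed value, not the
    study's actual criterion."""
    pbw = predicted_body_weight(height_cm, male)
    return tidal_volume_ml / pbw > threshold_ml_per_kg

# e.g. a 175 cm male (PBW ~70.6 kg) ventilated at Vt = 700 mL (~9.9 mL/kg)
print(vili_risk_alert(700.0, 175.0, male=True))   # True: above threshold

# Positive predictive value reported in the study: 77 confirmed / 111 alerts
print(round(77 / 111 * 100))                      # 69 (%)
```

In the study itself, alerts of this kind were delivered as pages to bedside providers and adjudicated by an expert co-investigator, yielding the 69% positive predictive value quoted above.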
Our satisfaction survey demonstrated that, in its current form, the VILI sniffer does not display the characteristics of an annoying alert, but further refinements are needed to maximize its performance and its acceptance by bedside providers.
ROI achieved: Development of the customized VILI sniffer program was performed by a Java programmer who spent 160 hours developing and testing the application. Ongoing support for error-free operation takes on average 10 minutes per day to check the log file for unusual events. At $65/hour for programmer time, the estimated cost to bring the application to production was $10,400. The potential direct benefit for patients is clearly indicated in the Results section; a calculation of the specific monetary saving is currently being conducted.
Conclusions: We demonstrated the feasibility and effectiveness of fully automated EMR surveillance of mechanically ventilated patients at risk of ventilator-induced lung injury. In emergency and critical care medicine, where errors are extremely common and the timing of appropriate intervention is critical, EMR surveillance provides an enormous opportunity to enhance patient safety.
References:
1. Rubenfeld GD, Caldwell E, Peabody E, Weaver J, Martin DP, Neff M, Stern EJ, Hudson LD: Incidence and Outcomes of Acute Lung Injury. N Engl J Med 2005, 353(16):1685-1693.
2. Hickling KG, Walsh J, Henderson S, Jackson R: Low mortality rate in adult respiratory distress syndrome using low-volume, pressure-limited ventilation with permissive hypercapnia: a prospective study. Crit Care Med 1994, 22(10):1568-1578.
3. Ventilation with lower tidal volumes as compared with traditional tidal volumes for acute lung injury and the acute respiratory distress syndrome. The Acute Respiratory Distress Syndrome Network. N Engl J Med 2000, 342(18):1301-1308.
4. Yilmaz M, Keegan MT, Iscimen R, Afessa B, Buck CF, Hubmayr RD, Gajic O: Toward the prevention of acute lung injury: protocol-guided limitation of large tidal volume ventilation and inappropriate transfusion. Crit Care Med 2007, 35(7):1660-1666; quiz 1667.
5. Herasevich V, Yilmaz M, Khan H, Hubmayr RD, Gajic O. Validation of an electronic surveillance system for acute lung injury. Intensive Care Med 2009;35(6):1018-23.