Doctoral Thesis Dissertation 2014-03-20 @PoliMi

DESCRIPTION

Slides of my doctoral thesis dissertation talk, given on 20 March 2014 at Politecnico di Milano. Title: "Computational prediction of gene functions through machine learning methods and multiple validation procedures"

Transcript

  • 1. Computational Prediction of Gene Functions through Machine Learning methods and Multiple Validation Procedures candidate: Davide Chicco davide.chicco@polimi.it supervisor: Marco Masseroli PhD Thesis Defense Dissertation 20th March 2014

  • 2. Computational Prediction of Gene Functions through Machine Learning methods and Multiple Validation Procedures. Outline:
    1) Analyzed scientific problem
    2) Machine learning methods used
    3) Validation procedures
    4) Main results
    5) Annotation list correlation measures
    6) Novelty indicator
    7) Final list of likely predicted annotations
    8) Conclusions

  • 3. Biomolecular annotations. The concept of annotation: the association of nucleotide or amino acid sequences with useful information describing their features. The association of a gene and an information feature term corresponds to a biomolecular annotation. This information is expressed through controlled vocabularies, sometimes structured as ontologies (e.g. Gene Ontology), where every controlled term of the vocabulary is associated with a unique alphanumeric code.
    [Diagram: gene -- annotation (gene2bff) --> biological function feature]

  • 4. Biomolecular annotations. The association of an information feature with a gene ID constitutes an annotation. Annotation example: the scientific fact "the gene GD4 is present in the mitochondrial membrane" corresponds to the coupling <GD4, mitochondrial membrane>.

  • 5. The problem. Many annotations are available in different databanks; however, the available annotations are incomplete, and only a few of them represent highly reliable, human-curated information. In vitro experiments are expensive and slow (e.g. 1,000 and 3 weeks). To support and quicken the time-consuming curation process, prioritized lists of computationally predicted annotations are extremely useful. These lists can be generated by software based on machine learning algorithms.

  • 6. The problem. Other scientists and researchers dealt with the problem in the past by using:
    - Support Vector Machines (SVM) [Barutcuoglu et al., 2006]
    - k-nearest neighbors (kNN) [Tao et al., 2007]
    - Decision trees [King et al., 2003]
    - Hidden Markov models (HMM) [Mi et al., 2013]
    These methods were all good at stating whether a predicted annotation was correct or not, but they were not able to make extrapolations, that is, to suggest new annotations absent from the input dataset.

  • 7. The software. BioAnnotationPredictor: a pipeline of steps and tools to predict, validate and analyze biomolecular annotation lists.
    [Pipeline figure: A input matrix -> Data reading -> Statistical method -> Predicted annotation lists -> A~ output matrix]

  • 8. Data reading. The software reads the data from the GPDW database and creates the input matrix: the input annotation matrix A in {0, 1}^(m x n), with m rows (genes) and n columns (annotation features). A(i,j) = 1 if gene i is annotated to feature j or to any descendant of j in the considered ontology structure (true path rule); A(i,j) = 0 otherwise (it is unknown).
    Example:
              feat 1  feat 2  feat 3  feat 4  ...  feat N
    gene 1      0       0       0       0           0
    gene 2      0       1       1       0           1
    ...
    gene M      0       0       0       0           0

  • 9. Statistical method. The software applies a statistical method (Truncated Singular Value Decomposition; Semantically Improved SVD with gene clustering; Semantically Improved SVD with clustering and term-term similarity weights) to the binary input matrix A, and returns a real-valued output matrix A~. Every element of the A matrix is then compared to its corresponding element of the A~ matrix.

  • 10. After the computation, we compare each A(i,j) element to A~(i,j):
    Input A:       Output A~:
    0 0 0 0 0      0.1 0.3 0.6 0.5 0.2
    0 1 1 0 1      0.6 0.8 0.1 0.9 0.8
    0 0 0 0 0      0.3 0.2 0.4 0.6 0.8
    if A(i,j) = 1 and A~(i,j) > τ : AC (TP)
    if A(i,j) = 1 and A~(i,j) ≤ τ : AR (FN)
    if A(i,j) = 0 and A~(i,j) ≤ τ : NAC (TN)
    if A(i,j) = 0 and A~(i,j) > τ : AP (FP)
    AC: Annotation Confirmed (in input: yes, in output: yes); AR: Annotation to be Reviewed (yes, no); NAC: No Annotation Confirmed (no, no); AP: Annotation Predicted (no, yes). The threshold τ minimizes the sum of the APs and the ARs.

  • 11. The Annotations Predicted, AP (FP), are the annotations absent in input and predicted by our software: we suggest them as present. We record them in ranked lists:
    Rank   Annotation ID   Likelihood value
    1      218405          0.9742584
    2      222571          0.8545574
    ...
    n      203145          0.1673128
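A minimal sketch of the prediction step described on slides 8-11, assuming a NumPy implementation: the toy matrix, the truncation level k, and the threshold grid are hypothetical (the thesis selects k with a ROC-based procedure and works on much larger GPDW matrices), while the rule for τ follows slide 10 (minimize the sum of APs and ARs).

```python
import numpy as np

# Toy gene-to-term annotation matrix A (hypothetical data, 5 genes x 4 features);
# 1 = known annotation, 0 = unknown (not necessarily absent).
A = np.array([
    [1, 1, 1, 0],
    [1, 1, 0, 0],
    [1, 1, 1, 0],
    [0, 0, 0, 1],
    [0, 1, 0, 0],
], dtype=float)

def truncated_svd_reconstruction(A, k):
    """Rank-k approximation A~ = U_k S_k V_k^T of A (slides 12-13)."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    return U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]

k = 2                    # truncation level; the thesis picks k via a ROC-based procedure
A_tilde = truncated_svd_reconstruction(A, k)

def ap_plus_ar(tau):
    """Sum of Annotations Predicted (FP) and Annotations to be Reviewed (FN) at threshold tau."""
    ap = np.sum((A == 0) & (A_tilde > tau))
    ar = np.sum((A == 1) & (A_tilde <= tau))
    return ap + ar

# Threshold tau that minimizes APs + ARs (slide 10), found on a simple grid.
tau = min(np.linspace(A_tilde.min(), A_tilde.max(), 101), key=ap_plus_ar)

# Ranked list of Annotations Predicted (slide 11): gene-term pairs absent in the
# input but whose reconstructed value exceeds tau.
ap_pairs = [(i, j, A_tilde[i, j]) for i, j in zip(*np.where((A == 0) & (A_tilde > tau)))]
for rank, (i, j, score) in enumerate(sorted(ap_pairs, key=lambda p: -p[2]), start=1):
    print(f"{rank}\tgene {i}\tfeature {j}\tlikelihood {score:.4f}")
```

On the real GPDW matrices the same procedure yields the long prioritized lists of slide 11; here it only illustrates the mechanics.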
  • 12. Truncated Singular Value Decomposition (tSVD). An annotation prediction is performed by computing a reduced rank-k approximation A~ of the annotation matrix A (where 0 < k < r, with r the number of non-zero singular values of A, i.e. the rank of A).

  • 13. Truncated Singular Value Decomposition (tSVD). Only the first, most important k columns of A are used for the reconstruction (where 0 < k < r, with r the number of non-zero singular values of A, i.e. the rank of A). In [P. Khatri et al., "A semantic analysis of the annotations of the human genome", Bioinformatics, 2005], the authors argued that the study of the matrix A shows the semantic relationships of the gene-function associations: a large value of A~(i,j) suggests that gene i should be annotated to term j, whereas a value close to zero suggests the opposite.

  • 14. Truncated Singular Value Decomposition (tSVD). We departed from this method, developed by Khatri et al. (2005) at Wayne State University, Detroit, and implemented it. Improvement: Khatri et al. used a fixed SVD truncation level k = 500; we developed a method for automated, data-driven selection of k based on the Receiver Operating Characteristic (ROC) curve. We obtained better results, shown in several publications.

  • 15. Truncated SVD with gene clustering (SIM1). Semantically improved (SIM1) version of the Truncated SVD, based on gene clustering [P. Drineas et al., "Clustering large graphs via the singular value decomposition", Machine Learning, 2004]. Inspiring idea: similar genes can be grouped into clusters that have different weights.

  • 16. Truncated SVD with gene clustering (SIM1).
    1. We choose a number C of clusters, and completely discard the columns of matrix U with index j = C+1, ..., n (we have an algorithm for the choice of C).
    2. Each column u_c of the SVD matrix U represents a cluster, and the value U(i,c) indicates the membership of gene i to the c-th cluster.
    3. For each cluster, first we generate W_c = diag(u_c), and then the modified gene-to-term matrix A_c = W_c A, in which the i-th row of A is weighted by the membership score of the corresponding gene to the c-th cluster.

  • 17. Truncated SVD with gene clustering (SIM1).
    4. Then we compute T_c = A_c^T A_c, and its SVD(T_c).
    5. Then every row of the A~ matrix is computed considering the c-th cluster that minimizes its Euclidean norm distance to the original vector: a~_i = a_i V_{k,c} V_{k,c}^T.
    6. The output matrix is produced.
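A rough sketch of the SIM1 steps on slides 16-17, under assumptions: this is not the thesis code, the number of clusters C and the truncation level k are fixed by hand here (the thesis has its own selection algorithms), and the toy matrix is invented for illustration.

```python
import numpy as np

def sim1_reconstruction(A, C, k):
    """SIM1-style reconstruction (slides 16-17): per-cluster truncated projections.
    A: binary gene-to-term matrix (m x n); C: number of clusters; k: truncation level."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)

    # Steps 1-2: each of the first C columns of U defines a cluster;
    # U[i, c] is the membership of gene i in cluster c.
    projections = []
    for c in range(C):
        W_c = np.diag(U[:, c])            # Step 3: W_c = diag(u_c)
        A_c = W_c @ A                     #         A_c = W_c A (rows weighted by membership)
        T_c = A_c.T @ A_c                 # Step 4: T_c = A_c^T A_c
        _, _, Vt_c = np.linalg.svd(T_c)   #         eigenvectors of T_c via its SVD
        V_kc = Vt_c[:k, :].T              # first k eigenvectors (n x k)
        projections.append(V_kc @ V_kc.T)

    # Step 5: each row a_i is reconstructed with the cluster whose projection
    # keeps it closest (Euclidean norm) to the original row.
    A_tilde = np.zeros_like(A, dtype=float)
    for i in range(A.shape[0]):
        a_i = A[i, :]
        best = min(projections, key=lambda P: np.linalg.norm(a_i - a_i @ P))
        A_tilde[i, :] = a_i @ best        # a~_i = a_i V_{k,c} V_{k,c}^T
    return A_tilde                        # Step 6: output matrix

# Hypothetical toy example: 4 genes x 5 terms, C = 2 clusters, k = 2.
A = np.array([[0, 1, 1, 0, 1],
              [0, 1, 1, 0, 0],
              [1, 0, 0, 1, 0],
              [1, 0, 0, 0, 0]], dtype=float)
print(sim1_reconstruction(A, C=2, k=2))
```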
  • 18. Truncated SVD with gene clustering and term-similarity weights (SIM2). Semantically improved (SIM2) version of the Truncated SVD, based on gene clustering and term-term similarity weights [P. Resnik, "Using information content to evaluate semantic similarity in a taxonomy", arXiv.org, 1995]. Inspiring idea: functionally similar terms should be annotated to the same genes.

  • 19. Truncated SVD with gene clustering and term-similarity weights (SIM2). In the algorithm shown before, we would add the following step:
    6. a) Furthermore, to obtain a more accurate clustering, we compute the eigenvectors of the matrix G~ = A S A^T, where the real n x n matrix S is the term-similarity matrix. Starting from a pair of ontology terms, j1 and j2, the term functional similarity S(j1, j2) can be calculated using different methods; our similarity is based on the Resnik measure [P. Resnik, "Using information content to evaluate semantic similarity in a taxonomy", arXiv.org, 1995] (a sketch follows at the end of this transcript).

  • 20. Other methods. With some colleagues at Politecnico di Milano we also implemented other methods (not included in this thesis):
    - Probabilistic Latent Semantic Analysis (pLSA)
    - Latent Dirichlet Allocation with Gibbs sampling (LDA)
    And with some colleagues at the University of California, Irvine, we have been trying to design and implement other models:
    - Auto-Encoder Deep Neural Network

  • 21. After the computation, we compare each A(i,j) element to A~(i,j) (as on slide 10):
    Input A:       Output A~:
    0 0 0 0 0      0.1 0.3 0.6 0.5 0.2
    0 1 1 0 1      0.6 0.8 0.1 0.9 0.8
    0 0 0 0 0      0.3 0.2 0.4 0.6 0.8
    if A(i,j) = 1 and A~(i,j) > τ : AC (TP)
    if A(i,j) = 1 and A~(i,j) ≤ τ : AR (FN)
    if A(i,j) = 0 and A~(i,j) ≤ τ : NAC (TN)
    if A(i,j) = 0 and A~(i,j) > τ : AP (FP)
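A sketch of the term-term similarity weighting used by SIM2 (slides 18-19), under heavy assumptions: the toy ontology, its annotation counts, and the use of the root's count as the corpus total are all made up for the example, and the thesis may compute the Resnik measure and normalize S differently on the Gene Ontology.

```python
import numpy as np
from math import log

# Hypothetical toy ontology: term -> parents (a DAG), plus annotation counts that
# already include descendants (true path rule); 'root' covers every annotation.
parents = {"root": [], "membrane": ["root"], "mito_membrane": ["membrane"],
           "nucleus": ["root"], "binding": ["root"]}
counts = {"root": 100, "membrane": 30, "mito_membrane": 10, "nucleus": 25, "binding": 40}
terms = list(parents)

def ancestors(t):
    """Term t together with all of its ancestors in the DAG."""
    seen, stack = set(), [t]
    while stack:
        u = stack.pop()
        if u not in seen:
            seen.add(u)
            stack.extend(parents[u])
    return seen

def ic(t):
    """Information content IC(t) = -log p(t), with p(t) estimated from annotation counts."""
    return -log(counts[t] / counts["root"])

def resnik(t1, t2):
    """Resnik similarity: IC of the most informative common ancestor of t1 and t2."""
    return max(ic(a) for a in ancestors(t1) & ancestors(t2))

# Term-term similarity matrix S (n x n), as used on slide 19.
S = np.array([[resnik(t1, t2) for t2 in terms] for t1 in terms])

# Toy gene-to-term matrix A aligned with `terms`, and the weighted matrix
# G~ = A S A^T whose eigenvectors drive the more accurate clustering in SIM2.
A = np.random.default_rng(0).integers(0, 2, size=(6, len(terms))).astype(float)
G_tilde = A @ S @ A.T
eigenvalues, eigenvectors = np.linalg.eigh(G_tilde)  # eigenvectors of G~ (slide 19, step 6.a)
print(G_tilde.shape, eigenvalues.shape)
```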
