Biological Sequences and Hidden Markov Models
CBPS7711, Sept 9, 2010
Sonia Leach, PhD, Assistant Professor
Center for Genes, Environment, and Health, National Jewish Health
[email protected]
Slides created from David Pollock's slides from last year's 7711 and the current reading list from the CBPS711 website

Page 1: Biological Sequences and  Hidden Markov Models CBPS7711 Sept 9, 2010

Center for Genes, Environment, and Health

Biological Sequences and Hidden Markov Models

CBPS7711, Sept 9, 2010

Sonia Leach, PhD, Assistant Professor

Center for Genes, Environment, and Health, National Jewish Health

[email protected]

Slides created from David Pollock's slides from last year's 7711 and the current reading list from the CBPS711 website

Page 2: Biological Sequences and  Hidden Markov Models CBPS7711 Sept 9, 2010

Andrey Markov 1856-1922


Introduction
• Despite complex 3-D structure, biological molecules have a primary linear sequence (DNA, RNA, protein) or a linear sequence of features (CpG islands, models of exons, introns, regulatory regions, genes)
• Hidden Markov Models (HMMs) are probabilistic models for processes that transition through a discrete set of states, each emitting a symbol (a probabilistic finite state machine)
• HMMs exhibit the 'Markov property': the conditional probability distribution of future states of the process depends only upon the present state (memoryless)
• A linear sequence of molecules/features is modelled as a path through states of the HMM, which emit the sequence of molecules/features
• The actual state is hidden and observed only through output symbols

Page 3: Biological Sequences and  Hidden Markov Models CBPS7711 Sept 9, 2010

Hidden Markov Model
• Finite set of N states X
• Finite set of M observations O
• Parameter set λ = (A, B, π)
  – Initial state distribution πi = Pr(X1 = i)
  – Transition probability aij = Pr(Xt = j | Xt-1 = i)
  – Emission probability bik = Pr(Ot = k | Xt = i)

Example: states {1, 2, 3}, N = 3, M = 2
π = (0.25, 0.55, 0.2)

A = | 1.0  0    0   |        B = | 0.5   0.5  |
    | 0    0.9  0.1 |            | 0.25  0.75 |
    | 0    0.2  0.8 |            | 0.9   0.1  |
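The parameter set λ = (A, B, π) maps directly onto code. A minimal Python sketch using the example matrices on this slide (reconstructed from the garbled slide text, so treat the exact values as illustrative), with the basic sanity check that every probability row sums to 1:

```python
# A hidden Markov model is fully specified by lambda = (A, B, pi).
pi = [0.25, 0.55, 0.2]          # initial state distribution, pi[i] = Pr(X1 = i)
A = [[1.0, 0.0, 0.0],           # A[i][j] = Pr(X_t = j | X_{t-1} = i)
     [0.0, 0.9, 0.1],
     [0.0, 0.2, 0.8]]
B = [[0.5, 0.5],                # B[i][k] = Pr(O_t = k | X_t = i)
     [0.25, 0.75],
     [0.9, 0.1]]

def is_stochastic(rows, tol=1e-9):
    """Every row of a probability table must sum to 1."""
    return all(abs(sum(r) - 1.0) < tol for r in rows)

assert is_stochastic([pi]) and is_stochastic(A) and is_stochastic(B)
```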

Page 4: Biological Sequences and  Hidden Markov Models CBPS7711 Sept 9, 2010

Hidden Markov Model
• Finite set of N states X
• Finite set of M observations O
• Parameter set λ = (A, B, π)
  – Initial state distribution πi = Pr(X1 = i)
  – Transition probability aij = Pr(Xt = j | Xt-1 = i)
  – Emission probability bik = Pr(Ot = k | Xt = i)

Hidden Markov Model (HMM) as a graphical model: Xt-1 → Xt (transitions) with Xt → Ot (emissions)

Example: states {1, 2, 3}, N = 3, M = 2
π = (0.25, 0.55, 0.2)

A = | 1.0  0    0   |        B = | 0.5   0.5  |
    | 0    0.9  0.1 |            | 0.25  0.75 |
    | 0    0.2  0.8 |            | 0.9   0.1  |

Page 5: Biological Sequences and  Hidden Markov Models CBPS7711 Sept 9, 2010

Probabilistic Graphical Models

All four models unroll over Time; they differ in Observability and Utility:
• Markov Process (MP): state chain Xt-1 → Xt
• Hidden Markov Model (HMM): Observability — hidden state chain Xt-1 → Xt with emissions Xt → Ot
• Markov Decision Process (MDP): Utility — state chain with actions At and utilities Ut
• Partially Observable Markov Decision Process (POMDP): Observability and Utility — hidden states with observations Ot, actions At, and utilities Ut

Page 6: Biological Sequences and  Hidden Markov Models CBPS7711 Sept 9, 2010

Three basic problems of HMMs
1. Given the observation sequence O = O1, O2, …, On, how do we compute Pr(O|λ)?
2. Given the observation sequence, how do we choose the corresponding state sequence X = X1, X2, …, Xn which is optimal?
3. How do we adjust the model parameters λ to maximize Pr(O|λ)?

Page 7: Biological Sequences and  Hidden Markov Models CBPS7711 Sept 9, 2010

• Probability of O is the sum over all state sequences:
  Pr(O|λ) = Σall X Pr(O|X, λ) Pr(X|λ)
          = Σall X πx1 bx1o1 ax1x2 bx2o2 … axn-1xn bxnon
• Efficient dynamic programming algorithm to do this: Forward algorithm (Baum and Welch)

Example: states {1, 2, 3}, N = 3, M = 2
π = (0.25, 0.55, 0.2)

A = | 1.0  0    0   |        B = | 0.5   0.5  |
    | 0    0.9  0.1 |            | 0.25  0.75 |
    | 0    0.2  0.8 |            | 0.9   0.1  |

πi = Pr(X1 = i)
aij = Pr(Xt = j | Xt-1 = i)
bik = Pr(Ot = k | Xt = i)

Page 8: Biological Sequences and  Hidden Markov Models CBPS7711 Sept 9, 2010

A Simple HMM
CpG islands: in one state there is a much higher probability of being C or G

CpG state emissions:     G .3, C .3, A .2, T .2
Non-CpG state emissions: G .1, C .1, A .4, T .4
Transitions: CpG→CpG 0.8, CpG→Non-CpG 0.2, Non-CpG→Non-CpG 0.9, Non-CpG→CpG 0.1

From David Pollock

Page 9: Biological Sequences and  Hidden Markov Models CBPS7711 Sept 9, 2010

The Forward Algorithm
Probability of a Sequence is the Sum of All Paths that Can Produce It
(CpG / Non-CpG model from the previous slide)

Adapted from David Pollock's

First symbol O1 = G: αCpG = .3, αNon-CpG = .1
Pr(G|λ) = πC bCG + πN bNG = .5*.3 + .5*.1
For convenience, let's drop the 0.5 for now and add it in later

Page 10: Biological Sequences and  Hidden Markov Models CBPS7711 Sept 9, 2010

The Forward Algorithm
Probability of a Sequence is the Sum of All Paths that Can Produce It

Adapted from David Pollock's

For O = GC there are 4 possible state sequences: CC, NC, CN, NN
G: αC = .3                          αN = .1
C: αC = (.3*.8 + .1*.1)*.3 = .075   αN = (.3*.2 + .1*.9)*.1 = .015

Page 11: Biological Sequences and  Hidden Markov Models CBPS7711 Sept 9, 2010

The Forward Algorithm
Probability of a Sequence is the Sum of All Paths that Can Produce It

Adapted from David Pollock's

For O = GCG there are 8 possible state sequences: CCC, CCN, CNC, CNN, NCC, NCN, NNC, NNN
G: αC = .3                               αN = .1
C: αC = (.3*.8 + .1*.1)*.3 = .075        αN = (.3*.2 + .1*.9)*.1 = .015
G: αC = (.075*.8 + .015*.1)*.3 = .0185   αN = (.075*.2 + .015*.9)*.1 = .0029

Page 13: Biological Sequences and  Hidden Markov Models CBPS7711 Sept 9, 2010

The Forward Algorithm
Probability of a Sequence is the Sum of All Paths that Can Produce It

Adapted from David Pollock's

Full forward trellis for O = GCGAA:
G: αC = .3                                αN = .1
C: αC = (.3*.8 + .1*.1)*.3 = .075         αN = (.3*.2 + .1*.9)*.1 = .015
G: αC = (.075*.8 + .015*.1)*.3 = .0185    αN = (.075*.2 + .015*.9)*.1 = .0029
A: αC = (.0185*.8 + .0029*.1)*.2 = .003   αN = (.0185*.2 + .0029*.9)*.4 = .0025
A: αC = (.003*.8 + .0025*.1)*.2 = .0005   αN = (.003*.2 + .0025*.9)*.4 = .0011

Page 14: Biological Sequences and  Hidden Markov Models CBPS7711 Sept 9, 2010

The Forward Algorithm
Probability of a Sequence is the Sum of All Paths that Can Produce It

(forward trellis for O = GCGAA as on the previous slide; final values αC = .0005, αN = .0011)

Problem 1: Pr(O| λ)=0.5*.0005 + 0.5*.0011= 8e-4
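The whole trellis calculation fits in a few lines. A minimal Python sketch of the Forward algorithm for this CpG model, following the slides' trick of dropping the 0.5 initial probabilities and adding them back in at the end:

```python
# Forward algorithm for the two-state CpG model on these slides.
# States: 'C' = CpG, 'N' = Non-CpG; pi = (0.5, 0.5) as in the example.
pi = {'C': 0.5, 'N': 0.5}
A = {'C': {'C': 0.8, 'N': 0.2},     # transition probabilities
     'N': {'C': 0.1, 'N': 0.9}}
B = {'C': {'G': 0.3, 'C': 0.3, 'A': 0.2, 'T': 0.2},   # emission probabilities
     'N': {'G': 0.1, 'C': 0.1, 'A': 0.4, 'T': 0.4}}

def forward(obs):
    """Pr(O | lambda): sum over all paths, computed by dynamic programming."""
    # As on the slides, drop the 0.5 initial probabilities for now...
    alpha = {s: B[s][obs[0]] for s in pi}
    for o in obs[1:]:
        alpha = {j: sum(alpha[i] * A[i][j] for i in alpha) * B[j][o]
                 for j in pi}
    # ...and add them back in at the end.
    return sum(pi[s] * alpha[s] for s in pi)

print(round(forward("GCGAA"), 5))  # 0.00084, i.e. ~8e-4 as on the slide
```

Each step touches every (previous state, next state) pair once, so the cost is O(n·N²) instead of the O(Nⁿ) sum over all paths.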

Page 15: Biological Sequences and  Hidden Markov Models CBPS7711 Sept 9, 2010

The Forward Algorithm
Probability of a Sequence is the Sum of All Paths that Can Produce It

(forward trellis for O = GCGAA as on the previous slides; final values αC = .0005, αN = .0011)

Problem 2: What is optimal state sequence?


Page 17: Biological Sequences and  Hidden Markov Models CBPS7711 Sept 9, 2010

The Viterbi Algorithm
Most Likely Path (use max instead of sum)

Adapted from David Pollock's (note error in formulas on his)

Viterbi trellis for O = GCGAA:
G: δC = .3                                    δN = .1
C: δC = max(.3*.8, .1*.1)*.3 = .072           δN = max(.3*.2, .1*.9)*.1 = .009
G: δC = max(.072*.8, .009*.1)*.3 = .0173      δN = max(.072*.2, .009*.9)*.1 = .0014
A: δC = max(.0173*.8, .0014*.1)*.2 = .0028    δN = max(.0173*.2, .0014*.9)*.4 = .0014
A: δC = max(.0028*.8, .0014*.1)*.2 = .00044   δN = max(.0028*.2, .0014*.9)*.4 = .0005

Page 18: Biological Sequences and  Hidden Markov Models CBPS7711 Sept 9, 2010

The Viterbi Algorithm
Most Likely Path: Backtracking

Adapted from David Pollock's

(Same trellis as the previous slide, with the winning argument of each max recorded; tracing back from the best final value δN = .0005 recovers the most likely path C, C, C, N, N.)
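The backtracking step above only needs one extra bookkeeping table. A Python sketch of Viterbi for this CpG model (dropping π as the slides do): the forward recurrence with max in place of sum, plus backpointers to recover the path.

```python
# Viterbi for the two-state CpG model: max instead of sum, with backpointers.
A = {'C': {'C': 0.8, 'N': 0.2},     # transition probabilities
     'N': {'C': 0.1, 'N': 0.9}}
B = {'C': {'G': 0.3, 'C': 0.3, 'A': 0.2, 'T': 0.2},   # emission probabilities
     'N': {'G': 0.1, 'C': 0.1, 'A': 0.4, 'T': 0.4}}

def viterbi(obs):
    """Return (most likely state path, its score), dropping pi as the slides do."""
    delta = {s: B[s][obs[0]] for s in A}
    backptrs = []                  # backptrs[t][j] = best predecessor of j at t
    for o in obs[1:]:
        ptr, new_delta = {}, {}
        for j in A:
            best = max(A, key=lambda i: delta[i] * A[i][j])
            ptr[j] = best
            new_delta[j] = delta[best] * A[best][j] * B[j][o]
        backptrs.append(ptr)
        delta = new_delta
    # Trace back from the highest-scoring final state.
    state = max(delta, key=delta.get)
    path = [state]
    for ptr in reversed(backptrs):
        path.append(ptr[path[-1]])
    return "".join(reversed(path)), delta[state]

print(viterbi("GCGAA")[0])  # CCCNN
```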

Page 19: Biological Sequences and  Hidden Markov Models CBPS7711 Sept 9, 2010

Forward-backward algorithm

(forward trellis for O = GCGAA as on the earlier slides)

Problem 3: How to learn the model?
The Forward algorithm calculated Pr(O1..t, Xt = i | λ)

Page 20: Biological Sequences and  Hidden Markov Models CBPS7711 Sept 9, 2010

Parameter estimation by Baum-Welch Forward Backward Algorithm

Forward variable αt(i) = Pr(O1..t, Xt = i | λ)
Backward variable βt(i) = Pr(Ot+1..n | Xt = i, λ)

Rabiner 1989
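The two variables defined above can be computed with mirror-image recursions. A Python sketch for the CpG model, with the standard consistency check that Σi αt(i)·βt(i) = Pr(O|λ) at every position t; the posterior γt(i) = αt(i)·βt(i)/Pr(O|λ) is the quantity Baum-Welch re-estimation is built from:

```python
# Forward variable alpha_t(i) = Pr(O_1..t, X_t = i | lambda) and backward
# variable beta_t(i) = Pr(O_t+1..n | X_t = i, lambda) for the CpG model.
pi = {'C': 0.5, 'N': 0.5}
A = {'C': {'C': 0.8, 'N': 0.2},
     'N': {'C': 0.1, 'N': 0.9}}
B = {'C': {'G': 0.3, 'C': 0.3, 'A': 0.2, 'T': 0.2},
     'N': {'G': 0.1, 'C': 0.1, 'A': 0.4, 'T': 0.4}}

def forward_vars(obs):
    alphas = [{s: pi[s] * B[s][obs[0]] for s in pi}]
    for o in obs[1:]:
        prev = alphas[-1]
        alphas.append({j: sum(prev[i] * A[i][j] for i in pi) * B[j][o]
                       for j in pi})
    return alphas

def backward_vars(obs):
    betas = [{s: 1.0 for s in pi}]      # beta at the last position is 1
    for o in reversed(obs[1:]):         # o = O_t+1 when filling beta_t
        nxt = betas[0]
        betas.insert(0, {i: sum(A[i][j] * B[j][o] * nxt[j] for j in pi)
                         for i in pi})
    return betas

obs = "GCGAA"
alphas, betas = forward_vars(obs), backward_vars(obs)
p = sum(alphas[-1].values())            # Pr(O | lambda)
# Posterior gamma_t(i) = alpha_t(i) * beta_t(i) / Pr(O | lambda);
# Baum-Welch re-estimates, e.g., pi from gamma at t = 1.
gammas = [{s: a[s] * b[s] / p for s in pi} for a, b in zip(alphas, betas)]
```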

Page 21: Biological Sequences and  Hidden Markov Models CBPS7711 Sept 9, 2010

Homology HMM
• Gene recognition, classify to identify distant homologs
• Common Ancestral Sequence
  – Parameter set λ = (A, B, π), strict left-right model
  – Specially defined set of states: start, stop, match, insert, delete
  – For initial state distribution π, use 'start' state
  – For transition matrix A, use global transition probabilities
  – For emission matrix B:
    • Match: site-specific emission probabilities
    • Insert (relative to ancestor): global emission probabilities
    • Delete: emit nothing
• Multiple Sequence Alignments

Adapted from David Pollock’s

Page 22: Biological Sequences and  Hidden Markov Models CBPS7711 Sept 9, 2010

Homology HMM

start → match → match → … → end, with an insert state between successive match states and a delete state that allows each match to be skipped

Adapted from David Pollock’s

Page 23: Biological Sequences and  Hidden Markov Models CBPS7711 Sept 9, 2010

Homology HMM Example

Three match states with site-specific emission probabilities, each flanked by insert and delete states:
  match: A .1, C .05, D .2, E .08, F .01
  match: A .04, C .1, D .01, E .2, F .02
  match: A .2, C .01, D .05, E .1, F .06

Page 24: Biological Sequences and  Hidden Markov Models CBPS7711 Sept 9, 2010

Eddy, 1998
• Ungapped blocks
• Ungapped blocks where insertion states model intervening sequence between blocks
• Insert/delete states allowed anywhere
• Allow multiple domains, sequence fragments

Page 25: Biological Sequences and  Hidden Markov Models CBPS7711 Sept 9, 2010

Homology HMM
• Uses
  – Find homologs to profile HMM in database
    • Score sequences for match to HMM
      – Not always Pr(O|λ) since some areas may be highly diverged
      – Sometimes use 'highest scoring subsequence'
    • Goal is to find homologs in database
  – Classify sequence using library of profile HMMs
    • Compare alternative models
  – Alignment of additional sequences
  – Structural alignment when alphabet is secondary structure symbols, so can do fold-recognition, etc.

Adapted from David Pollock’s

Page 26: Biological Sequences and  Hidden Markov Models CBPS7711 Sept 9, 2010

Why Hidden Markov Models for MSA?
• Multiple sequence alignment as consensus
  – May have substitutions; not all AAs are equal
  – Could use regular expressions, but how to handle indels?
  – What about variable-length members of the family?

FOS_RAT IPTVTAISTSPDLQWLVQPTLVSSVAPSQTRAPHPYGLPTPSTGAYARAGVV 112FOS_MOUSE IPTVTAISTSPDLQWLVQPTLVSSVAPSQTRAPHPYGLPTQSAGAYARAGMV 112

FOS_RAT IPTVTAISTSPDLQWLVQPTLVSSVAPSQTRAPHPYGLPTPS-TGAYARAGVV 112FOS_MOUSE IPTVTAISTSPDLQWLVQPTLVSSVAPSQTRAPHPYGLPTQS-AGAYARAGMV 112FOS_CHICK VPTVTAISTSPDLQWLVQPTLISSVAPSQNRG-HPYGVPAPAPPAAYSRPAVL 112

FOS_RAT IPTVTAISTSPDLQWLVQPTLVSSVAPSQ-------TRAPHPYGLPTPS-TGAYARAGVV 112FOS_MOUSE IPTVTAISTSPDLQWLVQPTLVSSVAPSQ-------TRAPHPYGLPTQS-AGAYARAGMV 112FOS_CHICK VPTVTAISTSPDLQWLVQPTLISSVAPSQ-------NRG-HPYGVPAPAPPAAYSRPAVL 112FOSB_MOUSE VPTVTAITTSQDLQWLVQPTLISSMAQSQGQPLASQPPAVDPYDMPGTS----YSTPGLS 110FOSB_HUMAN VPTVTAITTSQDLQWLVQPTLISSMAQSQGQPLASQPPVVDPYDMPGTS----YSTPGMS 110

Page 27: Biological Sequences and  Hidden Markov Models CBPS7711 Sept 9, 2010

Why Hidden Markov Models?
• Rather than a consensus sequence, which describes only the most common amino acid per position, HMMs allow more than one amino acid to appear at each position
• Rather than profiles as position-specific scoring matrices (PSSMs), which assign a probability to each amino acid at each position of the domain and slide a fixed-length profile along a longer sequence to calculate a score, HMMs model the probability of variable-length sequences
• Rather than regular expressions, which can capture variable-length sequences yet specify only a limited subset of amino acids per position, HMMs quantify the difference among using different amino acids at each position

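The PSSM contrast can be made concrete. A minimal Python sketch of sliding a fixed-length log-odds profile along a longer sequence; the profile columns and background frequencies below are made up for illustration:

```python
import math

# Hypothetical 3-column PSSM over a DNA alphabet (values are illustrative).
pssm = [{'A': 0.6, 'C': 0.2, 'G': 0.1, 'T': 0.1},
        {'A': 0.1, 'C': 0.7, 'G': 0.1, 'T': 0.1},
        {'A': 0.1, 'C': 0.1, 'G': 0.7, 'T': 0.1}]
background = {'A': 0.25, 'C': 0.25, 'G': 0.25, 'T': 0.25}

def window_score(window):
    """Log-odds score of one fixed-length window against the PSSM."""
    return sum(math.log2(col[ch] / background[ch])
               for col, ch in zip(pssm, window))

def best_hit(seq):
    """Slide the fixed-length profile along the sequence; keep the best window."""
    w = len(pssm)
    return max(range(len(seq) - w + 1), key=lambda i: window_score(seq[i:i + w]))

print(best_hit("TTACGTT"))  # 2: the ACG window matches the profile best
```

Note the fixed window length: unlike an HMM, this scheme has no natural way to score an insertion or deletion inside the profile.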

Page 28: Biological Sequences and  Hidden Markov Models CBPS7711 Sept 9, 2010

Model Comparison
• Based on P(D | θ, M)
  – For ML, take Pmax(D | θ, M)
    • Usually ln Pmax(D | θ, M) to avoid numeric error
  – For heuristics, "score" is log2 P(D | θfixed, M)
  – For Bayesian, calculate
      Pmax(θ, M | D) = P(D | θ, M) · P(θ) · P(M) / P(D)
    – Uses 'prior' information on parameters

Adapted from David Pollock’s

Page 29: Biological Sequences and  Hidden Markov Models CBPS7711 Sept 9, 2010

Parameters, θ
• Types of parameters
  – Amino acid distributions for positions (match states)
  – Global AA distributions for insert states
  – Order of match states
  – Transition probabilities
  – Phylogenetic tree topology and branch lengths
  – Hidden states (integrate or augment)
• Wander parameter space (search)
  – Maximize, or move according to posterior probability (Bayes)

Adapted from David Pollock’s

Page 30: Biological Sequences and  Hidden Markov Models CBPS7711 Sept 9, 2010

Expectation Maximization (EM)
• Classic algorithm to fit probabilistic model parameters with unobservable states
• Two stages
  – Maximization: if the hidden variables (states) are known, maximize the model parameters with respect to that knowledge
  – Expectation: if the model parameters are known, find the expected values of the hidden variables (states)
• Works well even with, e.g., Bayesian approaches to find near-equilibrium space

Adapted from David Pollock’s

Page 31: Biological Sequences and  Hidden Markov Models CBPS7711 Sept 9, 2010

Homology HMM EM
• Start with heuristic MSA (e.g., ClustalW)
• Maximize
  – Match states are residues aligned in most sequences
  – Amino acid frequencies observed in columns
• Expectation
  – Realign all the sequences given the model
• Repeat until convergence
• Problem: local, not global, optimization
  – Use procedures to check how it worked
Adapted from David Pollock's

Page 32: Biological Sequences and  Hidden Markov Models CBPS7711 Sept 9, 2010

Model Comparison
• Determining significance depends on comparing two models (family vs non-family)
  – Usually a null model, H0, and a test model, H1
  – Models are nested if H0 is a subset of H1
  – If not nested:
    • Akaike Information Criterion (AIC) [similar to empirical Bayes] or
    • Bayes Factor (BF) [but be careful]
• Generating a null distribution of the statistic
  – Z-factor, bootstrapping, χ2, parametric bootstrapping, posterior predictive

Adapted from David Pollock’s
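For the non-nested case, AIC is simple to compute: 2k − 2 ln L, where k is the number of free parameters and ln L the maximized log-likelihood. A Python sketch with made-up fit results (the numbers are illustrative, not from any real model):

```python
# AIC = 2k - 2 ln L: penalizes extra parameters, rewards better fit.
def aic(k, log_likelihood):
    return 2 * k - 2 * log_likelihood

# Hypothetical fits: H1 (family model) has more parameters than H0 (null).
aic_h0 = aic(k=3, log_likelihood=-120.0)   # 246.0
aic_h1 = aic(k=10, log_likelihood=-100.0)  # 220.0

# Lower AIC wins: here the fit improvement outweighs the 7 extra parameters.
print(min(("H0", aic_h0), ("H1", aic_h1), key=lambda t: t[1])[0])  # H1
```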

Page 33: Biological Sequences and  Hidden Markov Models CBPS7711 Sept 9, 2010

Z Test Method
• Database of known negative controls
  – E.g., non-homologous (NH) sequences
  – Assume NH scores ~ N(μ, σ)
    • i.e., you are modeling known NH sequence scores as a normal distribution
  – Set appropriate significance level for multiple comparisons (more below)
• Problems
  – Is homology certain?
  – Is it the appropriate null model?
    • Normal distribution often not a good approximation
  – Parameter control hard: e.g., length distribution

Adapted from David Pollock’s
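The Z test above amounts to fitting a normal distribution to null scores and asking how extreme the query score is under it. A minimal Python sketch on made-up scores (the numbers are illustrative, not real data):

```python
import math
import statistics

# Scores of known non-homologous sequences (illustrative values).
null_scores = [12.1, 9.8, 11.4, 10.2, 8.9, 10.7, 9.5, 11.0]
mu = statistics.mean(null_scores)
sigma = statistics.stdev(null_scores)

def z_and_pvalue(score):
    """Z score of a query and its one-sided (upper-tail) p-value under N(mu, sigma)."""
    z = (score - mu) / sigma
    p = 0.5 * math.erfc(z / math.sqrt(2))
    return z, p

z, p = z_and_pvalue(18.3)
print(f"z = {z:.2f}, p = {p:.2e}")
```

As the slide warns, real score distributions are often not normal, so this p-value should be treated as a rough screen, not a final significance call.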

Page 34: Biological Sequences and  Hidden Markov Models CBPS7711 Sept 9, 2010

Bootstrapping and Parametric Models
• Random sequence sampled from the same set of emission probability distributions
  – Same length is easy
  – Bootstrapping is re-sampling columns
  – Parametric uses estimated frequencies; may include variance, tree, etc.
    • More flexible, can have a more complex null
    • Pseudocounts of global frequencies if data limited
• Insertions relatively hard to model
  – What frequencies for insert states? Global?

Adapted from David Pollock’s
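Column re-sampling is a one-liner per row. A Python sketch of column bootstrapping on a toy alignment (the sequences are illustrative): draw columns with replacement to build a null alignment of the same shape, preserving the per-column residue correlations.

```python
import random

# Toy alignment: three rows, five columns (illustrative sequences).
alignment = ["ACGTA",
             "ACGTT",
             "TCGAA"]

def bootstrap_columns(aln, rng=random):
    """Resample alignment columns with replacement, preserving the rows."""
    ncols = len(aln[0])
    picks = [rng.randrange(ncols) for _ in range(ncols)]
    return ["".join(row[j] for j in picks) for row in aln]

resampled = bootstrap_columns(alignment)
# Same shape as the original; every column is a copy of some original column.
assert len(resampled) == len(alignment)
assert all(len(r) == len(alignment[0]) for r in resampled)
```

Re-scoring many such resampled alignments against the model yields an empirical null distribution for the test statistic.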

Page 35: Biological Sequences and  Hidden Markov Models CBPS7711 Sept 9, 2010

Homology HMM Resources
• UCSC (Haussler)
  – SAM: align, secondary structure predictions, HMM parameters, etc.
• WUSTL/Janelia (Eddy)
  – Pfam: database of pre-computed HMM alignments for various proteins
  – HMMer: program for building HMMs

Adapted from David Pollock’s

Page 36: Biological Sequences and  Hidden Markov Models CBPS7711 Sept 9, 2010


Page 37: Biological Sequences and  Hidden Markov Models CBPS7711 Sept 9, 2010

Why Hidden Markov Models?
• Multiple sequence alignment as consensus
  – May have substitutions; not all AAs are equal
  – Could use regular expressions, but how to handle indels?
  – What about variable-length members of the family?
  – (but don't accept everything – typically introduce gap penalty)


FOS_RAT IPTVTAISTSPDLQWLVQPTLVSSVAPSQTRAPHPYGLPTPSTGAYARAGVV 112FOS_MOUSE IPTVTAISTSPDLQWLVQPTLVSSVAPSQTRAPHPYGLPTQSAGAYARAGMV 112

FOS_RAT IPTVTAISTSPDLQWLVQPTLVSSVAPSQTRAPHPYGLPTPS-TGAYARAGVV 112FOS_MOUSE IPTVTAISTSPDLQWLVQPTLVSSVAPSQTRAPHPYGLPTQS-AGAYARAGMV 112FOS_CHICK VPTVTAISTSPDLQWLVQPTLISSVAPSQNRG-HPYGVPAPAPPAAYSRPAVL 112

FOS_RAT IPTVTAISTSPDLQWLVQPTLVSSVAPSQ-------TRAPHPYGLPTPS-TGAYARAGVV 112FOS_MOUSE IPTVTAISTSPDLQWLVQPTLVSSVAPSQ-------TRAPHPYGLPTQS-AGAYARAGMV 112FOS_CHICK VPTVTAISTSPDLQWLVQPTLISSVAPSQ-------NRG-HPYGVPAPAPPAAYSRPAVL 112FOSB_MOUSE VPTVTAITTSQDLQWLVQPTLISSMAQSQGQPLASQPPAVDPYDMPGTS----YSTPGLS 110FOSB_HUMAN VPTVTAITTSQDLQWLVQPTLISSMAQSQGQPLASQPPVVDPYDMPGTS----YSTPGMS 110


Page 39: Biological Sequences and  Hidden Markov Models CBPS7711 Sept 9, 2010


Acknowledgements