Advanced Algorithms and Models for Computational Biology -- a machine learning approach
Page 1: Advanced Algorithms  and Models for  Computational Biology -- a machine learning approach

Advanced Algorithms and Models for
Computational Biology -- a machine learning approach

Computational Genomics II:
Sequence Modeling & Gene Finding with HMM

Eric Xing

Lecture 4, January 30, 2005

Reading: Chap. 3, 5 of the DEKM book; Chap. 9 of the DTW book

Page 2: Advanced Algorithms  and Models for  Computational Biology -- a machine learning approach

Probabilities on Sequences

Let S be the space of DNA or protein sequences of a given length n. Here are some simple assumptions for assigning probabilities to sequences:

Equal frequency assumption: all residues are equally probable at any position; i.e., $P(X_i = r) = P(X_i = q)$ for any two residues r and q, for all i.

This implies that $P(X_i = r) = \theta_r = 1/|\mathcal{A}|$, where $\mathcal{A}$ is the residue alphabet (1/20 for proteins, 1/4 for DNA).

Independence assumption: whether or not a residue occurs at a position is independent of what residues are present at other positions. The probability of a sequence is then

$P(X_1, X_2, \ldots, X_N) = \theta_r \cdot \theta_r \cdots \theta_r = \theta_r^N$

Page 3: Advanced Algorithms  and Models for  Computational Biology -- a machine learning approach

Failure of Equal Frequency Assumption for (real) DNA

For most organisms, the nucleotide composition is significantly different from 0.25 for each nucleotide, e.g.:

H. influenzae:  .31 A, .19 C, .19 G, .31 T
P. aeruginosa:  .17 A, .33 C, .33 G, .17 T
M. jannaschii:  .34 A, .16 C, .16 G, .34 T
S. cerevisiae:  .31 A, .19 C, .19 G, .31 T
C. elegans:     .32 A, .18 C, .18 G, .32 T
H. sapiens:     .30 A, .20 C, .20 G, .30 T

Note the symmetry: A~T and C~G, even though we are counting nucleotides on just one strand. Explanation: although individual biological features may have non-symmetric composition, features are usually distributed roughly randomly with respect to strand, so we get symmetry.

Page 4: Advanced Algorithms  and Models for  Computational Biology -- a machine learning approach

General Hypotheses Regarding Unequal Frequency

Neutralist hypothesis: mutation bias (e.g., due to nucleotide pool composition)

Selectionist hypothesis: natural selection bias

Page 5: Advanced Algorithms  and Models for  Computational Biology -- a machine learning approach

Models for Homogeneous Sequence Entities

Probability models for long "homogeneous" sequence entities, such as: exons (ORFs), introns, intergenic background, protein coiled-coil (and other structural) regions.

Assumptions: no consensus, no recurring string patterns; a distinct but uniform residue composition (i.e., the same for all sites); every site in the entity is an iid sample from the same model.

The model: a single multinomial: $X \sim \text{Multinomial}(1, \vec\theta)$

Page 6: Advanced Algorithms  and Models for  Computational Biology -- a machine learning approach

The Multinomial Model for Sequence

For a site i, define its residue identity to be a multinomial random vector:

$X_i = [X_{i,A}, X_{i,C}, X_{i,G}, X_{i,T}]^T$, where $X_{i,j} \in \{0, 1\}$ and $\sum_{j \in \{A,C,G,T\}} X_{i,j} = 1$,

so that $X_i \sim \text{Multinomial}(1, \vec\theta)$ with $\vec\theta = [\theta_A, \theta_C, \theta_G, \theta_T]$ and $\sum_k \theta_k = 1$.

The probability of an observation $s_i = A$ (i.e., $x_{i,A} = 1$) at site i:

$p(x_{i,A} = 1) = \theta_A$, or in general $p(x_i) = \prod_{k \in \{A,C,G,T\}} \theta_k^{x_{i,k}}$

The probability of a sequence $(x_1, x_2, \ldots, x_N)$:

$P(x_1, x_2, \ldots, x_N) = \prod_{i=1}^{N} p(x_i) = \prod_{i=1}^{N} \prod_k \theta_k^{x_{i,k}} = \prod_k \theta_k^{n_k}$,

where $n_k$ is the number of occurrences of nucleotide k in the sequence.
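As a quick illustration, here is a minimal Python sketch of this model (the function and variable names are illustrative, not from the lecture); in the equal-frequency special case $\vec\theta = (.25, .25, .25, .25)$, it reduces to $(1/4)^N$:

```python
import math
from collections import Counter

def seq_log_prob(seq, theta):
    """log P(x_1..x_N) = sum_k n_k * log(theta_k) under the multinomial model."""
    counts = Counter(seq)                                  # n_k: residue counts
    return sum(n * math.log(theta[r]) for r, n in counts.items())

theta_bg = {"A": 0.31, "C": 0.19, "G": 0.19, "T": 0.31}    # e.g., H. influenzae
print(seq_log_prob("GATTACA", theta_bg))
```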

Page 7: Advanced Algorithms  and Models for  Computational Biology -- a machine learning approach

Parameter Estimation

Maximum likelihood estimation of the multinomial parameters:

$\hat{\vec\theta} = \arg\max_{\vec\theta} P(D \mid \vec\theta)$, i.e., $\{\hat\theta_1, \hat\theta_2, \ldots\} = \arg\max \prod_k \theta_k^{n_k}$ s.t. $\sum_k \theta_k = 1$

It can be shown that $\hat\theta_k^{ML} = n_k / N$.

Bayesian estimation with a Dirichlet prior:

$P(\vec\theta) = C(\vec\alpha) \prod_k \theta_k^{\alpha_k - 1}$

Posterior distribution of $\vec\theta$ under the Dirichlet prior:

$P(\vec\theta \mid x_1, \ldots, x_N) \propto \prod_k \theta_k^{n_k} \prod_k \theta_k^{\alpha_k - 1} = \prod_k \theta_k^{n_k + \alpha_k - 1}$

Posterior mean estimation:

$\hat\theta_k^{Bayes} = \int \theta_k \, P(\vec\theta \mid D) \, d\vec\theta = \frac{n_k + \alpha_k}{N + \sum_{k'} \alpha_{k'}}$
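A small sketch of both estimators for nucleotide data (the Dirichlet hyperparameters below are made up for illustration):

```python
from collections import Counter

def estimate(seq, alpha):
    """Return (ML estimate, posterior mean) of theta for an ACGT sequence."""
    residues = "ACGT"
    n = Counter(seq)
    N = len(seq)
    ml = {r: n[r] / N for r in residues}                        # theta_k^ML = n_k / N
    A = sum(alpha.values())
    bayes = {r: (n[r] + alpha[r]) / (N + A) for r in residues}  # (n_k + a_k)/(N + |a|)
    return ml, bayes

# Uniform pseudocount of 1 per residue (an assumed, illustrative prior):
print(estimate("ACGTACGGGG", alpha={r: 1.0 for r in "ACGT"}))
```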

Page 8: Advanced Algorithms  and Models for  Computational Biology -- a machine learning approach

Models for Homogeneous Sequence Entities, ctd.

Limitations: non-uniform residue composition (e.g., CG-rich regions); non-coding structural regions (MAR, centromere, telomere); di- or tri-nucleotide couplings; estimation bias; evolutionary constraints.

Page 9: Advanced Algorithms  and Models for  Computational Biology -- a machine learning approach

Site Models

Probability models for short sequences, such as: splice sites, translation start sites, promoter elements, protein "motifs".

Assumptions: different examples of sites can be aligned without indels (insertions/deletions), such that they tend to have similar residues in the same positions; drop the equal frequency assumption and instead use position-specific frequencies; retain the independence assumption (for now).

Page 10: Advanced Algorithms  and Models for  Computational Biology -- a machine learning approach

Site Models ctd.

Applies to short segments (<30 residues) where precise residue spacing is structurally or functionally important, and certain positions are highly conserved

Examples: DNA/RNA sequence binding sites for a single protein or RNA molecule; protein internal regions structurally constrained due to folding requirements; or surface regions functionally constrained because they bind certain ligands.

Example: C. elegans splice sites


Page 11: Advanced Algorithms  and Models for  Computational Biology -- a machine learning approach

Nucleotide Counts for 8192 C. elegans 3' Splice Sites

Page 12: Advanced Algorithms  and Models for  Computational Biology -- a machine learning approach

3' Splice Site - C. elegans

Page 13: Advanced Algorithms  and Models for  Computational Biology -- a machine learning approach

5' Splice Sites - C. elegans

Page 14: Advanced Algorithms  and Models for  Computational Biology -- a machine learning approach

Limitation of Homogeneous Site Models

Failure to allow indels means variably spaced subelements are "smeared", e.g.: the branch site, for 3' splice sites; coding sequences, for both 3' and 5' sites.

The independence assumption is usually OK for protein sequences (after correcting for evolutionary relatedness), but often fails for nucleotide sequences; examples: 5' sites (the Burge-Karlin observation); background (dinucleotide correlation, e.g., GC repeats).

Page 15: Advanced Algorithms  and Models for  Computational Biology -- a machine learning approach

Why Correlation?

Splicing involves pairing of a small RNA with the transcript at the 5' splice site. The RNA is complementary to the 5' splice-site consensus sequence.

A mismatch at position -1 tends to destabilize the pairing, and makes it more important for other positions to be correctly paired.

A similar analogy can easily be drawn for other DNA and protein motifs.

Page 16: Advanced Algorithms  and Models for  Computational Biology -- a machine learning approach

Comparing Alternative Probability Models

We will want to consider more than one model at a time, in the following situations: to differentiate between two or more hypotheses about a sequence; to generate increasingly refined probability models that are progressively more accurate.

The first situation arises in testing a biological assertion, e.g., "is this a coding sequence?" We would compare two models:

1. one associated with a hypothesis H_coding, which attaches to a sequence the probability of observing it under the experiment of drawing a random coding sequence from the genome;

2. one associated with a hypothesis H_noncoding, which attaches to a sequence the probability of observing it under the experiment of drawing a random non-coding sequence from the genome.

Page 17: Advanced Algorithms  and Models for  Computational Biology -- a machine learning approach

Likelihood Ratio Test

The posterior probability of a model given data is:

P(M|D) = P(D|M)P(M)/P(D)

Given that all models are equally probable a priori, the posterior probability ratio of two models given the same data reduces to a likelihood ratio:

$LR(M_a, M_0; D) = \frac{P(D \mid M_a)}{P(D \mid M_0)}$

The numerator and the denominator may both be very small! The log likelihood ratio (LLR) is the logarithm of the likelihood ratio:

$LLR(M_a, M_0; D) = \log P(D \mid M_a) - \log P(D \mid M_0)$
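A hedged sketch of this test when both hypotheses are iid multinomial models; the two composition vectors below are invented for illustration:

```python
import math
from collections import Counter

def log_lik(seq, theta):
    """log P(D | M) for an iid multinomial sequence model."""
    return sum(cnt * math.log(theta[r]) for r, cnt in Counter(seq).items())

theta_coding    = {"A": 0.22, "C": 0.28, "G": 0.28, "T": 0.22}  # hypothetical
theta_noncoding = {"A": 0.30, "C": 0.20, "G": 0.20, "T": 0.30}  # hypothetical

seq = "ATGGCGCCGGCTTAA"
llr = log_lik(seq, theta_coding) - log_lik(seq, theta_noncoding)
print("LLR =", llr)   # > 0 favors H_coding; working in logs avoids underflow
```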

Page 18: Advanced Algorithms  and Models for  Computational Biology -- a machine learning approach

The Hidden Markov Model for sequence parsing

Page 19: Advanced Algorithms  and Models for  Computational Biology -- a machine learning approach

Gene Finding

Given un-annotated sequences, delineate: the transcription initiation site, exon-intron boundaries, the transcription termination site, and a variety of other motifs (promoters, poly-A sites, branching sites, etc.).

The tool: the hidden Markov model (HMM). For example:

GAGAACGTGTGAGAGAGAGGCAAGCCGAAAAATCAGCCGCCGAAGGATACACTATCGTCGTCCTTGTCCGACGAACCGGTGGTCATGCAAACAACGCACAGAACAAATTAATTTTCAAATTGTTCAATAAATGTCCCACTTGCTTCTGTTGTTCCCCCCTTTCCGCTAGCGGAATTTTTTATATTCTTTTGGGGGCGCTCTTTCGTTGACTTTTCGAGCACTTTTTCGATTTTCGCGCGCTGTCGAACGGCAGCGTATTTATTTACAATTTTTTTTGTTAGCGGCCGCCGTTGTTTGTTGCAGATACACAGCGCACACATATAAGCTTGCACACTGATGCACACACACCGACACGTTGTCACCGAAATGAACGGGACGGCCATATGACTGGCTGGCGCTCGGTATGTGGGTGCAAGCGAGATACCGCGATCAAGACTCGAACGAGACGGGTCAGCGAGTGATACCGATTCTCTCTCTTTTGCGATTGGGAATAATGCCCGACTTTTTACACTACATGCGTTGGATCTGGTTATTTAATTATGCCATTTTTCTCAGTATATCGGCAATTGGTTGCATTAATTTTGCCGCAAAGTAAGGAACACAAACCGATAGTTAAGATCCAACGTCCCTGCTGCGCCTCGCGTGCACAATTTGCGCCAATTTCCCCCCTTTTCCAGTTTTTTTCAACCCAGCACCGCTCGTCTCTTCCTCTTCTTAACGTTAGCATTCGTACGAGGAACAGTGCTGTCATTGTGGCCGCTGTGTAGCTAAAAAGCGTAATTATTCATTATCTAGCTATCTTTTCGGATATTATTGTCATTTGCCTTTAATCTTGTGTATTTATATGGATGAAACGTGCTATAATAACAATGCAGAATGAAGAACTGAAGAGTTTCAAAACCTAAAAATAATTGGAATATAAAGTTTGGTTTTACAATTTGATAAAACTCTATTGTAAGTGGAGCGTAACATAGGGTAGAAAACAGTGCAAATCAAAGTACCTAAATGGAATACAAATTTTAGTTGTACAATTGAGTAAAATGAGCAAAGCGCCTATTTTGGATAATATTTGCTGTTTACAAGGGGAACATATTCATAATTTTCAGGTTTAGGTTACGCATATGTAGGCGTAAAGAAATAGCTATATTTGTAGAAGTGCATATGCACTTTATAAAAAATTATCCTACATTAACGTATTTTATTTGCTTTAAAACCTATCTGAGATATTCCAATAAGGTAAGTGCAGTAATACAATGTAAATAATTGCAAATAATGTTGTAACTAAATACGTAAACAATAATGTAGAAGTCCGGCTGAAAGCCCCAGCAGCTATAGCCGATATCTATATGATTTAAACTCTTGTCTGCAACGTTCTAATAAATAAATAAAATGCAAAATATAACCTATTGAGACAATACATTTATTTTATTTTTTTATATCATCAATCATCTACTGATTTCTTTCGGTGTATCGCCTAATCCATCTGTGAAATAGAAATGGCGCCACCTAGGTTAAGAAAAGATAAACAGTTGCCTTTAGTTGCATGACTTCCCGCTGGAT


Page 20: Advanced Algorithms  and Models for  Computational Biology -- a machine learning approach

Hidden Markov Models

[Graphical model: a chain of hidden states y_1, y_2, ..., y_T, each emitting an observation x_t.]

The sequence (observations x_1 ... x_T): poly-nucleotides, or a sequence of rolls.
The underlying source (hidden states y_1 ... y_T): genomic entities, or dice.

Page 21: Advanced Algorithms  and Models for  Computational Biology -- a machine learning approach

Example: The Dishonest Casino

A casino has two dice:

Fair die: P(1) = P(2) = P(3) = P(4) = P(5) = P(6) = 1/6
Loaded die: P(1) = P(2) = P(3) = P(4) = P(5) = 1/10, P(6) = 1/2

The casino player switches back-&-forth between the fair and loaded die once every 20 turns.

Game:
1. You bet $1
2. You roll (always with a fair die)
3. The casino player rolls (maybe with the fair die, maybe with the loaded die)
4. Highest number wins $2

Page 22: Advanced Algorithms  and Models for  Computational Biology -- a machine learning approach

Puzzles Regarding the Dishonest Casino

GIVEN: A sequence of rolls by the casino player

1245526462146146136136661664661636616366163616515615115146123562344

QUESTION: How likely is this sequence, given our model of how the casino works? This is the EVALUATION problem in HMMs.

What portion of the sequence was generated with the fair die, and what portion with the loaded die? This is the DECODING question in HMMs

How “loaded” is the loaded die? How “fair” is the fair die? How often does the casino player change from fair to loaded, and back? This is the LEARNING question in HMMs

Page 23: Advanced Algorithms  and Models for  Computational Biology -- a machine learning approach

A Stochastic Generative Model

Observed sequence: 1 4 3 6 6 4

Hidden sequence (a parse or segmentation): B A A A B B

Page 24: Advanced Algorithms  and Models for  Computational Biology -- a machine learning approach

Definition (of HMM)

Observation space: an alphabetic set $\mathcal{C} = \{c_1, c_2, \ldots, c_K\}$, or a Euclidean space $\mathbb{R}^d$.

Index set of hidden states: $\mathcal{I} = \{1, 2, \ldots, M\}$.

Transition probabilities between any two states: $p(y_t^j = 1 \mid y_{t-1}^i = 1) = a_{i,j}$, or $(y_t \mid y_{t-1}^i = 1) \sim \text{Multinomial}(a_{i,1}, a_{i,2}, \ldots, a_{i,M})$, for all $i \in \mathcal{I}$.

Start probabilities: $y_1 \sim \text{Multinomial}(\pi_1, \pi_2, \ldots, \pi_M)$.

Emission probabilities associated with each state: $(x_t \mid y_t^i = 1) \sim \text{Multinomial}(b_{i,1}, b_{i,2}, \ldots, b_{i,K})$, for all $i \in \mathcal{I}$; or in general, $(x_t \mid y_t^i = 1) \sim f(\cdot \mid \theta_i)$, for all $i \in \mathcal{I}$.

[Two equivalent views: the graphical model, a chain y_1 -> y_2 -> ... -> y_T with emissions x_1, ..., x_T; and a state automaton over the hidden states.]
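To make the definition concrete, here is the dishonest-casino HMM (spelled out on page 26) written as arrays; this sketch fixes the conventions (state 0 = Fair, state 1 = Loaded) reused in the later code sketches:

```python
import numpy as np

pi = np.array([0.5, 0.5])          # start probabilities pi_i
A  = np.array([[0.95, 0.05],       # a_ij = P(y_t = j | y_{t-1} = i)
               [0.05, 0.95]])
B  = np.array([[1/6] * 6,          # b_ik = P(roll = k+1 | Fair)
               [1/10] * 5 + [1/2]])  # P(6 | Loaded) = 1/2
```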

Page 25: Advanced Algorithms  and Models for  Computational Biology -- a machine learning approach

Probability of a Parse

Given a sequence x = x_1 ... x_T and a parse y = y_1, ..., y_T, how likely is the parse (given our HMM and the sequence)?

p(x, y) = p(x1……xT, y1, ……, yT) (Joint probability)

= p(y1) p(x1 | y1) p(y2 | y1) p(x2 | y2) … p(yT | yT-1) p(xT | yT)

= p(y1) P(y2 | y1) … p(yT | yT-1) × p(x1 | y1) p(x2 | y2) … p(xT | yT)

= p(y1, ……, yT) p(x1……xT | y1, ……, yT)

$= \pi_{y_1} \prod_{t=2}^{T} a_{y_{t-1}, y_t} \prod_{t=1}^{T} b_{y_t, x_t}$

(using the indicator shorthand $\pi_{y_1} \triangleq \prod_i \pi_i^{y_1^i}$, $a_{y_{t-1}, y_t} \triangleq \prod_{i,j} a_{i,j}^{y_{t-1}^i y_t^j}$, and $b_{y_t, x_t} \triangleq \prod_{i,k} b_{i,k}^{y_t^i x_t^k}$)

Marginal probability:

$p(\mathbf{x}) = \sum_{\mathbf{y}} p(\mathbf{x}, \mathbf{y}) = \sum_{y_1} \sum_{y_2} \cdots \sum_{y_T} \pi_{y_1} \prod_{t=2}^{T} a_{y_{t-1}, y_t} \prod_{t=1}^{T} p(x_t \mid y_t)$

Posterior probability:

$p(\mathbf{y} \mid \mathbf{x}) = p(\mathbf{x}, \mathbf{y}) / p(\mathbf{x})$
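The factored joint probability translates directly into code; a minimal sketch (in log space, to sidestep the underflow issue discussed later):

```python
import numpy as np

def joint_log_prob(x, y, pi, A, B):
    """log p(x, y) = log pi_{y1} + sum_t log a_{y_{t-1},y_t} + sum_t log b_{y_t,x_t}."""
    lp = np.log(pi[y[0]]) + np.log(B[y[0], x[0]])
    for t in range(1, len(x)):
        lp += np.log(A[y[t-1], y[t]]) + np.log(B[y[t], x[t]])
    return lp
```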

Page 26: Advanced Algorithms  and Models for  Computational Biology -- a machine learning approach

The Dishonest Casino Model

Two states: FAIR and LOADED. Transitions: P(stay in the same state) = 0.95; P(switch) = 0.05, in each direction.

Emissions:
P(1|F) = P(2|F) = P(3|F) = P(4|F) = P(5|F) = P(6|F) = 1/6
P(1|L) = P(2|L) = P(3|L) = P(4|L) = P(5|L) = 1/10, P(6|L) = 1/2

Page 27: Advanced Algorithms  and Models for  Computational Biology -- a machine learning approach

Example: the Dishonest Casino

Let the sequence of rolls be: x = 1, 2, 1, 5, 6, 2, 1, 6, 2, 4

Then, what is the likelihood of y = Fair, Fair, Fair, Fair, Fair, Fair, Fair, Fair, Fair, Fair?

(say the initial probs are $a_{0,\text{Fair}} = 1/2$, $a_{0,\text{Loaded}} = 1/2$)

$\tfrac{1}{2} \times P(1 \mid \text{Fair}) \, P(\text{Fair} \mid \text{Fair}) \, P(2 \mid \text{Fair}) \, P(\text{Fair} \mid \text{Fair}) \cdots P(4 \mid \text{Fair})$

$= \tfrac{1}{2} \times (1/6)^{10} \times (0.95)^9 = 0.00000000521158647211 \approx 5.21 \times 10^{-9}$

Page 28: Advanced Algorithms  and Models for  Computational Biology -- a machine learning approach

Example: the Dishonest Casino

So, the likelihood that the die is fair in this entire run is just $5.21 \times 10^{-9}$.

OK, but what is the likelihood of y = Loaded, Loaded, Loaded, Loaded, Loaded, Loaded, Loaded, Loaded, Loaded, Loaded?

$\tfrac{1}{2} \times P(1 \mid \text{Loaded}) \, P(\text{Loaded} \mid \text{Loaded}) \cdots P(4 \mid \text{Loaded})$

$= \tfrac{1}{2} \times (1/10)^8 \times (1/2)^2 \times (0.95)^9 = 0.00000000078781176215 \approx 0.79 \times 10^{-9}$

Therefore, it is after all 6.59 times more likely that the die is fair all the way than that it is loaded all the way.

Page 29: Advanced Algorithms  and Models for  Computational Biology -- a machine learning approach

Example: the Dishonest Casino

Let the sequence of rolls be: x = 1, 6, 6, 5, 6, 2, 6, 6, 3, 6

Now, what is the likelihood of y = F, F, ..., F?

$\tfrac{1}{2} \times (1/6)^{10} \times (0.95)^9 \approx 5.21 \times 10^{-9}$, same as before

What is the likelihood of y = L, L, ..., L?

$\tfrac{1}{2} \times (1/10)^4 \times (1/2)^6 \times (0.95)^9 = 0.00000049238235134735 \approx 5 \times 10^{-7}$

So, it is 100 times more likely the die is loaded
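As a sanity check, a short sketch that recomputes the hand calculations above for this roll sequence, using the casino parameters from page 26:

```python
import numpy as np

pi = np.array([0.5, 0.5])
A  = np.array([[0.95, 0.05], [0.05, 0.95]])
B  = np.array([[1/6] * 6, [1/10] * 5 + [1/2]])
FAIR, LOADED = 0, 1

def joint_prob(x, y):
    """p(x, y) = pi_{y1} b_{y1,x1} * prod_t a_{y_{t-1},y_t} b_{y_t,x_t}."""
    p = pi[y[0]] * B[y[0], x[0]]
    for t in range(1, len(x)):
        p *= A[y[t-1], y[t]] * B[y[t], x[t]]
    return p

rolls = [r - 1 for r in (1, 6, 6, 5, 6, 2, 6, 6, 3, 6)]  # 0-indexed die faces
pF = joint_prob(rolls, [FAIR] * 10)      # ~5.21e-9
pL = joint_prob(rolls, [LOADED] * 10)    # ~4.92e-7
print(pF, pL, pL / pF)                   # ratio ~ 100, as stated above
```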

Page 30: Advanced Algorithms  and Models for  Computational Biology -- a machine learning approach

Three Main Questions on HMMs

1. Evaluation

GIVEN an HMM M, and a sequence x,
FIND Prob(x | M)
ALGO: Forward

2. Decoding

GIVEN an HMM M, and a sequence x,
FIND the sequence y of states that maximizes, e.g., P(y | x, M), or the most probable subsequence of states
ALGO: Viterbi, Forward-backward

3. Learning

GIVEN an HMM M, with unspecified transition/emission probs., and a sequence x,
FIND parameters $\theta = (\pi_i, a_{ij}, \eta_{ik})$ that maximize $P(x \mid \theta)$
ALGO: Baum-Welch (EM)

Page 31: Advanced Algorithms  and Models for  Computational Biology -- a machine learning approach

Applications of HMMs

Some early applications of HMMs: finance (but we never saw them), speech recognition, modelling ion channels.

In the mid-late 1980s HMMs entered genetics and molecular biology, and they are now firmly entrenched.

Some current applications of HMMs to biology: mapping chromosomes, aligning biological sequences, predicting sequence structure, inferring evolutionary relationships, finding genes in DNA sequence.

Page 32: Advanced Algorithms  and Models for  Computational Biology -- a machine learning approach

Typical structure of a gene

Page 33: Advanced Algorithms  and Models for  Computational Biology -- a machine learning approach

GENSCAN (Burge & Karlin)

[State diagram of the GENSCAN model: exon states E0, E1, E2 (one per phase), a single-exon state Es, initial/terminal exon states, intron states I0, I1, I2, plus promoter, 5'UTR, 3'UTR, poly-A, and intergenic-region states, mirrored on the forward (+) and reverse (-) strands; the model assigns $p(\cdot \mid y)$ to stretches of raw genomic sequence like the one on page 19.]

Page 34: Advanced Algorithms  and Models for  Computational Biology -- a machine learning approach

Some Facts About Human Genes

Genes comprise about 3% of the genome. Average gene length: ~8,000 bp. Average of 5-6 exons/gene. Average exon length: ~200 bp. Average intron length: ~2,000 bp. ~8% of genes have a single exon.

Some exons can be as small as 1 or 3 bp. HUMFMR1S is not atypical: 17 exons 40-60 bp long, comprising 3% of a 67,000 bp gene.

Page 35: Advanced Algorithms  and Models for  Computational Biology -- a machine learning approach

The Idea Behind a GHMM GeneFinder

States represent standard gene features: intergenic region, exon, intron, perhaps more (promoter, 5'UTR, 3'UTR, poly-A, ...).

Observations embody state-dependent base composition, dependence, and signal features.

In a GHMM, duration must be included as well. Finally, reading frames and both strands must be dealt with.

[State diagram as on page 33.]

Page 36: Advanced Algorithms  and Models for  Computational Biology -- a machine learning approach

The HMM Algorithms

Questions:

Evaluation: What is the probability of the observed sequence? Forward

Decoding: What is the probability that the hidden state at, say, the 3rd position is k, given the observed sequence? Forward-Backward

Decoding: What is the most likely die sequence? Viterbi

Learning: Under what parameterization are the observed sequences most probable? Baum-Welch (EM)

Page 37: Advanced Algorithms  and Models for  Computational Biology -- a machine learning approach

The Forward Algorithm

We want to calculate P(x), the likelihood of x, given the HMM. Sum over all possible ways of generating x:

$p(\mathbf{x}) = \sum_{\mathbf{y}} p(\mathbf{x}, \mathbf{y}) = \sum_{y_1} \sum_{y_2} \cdots \sum_{y_T} \pi_{y_1} \prod_{t=2}^{T} a_{y_{t-1}, y_t} \prod_{t=1}^{T} p(x_t \mid y_t)$

To avoid summing over an exponential number of paths y, define

$\alpha_t^k \triangleq P(x_1, \ldots, x_t, y_t^k = 1)$  (the forward probability)

The recursion:

$\alpha_t^k = p(x_t \mid y_t^k = 1) \sum_i \alpha_{t-1}^i a_{i,k}$

$P(\mathbf{x}) = \sum_k \alpha_T^k$

Page 38: Advanced Algorithms  and Models for  Computational Biology -- a machine learning approach

The Forward Algorithm – derivation

Compute the forward probability:

$\alpha_t^k = P(x_1, \ldots, x_{t-1}, x_t, y_t^k = 1)$

$= \sum_{y_{t-1}} P(x_1, \ldots, x_{t-1}, y_{t-1}, x_t, y_t^k = 1)$

$= \sum_{y_{t-1}} P(x_1, \ldots, x_{t-1}, y_{t-1}) \, P(y_t^k = 1 \mid x_1, \ldots, x_{t-1}, y_{t-1}) \, P(x_t \mid y_t^k = 1, x_1, \ldots, x_{t-1}, y_{t-1})$

$= \sum_{y_{t-1}} P(x_1, \ldots, x_{t-1}, y_{t-1}) \, P(y_t^k = 1 \mid y_{t-1}) \, P(x_t \mid y_t^k = 1)$

$= P(x_t \mid y_t^k = 1) \sum_i P(x_1, \ldots, x_{t-1}, y_{t-1}^i = 1) \, P(y_t^k = 1 \mid y_{t-1}^i = 1)$

$= P(x_t \mid y_t^k = 1) \sum_i \alpha_{t-1}^i a_{i,k}$

(Chain rule: $P(A, B, C) = P(A) \, P(B \mid A) \, P(C \mid A, B)$; the simplifications use the conditional-independence structure of the HMM.)

Page 39: Advanced Algorithms  and Models for  Computational Biology -- a machine learning approach

The Forward Algorithm

We can compute $\alpha_t^k$ for all k, t, using dynamic programming!

Initialization:

$\alpha_1^k = P(x_1, y_1^k = 1) = P(x_1 \mid y_1^k = 1) \, P(y_1^k = 1) = P(x_1 \mid y_1^k = 1) \, \pi_k$

Iteration:

$\alpha_t^k = P(x_t \mid y_t^k = 1) \sum_i \alpha_{t-1}^i a_{i,k}$

Termination:

$P(\mathbf{x}) = \sum_k \alpha_T^k$
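A minimal numpy implementation of this forward pass (a sketch: no rescaling, so it will underflow on long sequences; see page 47):

```python
import numpy as np

def forward(x, pi, A, B):
    """Forward algorithm: returns (alpha table, P(x))."""
    T, K = len(x), len(pi)
    alpha = np.zeros((T, K))
    alpha[0] = pi * B[:, x[0]]                      # initialization
    for t in range(1, T):
        alpha[t] = B[:, x[t]] * (alpha[t-1] @ A)    # iteration: sum_i alpha a_ik
    return alpha, alpha[-1].sum()                   # termination: P(x)
```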

Page 40: Advanced Algorithms  and Models for  Computational Biology -- a machine learning approach

The Backward Algorithm

We want to compute $P(y_t^k = 1 \mid \mathbf{x})$, the posterior probability distribution on the t-th position, given x.

We start by computing

$P(y_t^k = 1, \mathbf{x}) = P(x_1, \ldots, x_t, y_t^k = 1, x_{t+1}, \ldots, x_T)$
$= P(x_1, \ldots, x_t, y_t^k = 1) \, P(x_{t+1}, \ldots, x_T \mid x_1, \ldots, x_t, y_t^k = 1)$
$= P(x_1, \ldots, x_t, y_t^k = 1) \, P(x_{t+1}, \ldots, x_T \mid y_t^k = 1)$
$= \alpha_t^k \beta_t^k$

(forward probability $\alpha_t^k$; backward probability $\beta_t^k \triangleq P(x_{t+1}, \ldots, x_T \mid y_t^k = 1)$)

The recursion:

$\beta_t^k = \sum_i a_{k,i} \, p(x_{t+1} \mid y_{t+1}^i = 1) \, \beta_{t+1}^i$

Page 41: Advanced Algorithms  and Models for  Computational Biology -- a machine learning approach

The Backward Algorithm – derivation

Define the backward probability:

$\beta_t^k = P(x_{t+1}, \ldots, x_T \mid y_t^k = 1)$

$= \sum_{y_{t+1}} P(x_{t+1}, x_{t+2}, \ldots, x_T, y_{t+1} \mid y_t^k = 1)$

$= \sum_i P(y_{t+1}^i = 1 \mid y_t^k = 1) \, p(x_{t+1} \mid y_{t+1}^i = 1, y_t^k = 1) \, P(x_{t+2}, \ldots, x_T \mid x_{t+1}, y_{t+1}^i = 1, y_t^k = 1)$

$= \sum_i P(y_{t+1}^i = 1 \mid y_t^k = 1) \, p(x_{t+1} \mid y_{t+1}^i = 1) \, P(x_{t+2}, \ldots, x_T \mid y_{t+1}^i = 1)$

$= \sum_i a_{k,i} \, p(x_{t+1} \mid y_{t+1}^i = 1) \, \beta_{t+1}^i$

(Chain rule: $P(A, B, C \mid D) = P(A \mid D) \, P(B \mid A, D) \, P(C \mid A, B, D)$; the simplifications use the conditional-independence structure of the HMM.)

Page 42: Advanced Algorithms  and Models for  Computational Biology -- a machine learning approach

The Backward Algorithm

We can compute $\beta_t^k$ for all k, t, using dynamic programming!

Initialization:

$\beta_T^k = 1, \; \forall k$

Iteration:

$\beta_t^k = \sum_i a_{k,i} \, P(x_{t+1} \mid y_{t+1}^i = 1) \, \beta_{t+1}^i$

Termination:

$P(\mathbf{x}) = \sum_k \alpha_1^k \beta_1^k$
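The companion backward pass, as a sketch with the same caveats as the forward sketch:

```python
import numpy as np

def backward(x, pi, A, B):
    """Backward algorithm: returns (beta table, P(x))."""
    T, K = len(x), len(pi)
    beta = np.ones((T, K))                          # initialization: beta_T^k = 1
    for t in range(T - 2, -1, -1):
        beta[t] = A @ (B[:, x[t+1]] * beta[t+1])    # iteration: sum_i a_ki b_i beta
    return beta, (pi * B[:, x[0]] * beta[0]).sum()  # termination: sum_k alpha_1 beta_1
```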

Page 43: Advanced Algorithms  and Models for  Computational Biology -- a machine learning approach

Posterior decoding

We can now calculate

$P(y_t^k = 1 \mid \mathbf{x}) = \frac{P(y_t^k = 1, \mathbf{x})}{P(\mathbf{x})} = \frac{\alpha_t^k \beta_t^k}{P(\mathbf{x})}$

Then, we can ask: what is the most likely state at position t of sequence x?

$k_t^* = \arg\max_k P(y_t^k = 1 \mid \mathbf{x})$

Note that this is an MPA (maximum probability assignment) of a single hidden state; what if we want an MPA of the whole hidden state sequence?

Posterior Decoding: $\{ y_t^{k_t^*} = 1 : t = 1, \ldots, T \}$

This is different from the MPA of a whole sequence of hidden states. The distinction can be understood as bit error rate vs. word error rate.

Example: MPA of X? MPA of (X, Y)?

x  y  P(x,y)
0  0  0.35
0  1  0.05
1  0  0.3
1  1  0.3

(Here the MPA of X alone is x = 1, since P(x=1) = 0.6 > 0.4, but the MPA of the pair is (x, y) = (0, 0).)
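The toy table can be checked numerically; a two-line sketch:

```python
import numpy as np

P = np.array([[0.35, 0.05],                   # rows: x = 0, 1; cols: y = 0, 1
              [0.30, 0.30]])
print(P.sum(axis=1))                          # P(x) = [0.4, 0.6] -> MPA of X is x = 1
print(np.unravel_index(P.argmax(), P.shape))  # MPA of (X, Y) is (0, 0)
```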

Page 44: Advanced Algorithms  and Models for  Computational Biology -- a machine learning approach

Viterbi decoding

GIVEN x = x_1, ..., x_T, we want to find y = y_1, ..., y_T such that P(y|x) is maximized:

$\mathbf{y}^* = \arg\max_{\mathbf{y}} P(\mathbf{y} \mid \mathbf{x}) = \arg\max_{\mathbf{y}} P(\mathbf{y}, \mathbf{x})$

Let

$V_t^k = \max_{\{y_1, \ldots, y_{t-1}\}} P(x_1, \ldots, x_{t-1}, y_1, \ldots, y_{t-1}, x_t, y_t^k = 1)$

= probability of the most likely sequence of states ending at state $y_t = k$. The recursion:

$V_t^k = p(x_t \mid y_t^k = 1) \max_i a_{i,k} V_{t-1}^i$

Underflows are a significant problem:

$p(x_1, \ldots, x_t, y_1, \ldots, y_t) = \pi_{y_1} a_{y_1, y_2} \cdots a_{y_{t-1}, y_t} b_{y_1, x_1} b_{y_2, x_2} \cdots b_{y_t, x_t}$

These numbers become extremely small (underflow). Solution: take the logs of all values:

$\log V_t^k = \log p(x_t \mid y_t^k = 1) + \max_i \left( \log a_{i,k} + \log V_{t-1}^i \right)$

Page 45: Advanced Algorithms  and Models for  Computational Biology -- a machine learning approach

The Viterbi Algorithm – derivation

Define the Viterbi probability:

$V_t^k = \max_{\{y_1, \ldots, y_{t-1}\}} P(x_1, \ldots, x_{t-1}, y_1, \ldots, y_{t-1}, x_t, y_t^k = 1)$

$= \max_{\{y_1, \ldots, y_{t-1}\}} P(x_t, y_t^k = 1 \mid x_1, \ldots, x_{t-1}, y_1, \ldots, y_{t-1}) \, P(x_1, \ldots, x_{t-1}, y_1, \ldots, y_{t-1})$

$= \max_{\{y_1, \ldots, y_{t-1}\}} P(x_t \mid y_t^k = 1) \, P(y_t^k = 1 \mid y_{t-1}) \, P(x_1, \ldots, x_{t-1}, y_1, \ldots, y_{t-1})$

$= \max_i P(x_t \mid y_t^k = 1) \, a_{i,k} \max_{\{y_1, \ldots, y_{t-2}\}} P(x_1, \ldots, x_{t-1}, y_1, \ldots, y_{t-2}, y_{t-1}^i = 1)$

$= P(x_t \mid y_t^k = 1) \max_i a_{i,k} V_{t-1}^i$

Page 46: Advanced Algorithms  and Models for  Computational Biology -- a machine learning approach

The Viterbi Algorithm

Input: x = x_1, ..., x_T

Initialization:

$V_1^k = P(x_1 \mid y_1^k = 1) \, \pi_k$

Iteration:

$V_t^k = P(x_t \mid y_t^k = 1) \max_i a_{i,k} V_{t-1}^i$

$\text{Ptr}_t(k) = \arg\max_i a_{i,k} V_{t-1}^i$

Termination:

$P(\mathbf{x}, \mathbf{y}^*) = \max_k V_T^k$

TraceBack:

$y_T^* = \arg\max_k V_T^k$, and $y_{t-1}^* = \text{Ptr}_t(y_t^*)$
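A log-space sketch of the boxed algorithm, including the traceback:

```python
import numpy as np

def viterbi(x, pi, A, B):
    """Log-space Viterbi: returns (most likely state path y*, log P(x, y*))."""
    T, K = len(x), len(pi)
    logV = np.zeros((T, K))
    ptr = np.zeros((T, K), dtype=int)
    logV[0] = np.log(pi) + np.log(B[:, x[0]])               # initialization
    for t in range(1, T):
        scores = logV[t-1][:, None] + np.log(A)             # scores[i, k]
        ptr[t] = scores.argmax(axis=0)                      # Ptr_t(k)
        logV[t] = np.log(B[:, x[t]]) + scores.max(axis=0)   # iteration
    y = [int(logV[-1].argmax())]                            # termination
    for t in range(T - 1, 0, -1):
        y.append(int(ptr[t][y[-1]]))                        # traceback
    return y[::-1], logV[-1].max()
```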

Page 47: Advanced Algorithms  and Models for  Computational Biology -- a machine learning approach

Computational Complexity and implementation details

What are the running time and space required for Forward and Backward?

Time: O(K²N); Space: O(KN).

Useful implementation techniques to avoid underflows: for Viterbi, sums of logs; for Forward/Backward, rescaling at each position by multiplying by a constant.

(The relevant recursions: $\alpha_t^k = p(x_t \mid y_t^k = 1) \sum_i \alpha_{t-1}^i a_{i,k}$; $\beta_t^k = \sum_i a_{k,i} \, p(x_{t+1} \mid y_{t+1}^i = 1) \, \beta_{t+1}^i$; $V_t^k = p(x_t \mid y_t^k = 1) \max_i a_{i,k} V_{t-1}^i$.)

Page 48: Advanced Algorithms  and Models for  Computational Biology -- a machine learning approach

Learning HMM: two scenarios

Supervised learning: estimation when the "right answer" is known. Examples:

GIVEN: a genomic region x = x_1...x_1,000,000 where we have good (experimental) annotations of the CpG islands

GIVEN: the casino player allows us to observe him one evening, as he changes dice and produces 10,000 rolls

Unsupervised learning: estimation when the "right answer" is unknown. Examples:

GIVEN: the porcupine genome; we don't know how frequent the CpG islands are there, neither do we know their composition

GIVEN: 10,000 rolls of the casino player, but we don't see when he changes dice

QUESTION: Update the parameters θ of the model to maximize P(x|θ) --- maximum likelihood (ML) estimation

Page 49: Advanced Algorithms  and Models for  Computational Biology -- a machine learning approach


Supervised ML estimation

Given x = x1…xN for which the true state path y = y1…yN is known, Define:

Aij = # times state transition ij occurs in y

Bik = # times state i in y emits k in x

We can show that the maximum likelihood parameters are:

$a_{ij}^{ML} = \frac{\#(i \to j \text{ transitions in } \mathbf{y})}{\#(i \to \cdot \text{ transitions in } \mathbf{y})} = \frac{\sum_n \sum_{t=2}^{T} y_{n,t-1}^i y_{n,t}^j}{\sum_n \sum_{t=2}^{T} y_{n,t-1}^i} = \frac{A_{ij}}{\sum_{j'} A_{ij'}}$

$b_{ik}^{ML} = \frac{\#(i \to k \text{ emissions in } \mathbf{x}, \mathbf{y})}{\#(i \to \cdot \text{ emissions in } \mathbf{x}, \mathbf{y})} = \frac{\sum_n \sum_{t=1}^{T} y_{n,t}^i x_{n,t}^k}{\sum_n \sum_{t=1}^{T} y_{n,t}^i} = \frac{B_{ik}}{\sum_{k'} B_{ik'}}$

(where the training data are $\{x_{n,t}, y_{n,t} : t = 1{:}T, \; n = 1{:}N\}$)

What if y is continuous? We can treat $\{x_{n,t}, y_{n,t} : t = 1{:}T, n = 1{:}N\}$ as N × T observations of, e.g., a Gaussian, and apply the learning rules for the Gaussian ... (Homework!)

Page 50: Advanced Algorithms  and Models for  Computational Biology -- a machine learning approach

Supervised ML estimation, ctd. Intuition:

When we know the underlying states, the best estimate of θ is the average frequency of transitions & emissions that occur in the training data.

Drawback: given little data, there may be overfitting: P(x|θ) is maximized, but θ is unreasonable (0 probabilities are VERY BAD).

Example: given 10 casino rolls, we observe

x = 2, 1, 5, 6, 1, 2, 3, 6, 2, 3
y = F, F, F, F, F, F, F, F, F, F

Then: $a_{FF} = 1$; $a_{FL} = 0$; $b_{F1} = b_{F3} = b_{F6} = 0.2$; $b_{F2} = 0.3$; $b_{F4} = 0$; $b_{F5} = 0.1$

Page 51: Advanced Algorithms  and Models for  Computational Biology -- a machine learning approach

Pseudocounts

Solution for small training sets: Add pseudocounts

Aij = # times state transition ij occurs in y + Rij

Bik = # times state i in y emits k in x + Sik

$R_{ij}$, $S_{ik}$ are pseudocounts representing our prior belief.

Total pseudocounts: $R_i = \sum_j R_{ij}$, $S_i = \sum_k S_{ik}$: the "strength" of the prior belief, i.e., the total number of imaginary instances in the prior.

Larger total pseudocounts imply a stronger prior belief.

Small total pseudocounts: just to avoid 0 probabilities --- smoothing
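A sketch combining the supervised counting estimator of page 49 with the pseudocounts above (R and S are scalar per-entry pseudocounts here for simplicity; the slides allow general $R_{ij}$, $S_{ik}$):

```python
import numpy as np

def supervised_fit(x, y, K, L, R=1.0, S=1.0):
    """x: observations in 0..L-1, y: known states in 0..K-1; R, S: pseudocounts."""
    Acnt = np.full((K, K), R)            # A_ij = #(i -> j transitions) + R_ij
    Bcnt = np.full((K, L), S)            # B_ik = #(state i emits k) + S_ik
    for t in range(1, len(y)):
        Acnt[y[t-1], y[t]] += 1
    for t in range(len(y)):
        Bcnt[y[t], x[t]] += 1
    a = Acnt / Acnt.sum(axis=1, keepdims=True)   # smoothed a_ij^ML
    b = Bcnt / Bcnt.sum(axis=1, keepdims=True)   # smoothed b_ik^ML
    return a, b
```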

Page 52: Advanced Algorithms  and Models for  Computational Biology -- a machine learning approach

Unsupervised ML estimation

Given x = x1…xN for which the true state path y = y1…yN is unknown,

EXPECTATION MAXIMIZATION

0. Starting with our best guess of a model M, parameters θ:

1. Estimate $A_{ij}$, $B_{ik}$ in the training data:

$A_{ij} = \sum_{n,t} \langle y_{n,t-1}^i y_{n,t}^j \rangle$,  $B_{ik} = \sum_{n,t} \langle y_{n,t}^i \rangle x_{n,t}^k$.  How? (homework)

2. Update θ according to $A_{ij}$, $B_{ik}$: now a "supervised learning" problem.

3. Repeat 1 & 2, until convergence.

This is called the Baum-Welch Algorithm. We can get to a provably more (or equally) likely parameter set each iteration.

Page 53: Advanced Algorithms  and Models for  Computational Biology -- a machine learning approach

The Baum-Welch algorithm

The complete log likelihood:

$\ell_c(\theta; \mathbf{x}, \mathbf{y}) = \log p(\mathbf{x}, \mathbf{y}) = \log \prod_n \left( p(y_{n,1}) \prod_{t=2}^{T} p(y_{n,t} \mid y_{n,t-1}) \prod_{t=1}^{T} p(x_{n,t} \mid y_{n,t}) \right)$

The expected complete log likelihood:

$\langle \ell_c(\theta; \mathbf{x}, \mathbf{y}) \rangle = \sum_n \sum_i \langle y_{n,1}^i \rangle_{p(y_{n,1} \mid \mathbf{x}_n)} \log \pi_i + \sum_n \sum_{t=2}^{T} \sum_{i,j} \langle y_{n,t-1}^i y_{n,t}^j \rangle_{p(y_{n,t-1}, y_{n,t} \mid \mathbf{x}_n)} \log a_{i,j} + \sum_n \sum_{t=1}^{T} \sum_{i,k} x_{n,t}^k \langle y_{n,t}^i \rangle_{p(y_{n,t} \mid \mathbf{x}_n)} \log b_{i,k}$

EM:

The E step:

$\gamma_{n,t}^i = \langle y_{n,t}^i \rangle = p(y_{n,t}^i = 1 \mid \mathbf{x}_n)$

$\xi_{n,t}^{i,j} = \langle y_{n,t-1}^i y_{n,t}^j \rangle = p(y_{n,t-1}^i = 1, y_{n,t}^j = 1 \mid \mathbf{x}_n)$

The M step ("symbolically" identical to MLE):

$a_{ij}^{ML} = \frac{\sum_n \sum_{t=2}^{T} \xi_{n,t}^{i,j}}{\sum_n \sum_{t=1}^{T-1} \gamma_{n,t}^i}$,  $b_{ik}^{ML} = \frac{\sum_n \sum_{t=1}^{T} \gamma_{n,t}^i x_{n,t}^k}{\sum_n \sum_{t=1}^{T} \gamma_{n,t}^i}$,  $\pi_i^{ML} = \frac{\sum_n \gamma_{n,1}^i}{N}$
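A sketch of one Baum-Welch iteration for a single training sequence, with the forward/backward recursions of pages 39 and 42 inlined (no rescaling, so short sequences only):

```python
import numpy as np

def forward(x, pi, A, B):
    alpha = np.zeros((len(x), len(pi)))
    alpha[0] = pi * B[:, x[0]]
    for t in range(1, len(x)):
        alpha[t] = B[:, x[t]] * (alpha[t-1] @ A)
    return alpha, alpha[-1].sum()

def backward(x, pi, A, B):
    beta = np.ones((len(x), len(pi)))
    for t in range(len(x) - 2, -1, -1):
        beta[t] = A @ (B[:, x[t+1]] * beta[t+1])
    return beta

def baum_welch_step(x, pi, A, B):
    """One EM iteration: returns updated (pi, A, B)."""
    alpha, px = forward(x, pi, A, B)                 # E step
    beta = backward(x, pi, A, B)
    gamma = alpha * beta / px                        # gamma_t^i = <y_t^i>
    xi = (alpha[:-1, :, None] * A[None, :, :]
          * (B[:, x[1:]].T * beta[1:])[:, None, :]) / px   # <y_{t-1}^i y_t^j>
    # M step ("symbolically" identical to the supervised MLE):
    A_new = xi.sum(0) / gamma[:-1].sum(0)[:, None]
    B_new = np.zeros_like(B)
    for t, xt in enumerate(x):
        B_new[:, xt] += gamma[t]
    B_new /= gamma.sum(0)[:, None]
    return gamma[0], A_new, B_new                    # pi_new, A_new, B_new
```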

Page 54: Advanced Algorithms  and Models for  Computational Biology -- a machine learning approach

The Baum-Welch algorithm -- comments

Time Complexity:

Time complexity: O(K²N) per iteration, times the number of iterations.

Guaranteed to increase the log likelihood of the model.

Not guaranteed to find globally best parameters: converges to a local optimum, depending on initial conditions.

Too many parameters / too large a model: over-fitting.

Page 55: Advanced Algorithms  and Models for  Computational Biology -- a machine learning approach

Acknowledgments

Serafim Batzoglou: for some of the slides adapted or modified from his lecture slides at Stanford University

Terry Speed: for some of the slides modified from his lectures at UC Berkeley

