

Hidden Markov Model


Lecture by Professor Xiaole Shirley Liu

Wiki by Da Lin and Allen Cheng

1. Discrete Markov Chain

i. Markov Chain - Example 1

ii. Markov Chain - Example 2

iii. Markov Chain - Example 3

2. Hidden Markov Model

3. Problems

4. Problem 1

5. Problem 2

6. Viterbi Algorithm

7. Problem 3

8. Additional Applications of Hidden Markov Models

9. Additional Resources

The Markov chain was named in honor of Professor Andrei A. Markov, who first published his findings in 1906. Markov was born on June 14, 1856 in Ryazan, Russia, and studied mathematics at the University of St. Petersburg, where he later taught as a professor until his death in 1922. Although his work in mathematics ranges from differential equations to probability theory, he is best known for his research on stochastic processes and the Markov chain.

On this page, we first give a brief introduction to Markov processes and the discrete Markov chain. We then present the Hidden Markov model and solve it with three different methods: the forward procedure, the backward procedure, and the Viterbi algorithm. Finally, we examine and solve a Hidden Markov problem in a biological context.

Discrete Markov Chain


Markov Chain - Example 1

Suppose Diana has a test tomorrow, and she wants to know the probability of receiving a passing grade. In the 6 previous exams this semester, she has passed the first three and failed the most recent three. Her genius roommate has also calculated for her the transition probabilities a(i,j) for her exam states:

What is the probability that Diana will pass her exam tomorrow?

Solution:

We can see this as a Markov chain problem with states S(1) = pass and S(2) = fail. Each exam can be associated with a time frame, denoted q(n), where n is the exam number. We want to find the probability that q(7) = P, given the transition probabilities and the observed exam states PPPFFF. However, if we remember that a Markov process has no memory, the problem simplifies to finding the probability that q(7) = P given that q(6) = F.
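Writing a(F,P) for the fail-to-pass transition probability from the roommate's table, the calculation is simply:

\[
P(q_7 = P \mid q_1 \cdots q_6 = \mathrm{PPPFFF}) \;=\; P(q_7 = P \mid q_6 = F) \;=\; a(F, P)
\]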

Markov Chain - Example 2

Frank the Weatherman's computer has malfunctioned, so he needs to predict the 7-day weather forecast by hand. To simplify his task, he decides to report only 3 different states: rain (R), cloudy (C), or sunny (S). From college, he remembers that the transition probabilities for the different weather states are:

If it is sunny today, what is the probability that the observed weather for the next 7 days is S(today)SSRRSCS?

Solution:

This is again a Markov chain problem, with states S(1) = rain, S(2) = cloudy, and S(3) = sunny. Each day can be associated with a time frame, denoted q(n), where n is the number of days since today. We want to find the probability of observing SSSRRSCS given that q(0) = 3. This is simply a multiplication of transition probabilities: if today is sunny, the probability that q(1) = 3 is just a(3,3); if q(1) = 3, the probability that q(2) = 3 is again a(3,3); and so forth.
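Multiplying one transition probability per day:

\[
P(\mathrm{SSSRRSCS} \mid q_0 = 3) \;=\; a(3,3)\,a(3,3)\,a(3,1)\,a(1,1)\,a(1,3)\,a(3,2)\,a(2,3)
\]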


Markov Chain - Example 3

Every year at the company-wide meeting, a select employee is honored with the chance at a promotion. In order to get the promotion, they must guess correctly in the following game. The CEO has two coins, one fair and the other biased, and holds one in each hand. The CEO hands the employee a coin with the following initial probability:

After eight tosses, the CEO asks the employee to guess the order of the coins he or she flipped (fair or biased). Knowing the initial probabilities, and these transition probabilities:

This year's employee, Daniel, guesses FFBBFFFB. What is the probability that he receives the promotion?

Solution:

This is a Markov chain with 2 states, S(1) = fair and S(2) = biased. This question is very similar to Example 2, only instead of knowing for certain what the initial state is, we need to include an initial probability. From the initial probability, we can see that the probability of being handed a fair coin at the beginning is 0.6. Each subsequent probability is just the probability of transitioning from one state to the other. Intuitively, we are asking: what is the probability of being handed a fair coin initially? Once we have the fair coin, what is the probability of tossing the fair coin again? And after the second toss with the fair coin, what is the probability of tossing a biased coin? And so forth.
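With π_F = 0.6 the initial probability of the fair coin, the calculation multiplies the initial probability by one transition probability per subsequent toss:

\[
P(\mathrm{FFBBFFFB}) \;=\; \pi_F \, a(F,F)\, a(F,B)\, a(B,B)\, a(B,F)\, a(F,F)\, a(F,F)\, a(F,B)
\]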

Hidden Markov Model
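In a Hidden Markov Model, the underlying state sequence is hidden: we observe only the symbols each state emits according to its emission probabilities. The coin game of Example 3 becomes a Hidden Markov Model when the employee sees only the sequence of heads and tails, not which coin produced each flip. The sections below work with this two-state model, so as a concrete reference here is a minimal sketch of it in Python. Apart from the initial probability 0.6 taken from Example 3, the numeric values are illustrative assumptions, not the page's actual tables:

```python
# A minimal two-state coin-flipping HMM. All numbers except init["F"] = 0.6
# are assumed for illustration; the page's own tables are not reproduced.
STATES = ["F", "B"]          # F = fair coin, B = biased coin

init = {"F": 0.6, "B": 0.4}  # initial state distribution

# trans[i][j]: probability of moving from coin i to coin j between tosses
trans = {
    "F": {"F": 0.9, "B": 0.1},
    "B": {"F": 0.1, "B": 0.9},
}

# emit[i][o]: probability that coin i shows outcome o on a toss
emit = {
    "F": {"H": 0.5, "T": 0.5},   # the fair coin is 50/50
    "B": {"H": 0.8, "T": 0.2},   # the biased coin favors heads (assumed)
}
```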


Problems

Given an observation sequence, several problems become of interest. The final goal is to find the most likely state sequence that produced the observation sequence.

Problem 1

First, given the model parameters, what is the probability of observing the observation sequence? Additionally, what is the probability that the model is in a particular state at a given observation point?

The first problem is a simple collection of probabilities. Suppose that we know or formulate a state sequence. The probability that the observation sequence will be observed with this state sequence is calculated by multiplying the relevant emission probability at each point.
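In standard HMM notation (a reconstruction of the omitted formula), with \(b_{q_t}(O_t)\) the probability that state \(q_t\) emits observation \(O_t\):

\[
P(O \mid Q, \lambda) \;=\; \prod_{t=1}^{T} b_{q_t}(O_t)
\]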

We will then need to calculate the probability of observing the state sequence itself, using the transition probabilities.
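With \(\pi\) the initial state distribution and \(a_{ij}\) the transition probabilities:

\[
P(Q \mid \lambda) \;=\; \pi_{q_1} \prod_{t=2}^{T} a_{q_{t-1} q_t}
\]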


The probability that the observation sequence will be observed given the model parameters is thus the sum over all possible state paths: for each state sequence, its own probability multiplied by the probability of the observation sequence given that state sequence.
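That is:

\[
P(O \mid \lambda) \;=\; \sum_{Q} P(O \mid Q, \lambda)\, P(Q \mid \lambda)
\]

With N hidden states and T observations, the sum ranges over N^T state paths.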

This brute-force calculation is therefore exponential, and with long paths and many hidden states it is infeasible.

Dynamic programming can be employed instead, using the results of previous calculations at each point to compute the next probability. In essence, the algorithm uses the immediately preceding probabilities to calculate the current one.
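For the forward direction, with \(\alpha_t(i)\) denoting the probability of the first t observations with the process in state i at time t, the recursion (in standard notation) is:

\[
\alpha_1(i) = \pi_i\, b_i(O_1), \qquad \alpha_t(j) = \Bigl[\sum_{i=1}^{N} \alpha_{t-1}(i)\, a_{ij}\Bigr] b_j(O_t)
\]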

There are two such procedures: one proceeds from the beginning of the observation sequence forward, and one begins at the end of the observation sequence and moves backward.

The forward procedure essentially calculates the probability of a state at each point by considering transitions from the previous states and emissions from the given state. The algorithm begins by using the initial state distribution to calculate the probabilities of the states at the first observation point. At the next observation point, the probability that a fair coin is used is then calculated.


The procedure is as follows:

1. Multiply the previous probability that a fair coin was used by the transition probability from fair coin to fair coin.
2. Multiply the previous probability that a biased coin was used by the transition probability from biased coin to fair coin.
3. Sum the results of (1) and (2). This yields the probability of obtaining a fair coin from either previous state.
4. Multiply the result of (3) by the probability that the fair coin will yield a Tail on a flip. This gives the probability that the fair coin was used at the second point.

The same procedure can be used to calculate the probability that a biased coin was used:

1. Multiply the previous probability that a fair coin was used by the transition probability from fair coin to biased coin.
2. Multiply the previous probability that a biased coin was used by the transition probability from biased coin to biased coin.
3. Sum the results of (1) and (2). This yields the probability of obtaining a biased coin from either previous state.
4. Multiply the result of (3) by the probability that the biased coin will yield a Tail on a flip. This gives the probability that the biased coin was used at the second point.

This can be repeated at each observation point to yield state probabilities. The final calculation is the sum of the final probabilities of both states. This yields the probability that the observation sequence is observed given the model parameters.
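Written as code, the whole forward procedure is a short loop. Below is a minimal Python sketch that reuses the illustrative dictionaries from the Hidden Markov Model section above, so the number it prints reflects those assumed probabilities:

```python
def forward(obs, states, init, trans, emit):
    """Return P(obs | model) computed with the forward algorithm."""
    # Initialization: weight each state by its emission of the first symbol.
    alpha = {s: init[s] * emit[s][obs[0]] for s in states}
    # Induction: sum the transitions from every previous state, then emit.
    for o in obs[1:]:
        alpha = {
            j: sum(alpha[i] * trans[i][j] for i in states) * emit[j][o]
            for j in states
        }
    # Termination: sum the final probabilities of all states.
    return sum(alpha.values())

print(forward("HTTH", STATES, init, trans, emit))
```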


The backward procedure proceeds in a similar way, but it calculates the probability of observing the flips that come after a given point. Again, it calculates the probability that a state exists at any given point: in this case, whether the coin is fair or biased. The algorithm initiates with probabilities of 1 for both fair and biased.
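In standard notation, the backward variable \(\beta_t(i)\), the probability of the observations after point t given state i at point t, satisfies:

\[
\beta_T(i) = 1, \qquad \beta_t(i) = \sum_{j=1}^{N} a_{ij}\, b_j(O_{t+1})\, \beta_{t+1}(j)
\]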

We will define the current point as the last tail in the observation sequence; the previous point will thus be the head before it. Then, to calculate the probability that the state preceding the current one is a fair coin, we use this procedure:

1. Multiply the probability that a fair coin will flip a Tail by the probability that a fair coin will transition into a fair coin.
2. Multiply the result of (1) by the probability that the current state is a fair coin.
3. Multiply the probability that a biased coin will flip a Tail by the probability that a fair coin will transition into a biased coin.
4. Multiply the result of (3) by the probability that the current state is a biased coin.
5. Sum (2) and (4) to yield the probability that the previous state was a fair coin.

The probability that the previous state was a biased coin can be calculated similarly:

1. Multiply the probability that a fair coin will flip a Tail by the probability that a biased coin will transition into a fair coin.
2. Multiply the result of (1) by the probability that the current state is a fair coin.
3. Multiply the probability that a biased coin will flip a Tail by the probability that a biased coin will transition into a biased coin.
4. Multiply the result of (3) by the probability that the current state is a biased coin.
5. Sum (2) and (4) to yield the probability that the previous state was a biased coin.


Once again, these two calculations can be performed for the rest of the observation sequence.

Problem 2

The next problem involves choosing the most likely state sequence given the observation sequence. This can be done with the calculations from the forward and backward procedures: by running the two procedures separately and storing the probabilities at each point, a probabilistic prediction can be made at every point. This maximizes the expected number of correctly predicted states.
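Concretely, the stored forward and backward values combine into the posterior probability of each state at each point, and the prediction at each point is the state that maximizes it (standard notation):

\[
\gamma_t(i) = \frac{\alpha_t(i)\,\beta_t(i)}{P(O \mid \lambda)}, \qquad q_t^* = \arg\max_i \gamma_t(i)
\]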

Viterbi Algorithm

The Viterbi algorithm is a dynamic programming algorithm used to find the most likely sequence of hidden states, called the Viterbi path. Like the forward-backward algorithm, the Viterbi algorithm uses dynamic programming: each state's current probability is calculated from the results of previous calculations. The algorithm initiates by calculating the initial probabilities of each state. We continue with the coin-flipping example, so there are two states, fair or biased coin; there can, of course, be many more states.
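In standard notation, with δ the best path probability and ψ the back-pointer (psi) series referred to below, the recursion is:

\[
\delta_1(i) = \pi_i\, b_i(O_1); \qquad \delta_t(j) = \max_i \bigl[\delta_{t-1}(i)\, a_{ij}\bigr]\, b_j(O_t), \qquad \psi_t(j) = \arg\max_i \delta_{t-1}(i)\, a_{ij}
\]

Termination takes \(P^* = \max_i \delta_T(i)\) and \(q_T^* = \arg\max_i \delta_T(i)\); the path is then read off backwards via \(q_t^* = \psi_{t+1}(q_{t+1}^*)\).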


At the end, we can obtain the complete psi series for both states:

The termination proceeds as follows:

and we can ascertain that the fair state yields the higher probability at the end. The purpose of the Viterbi algorithm then surfaces when we trace the Viterbi path: starting at the fair coin in the last observation, we simply determine which state was most likely to lead to the terminal state. A path can thus be traced:

Thus, the state sequence FFFF will be most likely to produce the observation sequence HTTH.
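For concreteness, here is a small Viterbi implementation in Python, again reusing the illustrative model from the Hidden Markov Model section. With those assumed numbers it happens to decode HTTH as FFFF, matching the result above, though different parameter values could give a different path:

```python
def viterbi(obs, states, init, trans, emit):
    """Return the most likely state path for obs (the Viterbi path)."""
    # delta[s]: best path probability ending in state s; psi stores back-pointers.
    delta = {s: init[s] * emit[s][obs[0]] for s in states}
    psi = []
    for o in obs[1:]:
        step, back = {}, {}
        for j in states:
            # Choose the predecessor state that maximizes the path probability.
            best = max(states, key=lambda i: delta[i] * trans[i][j])
            step[j] = delta[best] * trans[best][j] * emit[j][o]
            back[j] = best
        delta, psi = step, psi + [back]
    # Termination: pick the best final state, then trace back-pointers in reverse.
    last = max(states, key=lambda s: delta[s])
    path = [last]
    for back in reversed(psi):
        path.append(back[path[-1]])
    return "".join(reversed(path))

print(viterbi("HTTH", STATES, init, trans, emit))  # -> "FFFF" with the assumed numbers
```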


More complicated systems can be analyzed, of course:

Assuming that state B produces the highest probability in the last observation, we can trace a Viterbi sequence of BDDDAB.

Problem 3

The last basic problem for a Hidden Markov Model involves estimating all the parameters (transition probabilities, emission probabilities, initial probabilities, etc.) so as to maximize the probability of an observation sequence given those parameters.

A standard method of estimating the parameters is the Baum-Welch algorithm, which is an expectation-maximization algorithm: begin by randomly initializing the parameters; compute the expected state and transition usage under those parameters and the observation sequence (with the forward-backward procedure, or with the Viterbi path in the faster Viterbi-training variant); and update the parameters to achieve a better score, repeating until convergence.
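For reference, the standard Baum-Welch re-estimation formulas (as in the Rabiner tutorial cited below), where γ_t(i) is the posterior state probability from Problem 2 and ξ_t(i,j) the posterior probability of a transition from state i to state j at time t:

\[
\hat{a}_{ij} = \frac{\sum_{t=1}^{T-1} \xi_t(i,j)}{\sum_{t=1}^{T-1} \gamma_t(i)}, \qquad \hat{b}_j(k) = \frac{\sum_{t:\,O_t = k} \gamma_t(j)}{\sum_{t=1}^{T} \gamma_t(j)}
\]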

Additional Applications of Hidden Markov Models

Stock Market - predicts Bear/Bull market state (hidden) based on observed stock fluctuations.

Speech Recognition - predicts sentences and words (hidden) based on observed/perceived sounds.

Digital Sound Processing - predicts the source signal (hidden) based on observed signal fluctuations.

Bioinformatics - a multitude of applications, including sequence motif finding, gene prediction, genome copy number change, protein structure prediction, and protein-DNA interaction prediction.

Additional Resources

Readings

Eddy, S.R. Hidden Markov Models. Current Opinion in Structural Biology 6:361-365. 1996. Link

Eddy, S.R. What is a hidden Markov model? Nature Biotechnology 22:1315-1316. 2004. Harvard Link

Krogh, A. An Introduction to Hidden Markov Models for Biological Sequences. In Computational Methods in Molecular Biology. Edited by S.L. Salzberg, D.B. Searls, and S. Kasif. Elsevier. 1998. Link

Rabiner, L.R. A Tutorial on Hidden Markov Models and Selected Applications in Speech Recognition. Proceedings of the IEEE, 77(2), p. 257-286, February 1989. Link

Wai-Ki Ching, Michael K. Ng. Markov Chains: Models, Algorithms and Applications. Springer-Verlag New York. 2005. Link


Wikipedia on HMM Link

HMMER Updated Link

Understanding HMM from a mathematical perspective Link

References

Stat115 Lecture (Xiaole Shirley Liu) Harvard Link

Cartoon images from YahooImages
