
Applied and Computational Mathematics 2017; 6(4-1): 16-38

http://www.sciencepublishinggroup.com/j/acm

doi: 10.11648/j.acm.s.2017060401.12

ISSN: 2328-5605 (Print); ISSN: 2328-5613 (Online)

Tutorial on Hidden Markov Model

Loc Nguyen

Sunflower Soft Company, Ho Chi Minh City, Vietnam

Email address: [email protected]

To cite this article: Loc Nguyen. Tutorial on Hidden Markov Model. Applied and Computational Mathematics. Special Issue: Some Novel Algorithms for Global

Optimization and Relevant Subjects. Vol. 6, No. 4-1, 2017, pp. 16-38. doi: 10.11648/j.acm.s.2017060401.12

Received: September 11, 2015; Accepted: September 13, 2015; Published: June 17, 2016

Abstract: The hidden Markov model (HMM) is a powerful mathematical tool for prediction and recognition. Many software products implement HMM and hide its complexity, which assists scientists in using HMM for applied research. However, comprehending HMM in order to take advantage of its strong points requires considerable effort. This report is a tutorial on HMM with full mathematical proofs and examples, which helps researchers understand it in the fastest way, from theory to practice. The report focuses on three common problems of HMM, namely the evaluation problem, the uncovering problem, and the learning problem, in which the learning problem, with the support of optimization theory, is the main subject.

Keywords: Hidden Markov Model, Optimization, Evaluation Problem, Uncovering Problem, Learning Problem

1. Introduction

There are many real-world phenomena (so-called states) that we would like to model in order to explain our observations. Often, given a sequence of observation symbols, there is a demand for discovering the real states. For example, there are some states of weather: sunny, cloudy, rainy [1, p. 1]. Suppose you are in a room and do not know the weather outside, but you are notified of observations such as wind speed, atmospheric pressure, humidity, and temperature from someone else. Based on these observations, it is possible for you to forecast the weather by using a hidden Markov model (HMM). Before discussing HMM, we should glance over the definition of the Markov model (MM). First, MM is a statistical model used to model a stochastic process. MM is defined as below [2]:

- Given a finite set of states S = {s1, s2,…, sn} whose cardinality is n, let ∏ be the initial state distribution where πi ∈ ∏ represents the probability that the stochastic process begins in state si. In other words, πi is the initial probability of state si, where

Σ_{si∈S} πi = 1

- The stochastic process being modeled takes exactly one state from S at each time point. This stochastic process is defined as a finite vector X = (x1, x2,…, xT) whose element xt is the state at time point t. The process X is called the state stochastic process, and xt ∈ S equals some state si ∈ S. Note that X is also called the state sequence. A time point can be in terms of second, minute, hour, day, month, year, etc. It is easy to infer that the initial probability πi = P(x1=si), where x1 is the first state of the stochastic process.

The state stochastic process X must fully satisfy the Markov property: given the previous state xt–1 of process X, the conditional probability of the current state xt depends only on the previous state xt–1 and is not relevant to any further past state (xt–2, xt–3,…, x1). In other words, P(xt | xt–1, xt–2, xt–3,…, x1) = P(xt | xt–1), with note that P(.) also denotes probability in this report. Such a process is called a first-order Markov process.

- At each time point, the process changes to the next state based on the transition probability distribution aij, which depends only on the previous state. So aij is the probability that the stochastic process changes from current state si to next state sj. It means that aij = P(xt=sj | xt–1=si) = P(xt+1=sj | xt=si). The total probability of transitioning from any given state to some next state is 1, so we have

∀si ∈ S, Σ_{sj∈S} aij = 1

All transition probabilities aij constitute the transition probability matrix A. Note that A is an n by n matrix because there are n distinct states. It is easy to infer that matrix A represents the state stochastic process X. It is possible to understand that the initial probability matrix ∏ is a degradation case of matrix A.
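As an illustration of these definitions, the following minimal Python sketch samples a state sequence from a Markov model ⟨S, A, ∏⟩. It is our own toy example, not part of the original text; the two states and the numeric values of A and ∏ are purely illustrative:

    import numpy as np

    states = ["rain", "no rain"]          # an illustrative state set S
    A = np.array([[0.7, 0.3],             # transition matrix A; each row sums to 1
                  [0.4, 0.6]])
    pi = np.array([0.5, 0.5])             # initial state distribution Pi

    def sample_states(T, seed=0):
        """Sample x_1..x_T; by the Markov property, x_t depends only on x_{t-1}."""
        rng = np.random.default_rng(seed)
        x = [rng.choice(len(states), p=pi)]                # x_1 ~ Pi
        for _ in range(1, T):
            x.append(rng.choice(len(states), p=A[x[-1]]))  # x_t ~ A[x_{t-1}, :]
        return [states[i] for i in x]

    print(sample_states(5))

Only A and ∏ govern how the sampled sequence evolves, which is exactly why the triple ⟨S, A, ∏⟩ fully specifies an MM.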

Briefly, MM is the triple ⟨S, A, ∏⟩. In a typical MM, states are observed directly by users, and the transition probabilities (A and ∏) are the only parameters. By contrast, the hidden Markov model (HMM) is similar to MM except that the underlying states become hidden from the observer; they are hidden parameters. HMM adds more output parameters, which are called observations. Each state (hidden parameter) has a conditional probability distribution over such observations. HMM is responsible for discovering the hidden parameters (states) from the output parameters (observations), given the stochastic process. The HMM has further properties as below [2]:

- Suppose there is a finite set of possible observations Φ = {φ1, φ2,…, φm} whose cardinality is m. There is a second stochastic process which produces observations correlating with the hidden states. This process is called the observable stochastic process, which is defined as a finite vector O = (o1, o2,…, oT) whose element ot is an observation at time point t. Note that ot ∈ Φ equals some φk. The process O is often known as the observation sequence.

- There is a probability distribution of producing a given observation in each state. Let bi(k) be the probability of observation φk when the state stochastic process is in state si. It means that bi(k) = bi(ot=φk) = P(ot=φk | xt=si). The sum of probabilities of all observations which can be observed in a certain state is 1, so we have

∀si ∈ S, Σ_{k=1}^{m} bi(k) = 1

All probabilities of observations bi(k) constitute the observation probability matrix B. It is convenient for us to use the notation bik instead of the notation bi(k). Note that B is an n by m matrix because there are n distinct states and m distinct observations. While matrix A represents the state stochastic process X, matrix B represents the observable stochastic process O.

Thus, HMM is the 5-tuple ∆ = ⟨S, Φ, A, B, ∏⟩. Note that the components S, Φ, A, B, and ∏ are often called the parameters of HMM, in which A, B, and ∏ are the essential parameters. Going back to the weather example, suppose you need to predict how the weather tomorrow is: sunny, cloudy, or rainy, since you know only observations about the humidity: dry, dryish, damp, soggy. The HMM is totally determined based on its parameters S, Φ, A, B, and ∏ according to the weather example. We have S = {s1=sunny, s2=cloudy, s3=rainy} and Φ = {φ1=dry, φ2=dryish, φ3=damp, φ4=soggy}. Transition probability matrix A is shown in table 1.

Table 1. Transition probability matrix A.

                                     Weather current day (time point t)
                                     sunny         cloudy        rainy
Weather previous day   sunny         a11=0.50      a12=0.25      a13=0.25
(time point t – 1)     cloudy        a21=0.30      a22=0.40      a23=0.30
                       rainy         a31=0.25      a32=0.25      a33=0.50

From table 1, we have a11+a12+a13=1, a21+a22+a23=1, a31+a32+a33=1.

The initial state distribution, specified as a uniform distribution, is shown in table 2.

Table 2. Uniform initial state distribution ∏.

sunny        cloudy       rainy
π1=0.33      π2=0.33      π3=0.33

From table 2, we have π1+π2+π3=1.

Observation probability matrix B is shown in table 3.

Table 3. Observation probability matrix B.

                       Humidity
                       dry          dryish       damp         soggy
Weather   sunny        b11=0.60     b12=0.20     b13=0.15     b14=0.05
          cloudy       b21=0.25     b22=0.25     b23=0.25     b24=0.25
          rainy        b31=0.05     b32=0.10     b33=0.35     b34=0.50

From table 3, we have b11+b12+b13+b14=1, b21+b22+b23+b24=1, b31+b32+b33+b34=1.

The whole weather HMM is depicted in fig. 1.

Figure 1. HMM of weather forecast (hidden states are shaded).
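Before moving to the three problems of HMM, it may help to write these parameters down concretely. The following Python/numpy sketch (our illustration; the variable names are not from the original text) encodes tables 1, 2, and 3 and checks that each row of A and B is a probability distribution:

    import numpy as np

    # Weather HMM Delta = <S, Phi, A, B, Pi> from tables 1, 2, and 3.
    S   = ["sunny", "cloudy", "rainy"]               # hidden states
    Phi = ["dry", "dryish", "damp", "soggy"]         # observation symbols
    A = np.array([[0.50, 0.25, 0.25],                # a_ij = P(x_t=s_j | x_{t-1}=s_i)
                  [0.30, 0.40, 0.30],
                  [0.25, 0.25, 0.50]])
    B = np.array([[0.60, 0.20, 0.15, 0.05],          # b_i(k) = P(o_t=phi_k | x_t=s_i)
                  [0.25, 0.25, 0.25, 0.25],
                  [0.05, 0.10, 0.35, 0.50]])
    Pi = np.array([0.33, 0.33, 0.33])                # table 2 rounds the uniform 1/3 to 0.33

    assert np.allclose(A.sum(axis=1), 1)             # rows of A sum to 1
    assert np.allclose(B.sum(axis=1), 1)             # rows of B sum to 1

The later code sketches in this tutorial reuse S, Phi, A, B, and Pi exactly as defined here.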

There are three problems of HMM [2] [3, pp. 262-266]:

1. Given HMM ∆ and an observation sequence O = {o1, o2,…, oT} where ot ∈ Φ, how to calculate the probability P(O|∆) of this observation sequence. Such probability P(O|∆) indicates how well the HMM ∆ explains the sequence O. This is the evaluation problem, or explanation problem. Note that it is possible to denote O = {o1 → o2 →…→ oT}, and the sequence O is the aforementioned observable stochastic process.
2. Given HMM ∆ and an observation sequence O = {o1, o2,…, oT} where ot ∈ Φ, how to find the sequence of states X = {x1, x2,…, xT} where xt ∈ S so that X is most likely to have produced the observation sequence O. This is the uncovering problem. Note that the sequence X is the aforementioned state stochastic process.
3. Given HMM ∆ and an observation sequence O = {o1, o2,…, oT} where ot ∈ Φ, how to adjust parameters of ∆ such as the initial state distribution ∏, transition probability matrix A, and observation probability matrix B so that the quality of HMM ∆ is enhanced. This is the learning problem.

These problems will be addressed in sections 2, 3, and 4, in turn.


2. HMM Evaluation Problem

The essence of the evaluation problem is to find the most effective way to compute the probability P(O|∆) for a given observation sequence O = {o1, o2,…, oT}. For example, consider the HMM ∆ whose parameters A, B, and ∏ are specified in tables 1, 2, and 3, which is designed for weather forecasting. Suppose we need to calculate the probability of the event that humidity is soggy, dry, and dryish in days 1, 2, and 3, respectively. This is an evaluation problem with the sequence of observations O = {o1=φ4=soggy, o2=φ1=dry, o3=φ2=dryish}. There is a complete set of 3³ = 27 mutually exclusive cases of weather states for three days:

{x1=s1=sunny, x2=s1=sunny, x3=s1=sunny}, {x1=s1=sunny,

x2=s1=sunny, x3=s2=cloudy}, {x1=s1=sunny, x2=s1=sunny,

x3=s3=rainy}, {x1=s1=sunny, x2=s2=cloudy, x3=s1=sunny},

{x1=s1=sunny, x2=s2=cloudy, x3=s2=cloudy}, {x1=s1=sunny,

x2=s2=cloudy, x3=s3=rainy}, {x1=s1=sunny, x2=s3=rainy,

x3=s1=sunny}, {x1=s1=sunny, x2=s3=rainy, x3=s2=cloudy},

{x1=s1=sunny, x2=s3=rainy, x3=s3=rainy}, {x1=s2=cloudy,

x2=s1=sunny, x3=s1=sunny}, {x1=s2=cloudy, x2=s1=sunny,

x3=s2=cloudy}, {x1=s2=cloudy, x2=s1=sunny, x3=s3=rainy},

{x1=s2=cloudy, x2=s2=cloudy, x3=s1=sunny}, {x1=s2=cloudy,

x2=s2=cloudy, x3=s2=cloudy}, {x1=s2=cloudy, x2=s2=cloudy,

x3=s3=rainy}, {x1=s2=cloudy, x2=s3=rainy, x3=s1=sunny},

{x1=s2=cloudy, x2=s3=rainy, x3=s2=cloudy}, {x1=s2=cloudy,

x2=s3=rainy, x3=s3=rainy}, {x1=s3=rainy, x2=s1=sunny,

x3=s1=sunny}, {x1=s3=rainy, x2=s1=sunny, x3=s2=cloudy},

{x1=s3=rainy, x2=s1=sunny, x3=s3=rainy}, {x1=s3=rainy,

x2=s2=cloudy, x3=s1=sunny}, {x1=s3=rainy, x2=s2=cloudy,

x3=s2=cloudy}, {x1=s3=rainy, x2=s2=cloudy, x3=s3=rainy},

{x1=s3=rainy, x2=s3=rainy, x3=s1=sunny}, {x1=s3=rainy,

x2=s3=rainy, x3=s2=cloudy}, {x1=s3=rainy, x2=s3=rainy,

x3=s3=rainy}.

According to the total probability rule [4, p. 101], the probability P(O|∆) is:

P(O|∆) = P(o1=φ4, o2=φ1, o3=φ2) = Σ over all 27 cases {x1, x2, x3} of P(o1=φ4, o2=φ1, o3=φ2 | x1, x2, x3) · P(x1, x2, x3)

For the first case we have:

P(o1=φ4, o2=φ1, o3=φ2 | x1=s1, x2=s1, x3=s1) · P(x1=s1, x2=s1, x3=s1)
= P(o1=φ4 | x1=s1, x2=s1, x3=s1) · P(o2=φ1 | x1=s1, x2=s1, x3=s1) · P(o3=φ2 | x1=s1, x2=s1, x3=s1) · P(x1=s1, x2=s1, x3=s1)
(because observations o1, o2, and o3 are mutually independent)
= P(o1=φ4 | x1=s1) · P(o2=φ1 | x2=s1) · P(o3=φ2 | x3=s1) · P(x1=s1, x2=s1, x3=s1)
(because an observation is only dependent on the day when it is observed)
= P(o1=φ4 | x1=s1) · P(o2=φ1 | x2=s1) · P(o3=φ2 | x3=s1) · P(x3=s1 | x1=s1, x2=s1) · P(x1=s1, x2=s1)
(due to the multiplication rule [4, p. 100])
= P(o1=φ4 | x1=s1) · P(o2=φ1 | x2=s1) · P(o3=φ2 | x3=s1) · P(x3=s1 | x2=s1) · P(x1=s1, x2=s1)
(due to the Markov property: the current state is only dependent on the right previous state)
= P(o1=φ4 | x1=s1) · P(o2=φ1 | x2=s1) · P(o3=φ2 | x3=s1) · P(x3=s1 | x2=s1) · P(x2=s1 | x1=s1) · P(x1=s1)
(due to the multiplication rule [4, p. 100])
= b14 · a11 · b11 · a11 · b12 · π1
(according to parameters A, B, and ∏ specified in tables 1, 2, and 3)

Similarly, the case {x1=si, x2=sj, x3=sk} contributes the term πi · bi4 · aij · bj1 · ajk · bk2, so summing all 27 mutually exclusive cases gives:

P(O|∆) = P(o1=φ4, o2=φ1, o3=φ2) = Σ_{i=1}^{3} Σ_{j=1}^{3} Σ_{k=1}^{3} πi · bi4 · aij · bj1 · ajk · bk2 = 0.012980859375

It is easy to explain that, given the weather HMM modeled by parameters A, B, and ∏ specified in tables 1, 2, and 3, the event that it is soggy, dry, and dryish in three successive days is rare, because the probability of such an event, P(O|∆), is low (≈1.3%). It is also easy to recognize that it is impossible in general to browse all combinational cases of a given observation sequence O = {o1, o2,…, oT}: we already had to survey 3³ = 27 mutually exclusive cases of weather states for a tiny number of observations {soggy, dry, dryish}, and given n states and T observations, it takes an extremely expensive cost to survey n^T cases. A brute-force check of the example above is sketched below.
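The brute-force method just described, which enumerates all n^T state sequences, can be transcribed into a few lines of Python reusing S, A, B, and Pi from the earlier sketch (our illustration, not the original author's code):

    from itertools import product

    # P(O|Delta) = sum over all n^T sequences X of P(O|X, Delta) * P(X|Delta).
    O = [3, 0, 1]                                    # indices into Phi: soggy, dry, dryish

    p = 0.0
    for X in product(range(len(S)), repeat=len(O)):  # all 3^3 = 27 state sequences
        p_X = Pi[X[0]]                               # P(x_1)
        for t in range(1, len(O)):
            p_X *= A[X[t - 1], X[t]]                 # P(x_t | x_{t-1})
        p_O_given_X = 1.0
        for t in range(len(O)):
            p_O_given_X *= B[X[t], O[t]]             # P(o_t | x_t)
        p += p_O_given_X * p_X

    print(p)                                         # 0.012980859375, as computed above

This loop runs 27 times here but n^T times in general, which motivates the forward-backward procedure described next.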

According to [3, pp. 262-263], there is a so-called forward-backward procedure to decrease the computational cost of determining the probability P(O|∆). Let αt(i) be the joint probability of the partial observation sequence {o1, o2,…, ot} and the state xt=si, where 1 ≤ t ≤ T, specified by (1):

αt(i) = P(o1, o2,…, ot, xt=si | ∆)   (1)

The joint probability αt(i) is also called the forward variable at time point t and state si. The product αt(i)aij, where aij is the transition probability from state si to state sj, accounts for the probability of the joint event that the partial observation sequence {o1, o2,…, ot} exists and the state si at time point t is changed to sj at time point t+1:

αt(i)aij = P(o1, o2,…, ot, xt=si | ∆) · P(xt+1=sj | xt=si)
= P(o1, o2,…, ot | xt=si) · P(xt=si) · P(xt+1=sj | xt=si)
(due to the multiplication rule [4, p. 100])
= P(o1, o2,…, ot, xt+1=sj | xt=si) · P(xt=si)
(because the partial observation sequence {o1, o2,…, ot} is independent of the next state xt+1 given the current state xt)
= P(o1, o2,…, ot, xt=si, xt+1=sj)
(due to the multiplication rule [4, p. 100])

Summing the product αt(i)aij over all n possible states of xt produces the probability of the joint event that the partial observation sequence {o1, o2,…, ot} exists and the next state is xt+1=sj, regardless of the state xt:

Σ_{i=1}^{n} αt(i)aij = Σ_{i=1}^{n} P(o1, o2,…, ot, xt=si, xt+1=sj) = P(o1, o2,…, ot, xt+1=sj)

The forward variable at time point t+1 and state sj is calculated on αt(i) as follows:

αt+1(j) = P(o1, o2,…, ot, ot+1, xt+1=sj | ∆)
= P(ot+1 | o1, o2,…, ot, xt+1=sj) · P(o1, o2,…, ot, xt+1=sj)
(due to the multiplication rule)
= P(ot+1 | xt+1=sj) · P(o1, o2,…, ot, xt+1=sj)
(because observations are mutually independent)
= bj(ot+1) Σ_{i=1}^{n} αt(i)aij

where bj(ot+1) is the probability of observation ot+1 when the state stochastic process is in state sj; please see the example of the observation probability matrix shown in table 3. In brief, please pay attention to the recurrence property of the forward variable specified by (2):

αt+1(j) = [Σ_{i=1}^{n} αt(i)aij] · bj(ot+1)   (2)

The aforementioned construction of the forward recurrence equation (2) essentially builds up a Markov chain, illustrated by fig. 2 [3, p. 262].

Figure 2. Construction of recurrence formula for forward variable.

According to the forward recurrence equation (2), given observation sequence O = {o1, o2,…, oT}, we have:

αT(i) = P(o1, o2,…, oT, xT=si | ∆)

The probability P(O|∆) is the sum of αT(i) over all n possible states of xT, specified by (3):

P(O|∆) = P(o1, o2,…, oT) = Σ_{i=1}^{n} P(o1, o2,…, oT, xT=si | ∆) = Σ_{i=1}^{n} αT(i)   (3)

The forward-backward procedure to calculate the proba-

bility P(O|∆), based on forward equations (2) and (3), includes

three steps as shown in table 4 [3, p. 262].

Table 4. Forward-backward procedure based on forward variable to calculate the probability P(O|∆).

1. Initialization step: Initializing α1(i) = bi(o1)πi for all 1 ≤ i ≤ n.
2. Recurrence step: Calculating all αt+1(j) = [Σ_{i=1}^{n} αt(i)aij] · bj(ot+1) for all 1 ≤ j ≤ n and 1 ≤ t ≤ T–1 according to (2).
3. Evaluation step: Calculating the probability P(O|∆) = Σ_{i=1}^{n} αT(i) according to (3).

It is required to execute n + 2n²(T–1) + n – 1 = 2n²(T–1) + 2n – 1 operations for the forward-backward procedure based on forward variable because:
- There are n multiplications at the initialization step.
- There are n multiplications, n–1 additions, and 1 multiplication over the expression [Σ_{i=1}^{n} αt(i)aij] · bj(ot+1). There are n values αt+1(j) for all 1 ≤ j ≤ n at time point t+1, so there are (n+n–1+1)n = 2n² operations over the values αt+1(j) at each time point. The recurrence step runs T–1 times, so there are 2n²(T–1) operations at the recurrence step.
- There are n–1 additions at the evaluation step.
Among the 2n²(T–1)+2n–1 operations, there are n+(n+1)n(T–1) = n+(n²+n)(T–1) multiplications and (n–1)n(T–1)+n–1 = (n²+n)(T–1)+n–1 additions.
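A direct transcription of table 4 into Python (our sketch, reusing A, B, and Pi from the earlier block, and vectorizing the sum over i with a matrix product) looks as follows:

    import numpy as np

    def forward(O, A, B, Pi):
        """Forward procedure (table 4); alpha has one row per time point t."""
        T, n = len(O), len(Pi)
        alpha = np.zeros((T, n))
        alpha[0] = B[:, O[0]] * Pi                      # initialization: alpha_1(i) = b_i(o_1) * pi_i
        for t in range(1, T):
            alpha[t] = (alpha[t - 1] @ A) * B[:, O[t]]  # recurrence (2)
        return alpha

    alpha = forward([3, 0, 1], A, B, Pi)                # O = soggy, dry, dryish
    print(alpha[-1].sum())                              # evaluation (3): P(O|Delta) = 0.012980859375

The row product alpha[t - 1] @ A computes all n sums Σ_{i=1}^{n} αt(i)aij at once, which is exactly the 2n² work per time point counted above.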

Going back to the example with the weather HMM whose parameters A, B, and ∏ are specified in tables 1, 2, and 3, we need to re-calculate the probability of observation sequence O = {o1=φ4=soggy, o2=φ1=dry, o3=φ2=dryish} by the forward-backward procedure shown in table 4 (based on forward variable). According to the initialization step of the forward-backward procedure based on forward variable, we have:

α1(1) = b1(o1=φ4)π1 = b14π1 = 0.0165


α1(2) = b2(o1=φ4)π2 = b24π2 = 0.0825
α1(3) = b3(o1=φ4)π3 = b34π3 = 0.165

According to the recurrence step of the forward-backward procedure based on forward variable, we have:

α2(1) = [Σ_{i=1}^{3} α1(i)ai1] · b1(o2=φ1) = [Σ_{i=1}^{3} α1(i)ai1] · b11 = 0.04455
α2(2) = [Σ_{i=1}^{3} α1(i)ai2] · b2(o2=φ1) = 0.01959375
α2(3) = [Σ_{i=1}^{3} α1(i)ai3] · b3(o2=φ1) = 0.00556875
α3(1) = [Σ_{i=1}^{3} α2(i)ai1] · b1(o3=φ2) = 0.0059090625
α3(2) = [Σ_{i=1}^{3} α2(i)ai2] · b2(o3=φ2) = 0.005091796875
α3(3) = [Σ_{i=1}^{3} α2(i)ai3] · b3(o3=φ2) = 0.00198

According to the evaluation step of the forward-backward procedure based on forward variable, the probability of observation sequence O = {o1=φ4=soggy, o2=φ1=dry, o3=φ2=dryish} is:

P(O|∆) = α3(1) + α3(2) + α3(3) = 0.012980859375

The result from the forward-backward procedure based on forward variable is the same as the one from the aforementioned brute-force method that browses all 3³ = 27 mutually exclusive cases of weather states.

It is interesting that the forward-backward procedure can also be implemented based on a so-called backward variable. Let βt(i) be the backward variable, which is the conditional probability of the partial observation sequence {ot+1, ot+2,…, oT} given state xt=si, where 1 ≤ t ≤ T, specified by (4):

βt(i) = P(ot+1, ot+2,…, oT | xt=si, ∆)   (4)

We have:

aij · bj(ot+1) · βt+1(j)
= P(xt+1=sj | xt=si) · P(ot+1 | xt+1=sj) · P(ot+2, ot+3,…, oT | xt+1=sj, ∆)
= P(xt+1=sj | xt=si) · P(ot+1, ot+2,…, oT | xt+1=sj, ∆)
(because observations ot+1, ot+2,…, oT are mutually independent)
= P(xt+1=sj | xt=si) · P(ot+1, ot+2,…, oT | xt=si, xt+1=sj, ∆)
(because the partial observation sequence ot+1, ot+2,…, oT is independent of state xt at time point t)
= P(ot+1, ot+2,…, oT, xt+1=sj | xt=si, ∆)
(due to the multiplication rule [4, p. 100])

Summing the product aij bj(ot+1) βt+1(j) over all n possible states of xt+1=sj, we have:

Σ_{j=1}^{n} aij bj(ot+1) βt+1(j) = Σ_{j=1}^{n} P(ot+1, ot+2,…, oT, xt+1=sj | xt=si, ∆) = P(ot+1, ot+2,…, oT | xt=si, ∆) = βt(i)
(due to the total probability rule [4, p. 101])

In brief, the recurrence property of the backward variable is specified by (5):

βt(i) = Σ_{j=1}^{n} aij bj(ot+1) βt+1(j)   (5)

where bj(ot+1) is the probability of observation ot+1 when the state stochastic process is in state sj; please see the example of the observation probability matrix shown in table 3. The construction of the backward recurrence equation (5) essentially builds up a Markov chain, illustrated by fig. 3 [3, p. 263].

Figure 3. Construction of recurrence equation for backward variable.

According to the backward recurrence equation (5), given observation sequence O = {o1, o2,…, oT}, we have:

β1(i) = P(o2, o3,…, oT | x1=si, ∆)

The product πi bi(o1) β1(i) is:

πi bi(o1) β1(i) = P(x1=si) · P(o1 | x1=si) · P(o2, o3,…, oT | x1=si, ∆)
= P(x1=si) · P(o1, o2, o3,…, oT | x1=si, ∆)
(because observations o1, o2,…, oT are mutually independent)
= P(o1, o2, o3,…, oT, x1=si | ∆)

It implies that the probability P(O|∆) is:

P(O|∆) = P(o1, o2,…, oT) = Σ_{i=1}^{n} P(o1, o2,…, oT, x1=si | ∆)
(due to the total probability rule [4, p. 101])
= Σ_{i=1}^{n} πi bi(o1) β1(i)

Shortly, the probability P(O|∆) is the sum of the product πi bi(o1) β1(i) over all n possible states of x1=si, specified by (6):

P(O|∆) = Σ_{i=1}^{n} πi bi(o1) β1(i)   (6)

The forward-backward procedure to calculate the proba-

bility P(O|∆), based on backward equations (5) and (6), in-

cludes three steps as shown in table 5 [3, p. 263].


Table 5. Forward-backward procedure based on backward variable to calculate the probability P(O|∆).

1. Initialization step: Initializing βT(i) = 1 for all 1 ≤ i ≤ n.
2. Recurrence step: Calculating all βt(i) for all 1 ≤ i ≤ n and t=T–1, t=T–2,…, t=1 according to (5).
3. Evaluation step: Calculating the probability P(O|∆) = Σ_{i=1}^{n} πi bi(o1) β1(i) according to (6).

It is required to execute (3n–1)n(T–1) + 2n + n – 1 = 3n²(T–1) – n(T–4) – 1 operations for the forward-backward procedure based on backward variable because:
- There are 2n multiplications and n–1 additions over the sum Σ_{j=1}^{n} aij bj(ot+1) βt+1(j). So there are (2n+n–1)n = (3n–1)n operations over the values βt(i) for all 1 ≤ i ≤ n at each time point. The recurrence step runs T–1 times, so there are (3n–1)n(T–1) operations at the recurrence step.
- There are 2n multiplications and n–1 additions over the sum Σ_{i=1}^{n} πi bi(o1) β1(i) at the evaluation step.
Among the 3n²(T–1) – n(T–4) – 1 operations, there are 2n²(T–1)+2n multiplications and (n–1)n(T–1)+n–1 = n²(T–1) – n(T–2) – 1 additions.
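The corresponding Python sketch of table 5 (again our illustration, reusing A, B, and Pi from the earlier block) is:

    import numpy as np

    def backward(O, A, B, Pi):
        """Backward procedure (table 5); beta has one row per time point t."""
        T, n = len(O), len(Pi)
        beta = np.ones((T, n))                            # initialization: beta_T(i) = 1
        for t in range(T - 2, -1, -1):                    # recurrence (5): t = T-1, ..., 1
            beta[t] = A @ (B[:, O[t + 1]] * beta[t + 1])  # sum_j a_ij b_j(o_{t+1}) beta_{t+1}(j)
        return beta

    O = [3, 0, 1]                                         # soggy, dry, dryish
    beta = backward(O, A, B, Pi)
    print((Pi * B[:, O[0]] * beta[0]).sum())              # evaluation (6): 0.012980859375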

Going back to the example with the weather HMM whose parameters A, B, and ∏ are specified in tables 1, 2, and 3, we need to re-calculate the probability of observation sequence O = {o1=φ4=soggy, o2=φ1=dry, o3=φ2=dryish} by the forward-backward procedure shown in table 5 (based on backward variable). According to the initialization step of the forward-backward procedure based on backward variable, we have:

β3(1) = β3(2) = β3(3) = 1

According to the recurrence step of the forward-backward procedure based on backward variable, we have:

β2(1) = Σ_{j=1}^{3} a1j bj(o3=φ2) β3(j) = Σ_{j=1}^{3} a1j bj2 β3(j) = 0.1875
β2(2) = Σ_{j=1}^{3} a2j bj(o3=φ2) β3(j) = 0.19
β2(3) = Σ_{j=1}^{3} a3j bj(o3=φ2) β3(j) = 0.1625
β1(1) = Σ_{j=1}^{3} a1j bj(o2=φ1) β2(j) = 0.07015625
β1(2) = Σ_{j=1}^{3} a2j bj(o2=φ1) β2(j) = 0.0551875
β1(3) = Σ_{j=1}^{3} a3j bj(o2=φ1) β2(j) = 0.0440625

According to the evaluation step of the forward-backward procedure based on backward variable, the probability of observation sequence O = {o1=φ4=soggy, o2=φ1=dry, o3=φ2=dryish} is:

P(O|∆) = Σ_{i=1}^{3} πi bi(o1=φ4) β1(i) = Σ_{i=1}^{3} πi bi4 β1(i) = 0.012980859375

The result from the forward-backward procedure based on backward variable is the same as the one from the aforementioned brute-force method that browses all 3³ = 27 mutually exclusive cases of weather states, and the same as the one from the forward-backward procedure based on forward variable.

The evaluation problem has now been described thoroughly in this section. The uncovering problem is addressed in the next section.

3. HMM Uncovering Problem

Recall that given HMM ∆ and an observation sequence O = {o1, o2,…, oT} where ot ∈ Φ, we need to find a state sequence X = {x1, x2,…, xT} where xt ∈ S so that X is most likely to have produced the observation sequence O. This is the uncovering problem: which sequence of state transitions is most likely to have led to the given observation sequence? In other words, it is required to establish an optimal criterion so that the state sequence X maximizes such criterion. A simple criterion is the conditional probability of sequence X with respect to sequence O and model ∆, denoted P(X|O,∆). We can apply the brute-force strategy: "go through all possible such X and pick the one maximizing the criterion P(X|O,∆)":

X̂ = argmax_X P(X|O, ∆)

This strategy is impossible if the number of states and observations is huge. Another popular way is to establish a so-called individually optimal criterion [3, p. 263], which is described next.

Let γt(i) be the joint probability that the stochastic process is in state si at time point t with observation sequence O = {o1, o2,…, oT}; equation (7) specifies this probability based on the forward variable αt and backward variable βt:

γt(i) = P(o1, o2,…, oT, xt=si | ∆) = αt(i)βt(i)   (7)

The variable γt(i) is also called the individually optimal criterion, with note that the forward variable αt and backward variable βt are calculated according to (2) and (5), respectively. Following is the proof of (7):

γt(i) = P(o1, o2,…, oT, xt=si | ∆) = P(xt=si, o1, o2,…, oT | ∆)
(due to Bayes' rule [4, p. 99])
= P(o1, o2,…, ot, xt=si, ot+1, ot+2,…, oT | ∆)
= P(o1, o2,…, ot, xt=si | ∆) · P(ot+1, ot+2,…, oT | o1, o2,…, ot, xt=si, ∆)
(due to the multiplication rule [4, p. 100])
= P(o1, o2,…, ot, xt=si | ∆) · P(ot+1, ot+2,…, oT | xt=si, ∆)
(because observations o1, o2,…, oT are observed independently)
= αt(i)βt(i)
(according to (1) and (4) for determining forward variable and backward variable)

The state sequence X = {x1, x2,…, xT} is determined by selecting each state xt ∈ S so that it maximizes γt(i):

xt = argmax_{si} P(xt=si | o1, o2,…, oT, ∆)
= argmax_{si} [αt(i)βt(i) / P(o1, o2,…, oT | ∆)]
= argmax_{si} [P(o1, o2,…, oT, xt=si | ∆) / P(o1, o2,…, oT | ∆)]
(due to Bayes' rule [4, p. 99])
= argmax_{si} [γt(i) / P(o1, o2,…, oT | ∆)]
(due to (7))

Because the probability P(o1, o2,…, oT | ∆) is not relevant to the state sequence X, it is possible to remove it from the optimization criterion. Thus, equation (8) specifies how to find the optimal state xt of X at time point t:

xt = argmax_{si} γt(i) = argmax_{si} αt(i)βt(i)   (8)

Note that index i is identified with state si ∈ S according to (8). The optimal state xt of X at time point t is the one that maximizes the product αt(i)βt(i) over all values si. The procedure to find the state sequence X = {x1, x2,…, xT} based on the individually optimal criterion is called the individually optimal procedure, which includes three steps, shown in table 6.

Table 6. Individually optimal procedure to solve uncovering problem.

1. Initialization step:
- Initializing α1(i) = bi(o1)πi for all 1 ≤ i ≤ n.
- Initializing βT(i) = 1 for all 1 ≤ i ≤ n.
2. Recurrence step:
- Calculating all αt+1(i) for all 1 ≤ i ≤ n and 1 ≤ t ≤ T–1 according to (2).
- Calculating all βt(i) for all 1 ≤ i ≤ n and t=T–1, t=T–2,…, t=1 according to (5).
- Calculating all γt(i) = αt(i)βt(i) for all 1 ≤ i ≤ n and 1 ≤ t ≤ T according to (7).
- Determining the optimal state xt of X at time point t as the one that maximizes γt(i) over all values si: xt = argmax_{si} γt(i).
3. Final step: The state sequence X = {x1, x2,…, xT} is totally determined when its partial states xt, where 1 ≤ t ≤ T, are found in the recurrence step.

It is required to execute n + (5n²–n)(T–1) + 2nT operations for the individually optimal procedure because:
- There are n multiplications for calculating the values α1(i).
- The recurrence step runs T–1 times. There are 2n²(T–1) operations for determining the values αt+1(i) over all 1 ≤ i ≤ n and 1 ≤ t ≤ T–1. There are (3n–1)n(T–1) operations for determining the values βt(i) over all 1 ≤ i ≤ n and t=T–1, t=T–2,…, t=1. There are nT multiplications for determining γt(i) = αt(i)βt(i) over all 1 ≤ i ≤ n and 1 ≤ t ≤ T. There are nT comparisons for determining the optimal states xt = argmax_{si} γt(i) over all 1 ≤ i ≤ n and 1 ≤ t ≤ T. In total, there are 2n²(T–1) + (3n–1)n(T–1) + nT + nT = (5n²–n)(T–1) + 2nT operations at the recurrence step.
Among the n + (5n²–n)(T–1) + 2nT operations, there are n + (n+1)n(T–1) + 2n²(T–1) + nT = (3n²+n)(T–1) + nT + n multiplications, (n–1)n(T–1) + (n–1)n(T–1) = 2(n²–n)(T–1) additions, and nT comparisons.
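Given the forward and backward sketches from the previous section, the individually optimal procedure of table 6 reduces to a few lines of Python (our illustration):

    # gamma_t(i) = alpha_t(i) * beta_t(i) per (7); x_t = argmax_i gamma_t(i) per (8).
    O = [3, 0, 1]                                         # soggy, dry, dryish
    gamma = forward(O, A, B, Pi) * backward(O, A, B, Pi)
    X = [S[i] for i in gamma.argmax(axis=1)]
    print(X)                                              # ['rainy', 'sunny', 'sunny'], matching the worked example below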

For example, given the HMM ∆ whose parameters A, B, and ∏ are specified in tables 1, 2, and 3, which is designed for weather forecast, suppose humidity is soggy, dry, and dryish in days 1, 2, and 3, respectively. We apply the individually optimal procedure to solving the uncovering problem of finding the optimal state sequence X = {x1, x2, x3} with regard to observation sequence O = {o1=φ4=soggy, o2=φ1=dry, o3=φ2=dryish}. According to (2) and (5), the forward variable and backward variable are calculated as follows:

α1(1) = b1(o1=φ4)π1 = b14π1 = 0.0165
α1(2) = b2(o1=φ4)π2 = b24π2 = 0.0825
α1(3) = b3(o1=φ4)π3 = b34π3 = 0.165

α2(1) = [Σ_{i=1}^{3} α1(i)ai1] · b1(o2=φ1) = 0.04455
α2(2) = [Σ_{i=1}^{3} α1(i)ai2] · b2(o2=φ1) = 0.01959375
α2(3) = [Σ_{i=1}^{3} α1(i)ai3] · b3(o2=φ1) = 0.00556875
α3(1) = [Σ_{i=1}^{3} α2(i)ai1] · b1(o3=φ2) = 0.0059090625
α3(2) = [Σ_{i=1}^{3} α2(i)ai2] · b2(o3=φ2) = 0.005091796875
α3(3) = [Σ_{i=1}^{3} α2(i)ai3] · b3(o3=φ2) = 0.00198
β3(1) = β3(2) = β3(3) = 1
β2(1) = Σ_{j=1}^{3} a1j bj(o3=φ2) β3(j) = 0.1875
β2(2) = Σ_{j=1}^{3} a2j bj(o3=φ2) β3(j) = 0.19
β2(3) = Σ_{j=1}^{3} a3j bj(o3=φ2) β3(j) = 0.1625
β1(1) = Σ_{j=1}^{3} a1j bj(o2=φ1) β2(j) = 0.07015625
β1(2) = Σ_{j=1}^{3} a2j bj(o2=φ1) β2(j) = 0.0551875
β1(3) = Σ_{j=1}^{3} a3j bj(o2=φ1) β2(j) = 0.0440625

According to the recurrence step of the individually optimal procedure, the individually optimal criterion γt(i) and the optimal states xt are calculated as follows:

γ1(1) = α1(1)β1(1) = 0.001157578125
γ1(2) = α1(2)β1(2) = 0.00455296875
γ1(3) = α1(3)β1(3) = 0.0072703125


x1 = argmax_{si} {γ1(1), γ1(2), γ1(3)} = s3 = rainy
γ2(1) = α2(1)β2(1) = 0.008353125
γ2(2) = α2(2)β2(2) = 0.0037228125
γ2(3) = α2(3)β2(3) = 0.000904921875
x2 = argmax_{si} {γ2(1), γ2(2), γ2(3)} = s1 = sunny
γ3(1) = α3(1)β3(1) = 0.0059090625
γ3(2) = α3(2)β3(2) = 0.005091796875
γ3(3) = α3(3)β3(3) = 0.00198
x3 = argmax_{si} {γ3(1), γ3(2), γ3(3)} = s1 = sunny

As a result, the optimal state sequence is X = {x1=rainy,

x2=sunny, x3=sunny}.

The individually optimal criterion γt(i) does not reflect the whole probability of the state sequence X given observation sequence O, because it focuses only on finding each partially optimal state xt at each time point t. Thus, the individually optimal procedure is a heuristic method. The Viterbi algorithm [3, p. 264] is an alternative method that takes interest in the whole state sequence X by using the joint probability P(X,O|∆) of state sequence and observation sequence as the optimal criterion for determining the state sequence X. Let δt(i) be the maximum joint probability of the observation sequence O and the state xt=si over the t–1 previous states. The quantity δt(i) is called the joint optimal criterion at time point t, which is specified by (9):

δt(i) = max_{x1, x2,…, xt–1} P(o1, o2,…, ot, x1, x2,…, xt–1, xt=si | ∆)   (9)

The recurrence property of the joint optimal criterion is specified by (10):

δt+1(j) = [max_i δt(i)aij] · bj(ot+1)   (10)

The semantic content of the joint optimal criterion δt is similar to that of the forward variable αt. Following is the proof of (10):

δt+1(j) = max_{x1, x2,…, xt} P(o1, o2,…, ot, ot+1, x1, x2,…, xt, xt+1=sj | ∆)
= max_{x1, x2,…, xt} [P(ot+1 | o1, o2,…, ot, x1, x2,…, xt, xt+1=sj) · P(o1, o2,…, ot, x1, x2,…, xt, xt+1=sj)]
(due to the multiplication rule [4, p. 100])
= max_{x1, x2,…, xt} [P(ot+1 | xt+1=sj) · P(o1, o2,…, ot, x1, x2,…, xt, xt+1=sj)]
(because observations are mutually independent)
= [max_{x1, x2,…, xt} P(o1, o2,…, ot, x1, x2,…, xt, xt+1=sj)] · bj(ot+1)
(the probability bj(ot+1) is moved out of the maximum operation because it is independent of the states x1, x2,…, xt)
= [max_{x1, x2,…, xt} P(o1, o2,…, ot, x1, x2,…, xt–1, xt+1=sj | xt) · P(xt)] · bj(ot+1)
(due to the multiplication rule)
= [max_{x1, x2,…, xt} P(o1, o2,…, ot, x1, x2,…, xt–1 | xt+1=sj, xt) · P(xt+1=sj | xt) · P(xt)] · bj(ot+1)
(due to the multiplication rule)
= [max_{x1, x2,…, xt} P(o1, o2,…, ot, x1, x2,…, xt–1 | xt) · P(xt+1=sj | xt) · P(xt)] · bj(ot+1)
(because, given xt, the partial sequence o1, o2,…, ot, x1, x2,…, xt–1 is independent of xt+1)
= [max_{x1, x2,…, xt} P(o1, o2,…, ot, x1, x2,…, xt–1, xt) · P(xt+1=sj | xt)] · bj(ot+1)
(due to the multiplication rule)
= [max_{xt} (max_{x1, x2,…, xt–1} P(o1, o2,…, ot, x1, x2,…, xt–1, xt)) · P(xt+1=sj | xt)] · bj(ot+1)
= [max_i (max_{x1, x2,…, xt–1} P(o1, o2,…, ot, x1, x2,…, xt–1, xt=si)) · P(xt+1=sj | xt=si)] · bj(ot+1)
= [max_i δt(i)aij] · bj(ot+1)

Given criterion δt+1(j), the state xt+1=sj that maximizes δt+1(j) is stored in the backtracking state qt+1(j), which is specified by (11):

qt+1(j) = argmax_i [δt(i)aij]   (11)


Note that index i is identified with state si ∈ S according to (11). The Viterbi algorithm based on the joint optimal criterion δt(i) includes three steps described in table 7.

Table 7. Viterbi algorithm to solve uncovering problem.

1. Initialization step:
- Initializing δ1(i) = bi(o1)πi for all 1 ≤ i ≤ n.
- Initializing q1(i) = 0 for all 1 ≤ i ≤ n.
2. Recurrence step:
- Calculating all δt+1(j) = [max_i δt(i)aij] · bj(ot+1) for all 1 ≤ i, j ≤ n and 1 ≤ t ≤ T–1 according to (10).
- Keeping track of the optimal states qt+1(j) = argmax_i [δt(i)aij] for all 1 ≤ j ≤ n and 1 ≤ t ≤ T–1 according to (11).
3. State sequence backtracking step: The resulting state sequence X = {x1, x2,…, xT} is determined as follows:
- The last state xT = argmax_j δT(j).
- Previous states are determined by backtracking: xt = qt+1(xt+1) for t=T–1, t=T–2,…, t=1.

The total number of operations inside the Viterbi algorithm is 2n + (2n²+n)(T–1) because:
- There are n multiplications for initializing the n values δ1(i), since each δ1(i) requires 1 multiplication.
- There are (2n²+n)(T–1) operations over the recurrence step, because there are n(T–1) values δt+1(j) and each δt+1(j) requires n multiplications and n comparisons for maximizing max_i δt(i)aij, plus 1 multiplication.
- There are n comparisons for constructing the state sequence X via xT = argmax_j δT(j).
Among the 2n + (2n²+n)(T–1) operations, there are n + (n²+n)(T–1) multiplications and n²(T–1) + n comparisons.

The number of operations of the Viterbi algorithm is smaller than that of the individually optimal procedure, which requires (5n²–n)(T–1) + 2nT + n operations. Therefore, the Viterbi algorithm is more effective than the individually optimal procedure. Besides, the individually optimal procedure does not reflect the whole probability of the state sequence X given observation sequence O.
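A Python sketch of table 7 (our illustration, reusing S, A, B, and Pi from the earlier block) is:

    import numpy as np

    def viterbi(O, A, B, Pi):
        """Viterbi algorithm (table 7): state sequence maximizing P(X, O|Delta)."""
        T, n = len(O), len(Pi)
        delta = np.zeros((T, n))
        q = np.zeros((T, n), dtype=int)                 # backtracking states q_t(j)
        delta[0] = B[:, O[0]] * Pi                      # delta_1(i) = b_i(o_1) * pi_i
        for t in range(1, T):
            scores = delta[t - 1][:, None] * A          # scores[i, j] = delta_t(i) * a_ij
            q[t] = scores.argmax(axis=0)                # backtracking state (11)
            delta[t] = scores.max(axis=0) * B[:, O[t]]  # recurrence (10)
        x = [int(delta[-1].argmax())]                   # last state: x_T = argmax_j delta_T(j)
        for t in range(T - 1, 0, -1):
            x.insert(0, int(q[t][x[0]]))                # backtracking: x_t = q_{t+1}(x_{t+1})
        return x

    print([S[i] for i in viterbi([3, 0, 1], A, B, Pi)]) # ['rainy', 'sunny', 'sunny']

The matrix of scores delta[t - 1][:, None] * A realizes all n² products δt(i)aij per time point, matching the operation count above.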

Going back to the weather HMM ∆ whose parameters A, B, and ∏ are specified in tables 1, 2, and 3, suppose humidity is soggy, dry, and dryish in days 1, 2, and 3, respectively. We apply the Viterbi algorithm to solving the uncovering problem of finding the optimal state sequence X = {x1, x2, x3} with regard to observation sequence O = {o1=φ4=soggy, o2=φ1=dry, o3=φ2=dryish}. According to the initialization step of the Viterbi algorithm, we have:

δ1(1) = b1(o1=φ4)π1 = b14π1 = 0.0165
δ1(2) = b2(o1=φ4)π2 = b24π2 = 0.0825
δ1(3) = b3(o1=φ4)π3 = b34π3 = 0.165
q1(1) = q1(2) = q1(3) = 0

According to the recurrence step of the Viterbi algorithm, we have:

δ1(1)a11 = 0.00825, δ1(2)a21 = 0.02475, δ1(3)a31 = 0.04125
δ2(1) = [max_i δ1(i)ai1] · b1(o2=φ1) = [max_i δ1(i)ai1] · b11 = 0.04125 * 0.6 = 0.02475
q2(1) = argmax_i {δ1(1)a11, δ1(2)a21, δ1(3)a31} = s3 = rainy
δ1(1)a12 = 0.004125, δ1(2)a22 = 0.033, δ1(3)a32 = 0.04125
δ2(2) = [max_i δ1(i)ai2] · b2(o2=φ1) = [max_i δ1(i)ai2] · b21 = 0.04125 * 0.25 = 0.0103125
q2(2) = argmax_i {δ1(1)a12, δ1(2)a22, δ1(3)a32} = s3 = rainy
δ1(1)a13 = 0.004125, δ1(2)a23 = 0.02475, δ1(3)a33 = 0.0825
δ2(3) = [max_i δ1(i)ai3] · b3(o2=φ1) = [max_i δ1(i)ai3] · b31 = 0.0825 * 0.05 = 0.004125
q2(3) = argmax_i {δ1(1)a13, δ1(2)a23, δ1(3)a33} = s3 = rainy
δ2(1)a11 = 0.012375, δ2(2)a21 = 0.00309375, δ2(3)a31 = 0.00103125
δ3(1) = [max_i δ2(i)ai1] · b1(o3=φ2) = [max_i δ2(i)ai1] · b12 = 0.012375 * 0.2 = 0.002475
q3(1) = argmax_i {δ2(1)a11, δ2(2)a21, δ2(3)a31} = s1 = sunny
δ2(1)a12 = 0.0061875, δ2(2)a22 = 0.004125, δ2(3)a32 = 0.00103125
δ3(2) = [max_i δ2(i)ai2] · b2(o3=φ2) = [max_i δ2(i)ai2] · b22 = 0.0061875 * 0.25 = 0.001546875
q3(2) = argmax_i {δ2(1)a12, δ2(2)a22, δ2(3)a32} = s1 = sunny
δ2(1)a13 = 0.0061875, δ2(2)a23 = 0.00309375, δ2(3)a33 = 0.0020625


δ3(3) = [max_i δ2(i)ai3] · b3(o3=φ2) = [max_i δ2(i)ai3] · b32 = 0.0061875 * 0.1 = 0.00061875
q3(3) = argmax_i {δ2(1)a13, δ2(2)a23, δ2(3)a33} = s1 = sunny

According to the state sequence backtracking step of the Viterbi algorithm, we have:

x3 = argmax_j {δ3(1), δ3(2), δ3(3)} = s1 = sunny
x2 = q3(x3=s1) = q3(1) = s1 = sunny
x1 = q2(x2=s1) = q2(1) = s3 = rainy

As a result, the optimal state sequence is X = {x1=rainy, x2=sunny, x3=sunny}. The result from the Viterbi algorithm is the same as the one from the aforementioned individually optimal procedure described in table 6.

The uncovering problem has now been described thoroughly in this section. The next section covers the learning problem of HMM, which is the main subject of this tutorial.

4. HMM Learning Problem

The learning problem is to adjust parameters such as the initial state distribution ∏, transition probability matrix A, and observation probability matrix B so that a given HMM ∆ becomes more appropriate to an observation sequence O = {o1, o2,…, oT}, with note that ∆ is represented by these parameters. In other words, the learning problem is to adjust parameters by maximizing the probability of observation sequence O:

(Â, B̂, ∏̂) = argmax_{A,B,∏} P(O|∆)

The Expectation Maximization (EM) algorithm is applied successfully to solving the HMM learning problem; this application is the well-known Baum-Welch algorithm [3]. The successive sub-section 4.1 describes the EM algorithm in detail before going into the Baum-Welch algorithm.

4.1. EM Algorithm

Expectation Maximization (EM) is an effective parameter estimator for the case in which incomplete data is composed of two parts: an observed part and a missing (or hidden) part. EM is an iterative algorithm that improves parameters over iterations until reaching optimal parameters. Each iteration includes two steps: the E(xpectation) step and the M(aximization) step. In the E-step the missing data is estimated based on the observed data and the current estimate of parameters; thus, the lower-bound of the likelihood function is computed by the expectation of complete data. In the M-step new estimates of parameters are determined by maximizing the lower-bound. Please see document [5] for a short tutorial of EM. This sub-section focuses on the practical general EM algorithm; the theory of the EM algorithm is described comprehensively in the article "Maximum Likelihood from Incomplete Data via the EM algorithm" [6].

Suppose O and X are observed data and missing (hidden) data, respectively. Note that O and X can be represented in any form such as discrete values, scalars, integer numbers, real numbers, vectors, lists, sequences, samples, and matrices. Let Θ represent the parameters of the probability distribution. Concretely, Θ includes the initial state distribution ∏, transition probability matrix A, and observation probability matrix B inside HMM. In other words, Θ represents the HMM ∆ itself. The EM algorithm aims to estimate Θ by finding the Θ̂ that maximizes the likelihood function L(Θ) = P(O|Θ):

Θ̂ = argmax_Θ L(Θ) = argmax_Θ P(O|Θ)

where Θ̂ is the optimal estimate of the parameters, usually called the parameter estimate. Because the likelihood function is a product of factors, it is replaced by the log-likelihood function LnL(Θ), the natural logarithm of the likelihood function L(Θ), for convenience. We have:

Θ̂ = argmax_Θ LnL(Θ) = argmax_Θ ln(L(Θ)) = argmax_Θ ln(P(O|Θ))

where LnL(Θ) = ln(L(Θ)) = ln(P(O|Θ)).

The method of finding the parameter estimate Θ̂ by maximizing the log-likelihood function is called maximum likelihood estimation (MLE). Of course, the EM algorithm is based on MLE. Suppose the current parameter is Θt after the t-th iteration. Next we must find the new estimate Θ̂ that maximizes the next log-likelihood function LnL(Θ); in other words, it maximizes the deviation between the current log-likelihood LnL(Θt) and the next log-likelihood LnL(Θ) with regard to Θ:

Θ̂ = argmax_Θ [LnL(Θ) – LnL(Θt)] = argmax_Θ D(Θ, Θt)

where D(Θ, Θt) = LnL(Θ) – LnL(Θt) denotes the deviation between the current log-likelihood LnL(Θt) and the next log-likelihood LnL(Θ), with note that D(Θ, Θt) is a function of Θ once Θt has been determined.

Suppose the total probability of the observed data can be determined by marginalizing over the missing data:

P(O|Θ) = Σ_X P(O|X, Θ) P(X|Θ)

This expansion of P(O|Θ) is the total probability rule [4, p. 101]. The deviation D(Θ, Θt) is re-written:

D(Θ, Θt) = LnL(Θ) – LnL(Θt) = ln(P(O|Θ)) – ln(P(O|Θt))
= ln(Σ_X P(O|X, Θ) P(X|Θ)) – ln(P(O|Θt))
= ln(Σ_X P(O, X|Θ)) – ln(P(O|Θt))
(due to the multiplication rule [4, p. 100])


= ln(Σ_X P(X|O, Θt) · P(O, X|Θ)/P(X|O, Θt)) – ln(P(O|Θt))

Because hidden X is a complete set of mutually exclusive variables, the sum of the conditional probabilities of X is equal to 1 given O and Θt:

Σ_X P(X|O, Θt) = 1

Applying Jensen's inequality [5, pp. 3-4],

ln(Σ_x λx·x) ≥ Σ_x λx·ln(x) where Σ_x λx = 1,

to the deviation D(Θ, Θt), we have:

D(Θ, Θt) ≥ [Σ_X P(X|O, Θt) ln(P(O, X|Θ)/P(X|O, Θt))] – ln(P(O|Θt))
= [Σ_X P(X|O, Θt) (ln(P(O, X|Θ)) – ln(P(X|O, Θt)))] – ln(P(O|Θt))
= [Σ_X P(X|O, Θt) ln(P(O, X|Θ))] – [Σ_X P(X|O, Θt) ln(P(X|O, Θt))] – ln(P(O|Θt))
= Σ_X P(X|O, Θt) ln(P(O, X|Θ)) + C

where

C = –[Σ_X P(X|O, Θt) ln(P(X|O, Θt))] – ln(P(O|Θt))

Because C is a constant with regard to Θ, it is possible to eliminate C in order to simplify the optimization criterion as follows:

Θ̂ = argmax_Θ D(Θ, Θt) ≈ argmax_Θ [Σ_X P(X|O, Θt) ln(P(O, X|Θ)) + C] = argmax_Θ Σ_X P(X|O, Θt) ln(P(O, X|Θ))

The expression Σ_X P(X|O, Θt) ln(P(O, X|Θ)) is essentially the expectation of ln(P(O, X|Θ)) with respect to the conditional probability distribution P(X|O, Θt), when P(X|O, Θt) is totally determined. Let E_{X|O,Θt}[ln(P(O, X|Θ))] denote this conditional expectation; equation (12) specifies the EM optimization criterion for determining the parameter estimate, which is the most important aspect of the EM algorithm:

Θ̂ = argmax_Θ E_{X|O,Θt}[ln(P(O, X|Θ))]   (12)

where

E_{X|O,Θt}[ln(P(O, X|Θ))] = Σ_X P(X|O, Θt) ln(P(O, X|Θ))

If P(X|O, Θt) is a continuous density function, the continuous version of this conditional expectation is:

E_{X|O,Θt}[ln(P(O, X|Θ))] = ∫_X P(X|O, Θt) ln(P(O, X|Θ)) dX

Finally, the EM algorithm is described in table 8.

Table 8. General EM algorithm.

Starting with the initial parameter Θ0, each iteration in the EM algorithm has two steps:
1. E-step: computing the conditional expectation E_{X|O,Θt}[ln(P(O, X|Θ))] based on the current parameter Θt according to (12).
2. M-step: finding the estimate Θ̂ that maximizes such conditional expectation. The next parameter Θt+1 is assigned the estimate Θ̂: Θt+1 = Θ̂. Of course Θt+1 becomes the current parameter for the next iteration. How to maximize the conditional expectation is an optimization problem which depends on the application; for example, a popular method to solve optimization problems is Lagrangian duality [7, p. 8].
The EM algorithm stops when it meets a terminating condition, for example, when the difference of the current parameter Θt and the next parameter Θt+1 is smaller than some pre-defined threshold ε: |Θt+1 – Θt| < ε. In addition, it is possible to define a custom terminating condition.
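The control flow of table 8 can be written as a small Python skeleton. This is only our sketch of the loop structure: expectation and maximize are hypothetical placeholders that each application must supply (for HMM they become the Baum-Welch E-step and M-step), and the scalar terminating test stands in for a suitable norm when Θ is a matrix of parameters:

    def em(theta0, expectation, maximize, eps=1e-6, max_iter=100):
        """Generic EM loop (table 8). expectation(theta) must return a function
        Q(theta') = E_{X|O,theta}[ln P(O, X | theta')]; maximize(Q) must return
        the theta' maximizing Q. Both are application-specific placeholders."""
        theta = theta0
        for _ in range(max_iter):
            Q = expectation(theta)              # E-step: build the conditional expectation
            theta_next = maximize(Q)            # M-step: maximize it
            if abs(theta_next - theta) < eps:   # terminating condition |theta_{t+1} - theta_t| < eps
                return theta_next
            theta = theta_next
        return theta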

The general EM algorithm is simple, but please pay attention to the concept of the lower-bound and what the essence of EM is. Recall that the next log-likelihood function LnL(Θ) is the current log-likelihood function LnL(Θt) plus the deviation D(Θ, Θt). We have:

LnL(Θ) = LnL(Θt) + D(Θ, Θt) ≥ LnL(Θt) + E_{X|O,Θt}[ln(P(O, X|Θ))] + C

where

C = –[Σ_X P(X|O, Θt) ln(P(X|O, Θt))] – ln(P(O|Θt))

Let lb(Θ, Θt) denote the lower-bound of the log-likelihood function LnL(Θ) given the current parameter Θt [5, pp. 7-8]. The lower-bound lb(Θ, Θt) is a function of Θ, as specified by (13):

lb(Θ, Θt) = LnL(Θt) + E_{X|O,Θt}[ln(P(O, X|Θ))] + C   (13)

Determining lb(Θ, Θt) is to calculate the EM conditional expectation E_{X|O,Θt}[ln(P(O, X|Θ))], because the terms LnL(Θt) and C were totally determined. The lower-bound lb(Θ, Θt) has the feature that its evaluation at Θ = Θt equals the current log-likelihood LnL(Θt):

lb(Θt, Θt) = LnL(Θt)


In fact,

lb(Θ, Θt) = LnL(Θt) + E_{X|O,Θt}[ln(P(O, X|Θ))] + C
= LnL(Θt) + [Σ_X P(X|O, Θt) ln(P(O, X|Θ))] – [Σ_X P(X|O, Θt) ln(P(X|O, Θt))] – ln(P(O|Θt))
= LnL(Θt) + [Σ_X P(X|O, Θt) ln(P(O, X|Θ)/P(X|O, Θt))] – ln(P(O|Θt))
= LnL(Θt) + [Σ_X P(X|O, Θt) ln(P(X|O, Θ)P(O|Θ)/P(X|O, Θt))] – ln(P(O|Θt))
(due to the multiplication rule [4, p. 100])

It implies

lb(Θt, Θt) = LnL(Θt) + [Σ_X P(X|O, Θt) ln(P(X|O, Θt)P(O|Θt)/P(X|O, Θt))] – ln(P(O|Θt))
= LnL(Θt) + [Σ_X P(X|O, Θt) ln(P(O|Θt))] – ln(P(O|Θt))
= LnL(Θt) + ln(P(O|Θt)) Σ_X P(X|O, Θt) – ln(P(O|Θt))
= LnL(Θt) + ln(P(O|Θt)) – ln(P(O|Θt))
(due to Σ_X P(X|O, Θt) = 1)
= LnL(Θt)

Fig. 4 [8, p. 7] shows the relationship between the log-likelihood function LnL(Θ) and its lower-bound lb(Θ, Θt).

Figure 4. Relationship between the log-likelihood function and its lower-bound.

The essence of maximizing the deviation D(Θ, Θt) is to maximize the lower-bound lb(Θ, Θt) with respect to Θ. For each iteration, the new lower-bound and its maximum are computed based on the previous lower-bound. A single iteration in the EM algorithm can be understood as below:
1. E-step: the new lower-bound lb(Θ, Θt) is determined based on the current parameter Θt according to (13). Of course, determining lb(Θ, Θt) is to calculate the EM conditional expectation E_{X|O,Θt}[ln(P(O, X|Θ))].
2. M-step: finding the estimate Θ̂ so that lb(Θ, Θt) reaches its maximum at Θ̂. The next parameter Θt+1 is assigned the estimate Θ̂: Θt+1 = Θ̂. Of course Θt+1 becomes the current parameter for the next iteration. Note that maximizing lb(Θ, Θt) is to maximize the EM conditional expectation E_{X|O,Θt}[ln(P(O, X|Θ))].

In general, it is easy to calculate the EM expectation E_{X|O,Θt}[ln(P(O, X|Θ))], but finding the estimate Θ̂ by maximizing such expectation is a complicated optimization problem. It is possible to state that the essence of the EM algorithm is to determine the estimate Θ̂. Now the EM algorithm has been introduced in full detail. How to apply it to solving the HMM learning problem is described in the successive sub-section.

4.2. Applying EM Algorithm into Solving Learning Problem

Now going back to the HMM learning problem, the EM algorithm is applied to solving this problem; this application is equivalently the well-known Baum-Welch algorithm [3]. The parameter Θ becomes the HMM model ∆ = (A, B, ∏). Recall that the learning problem is to adjust parameters by maximizing the probability of observation sequence O, as follows:

$$\hat{\Delta} = (\hat{A}, \hat{B}, \hat{\Pi}) = (\hat{a}_{ij}, \hat{b}_j(k), \hat{\pi}_j) = \operatorname*{argmax}_{\Delta} P(O|\Delta)$$

Where $\hat{a}_{ij}$, $\hat{b}_j(k)$, and $\hat{\pi}_j$ are parameter estimates, and so the purpose of HMM learning problem is to determine them. The observation sequence O = {o1, o2,…, oT} and state sequence X = {x1, x2,…, xT} are the observed data and the missing (hidden) data within the context of EM algorithm, respectively. Note that O and X are now represented as sequences. According to EM algorithm, the parameter estimate $\hat{\Delta}$ is determined as follows:

$$\hat{\Delta} = (\hat{a}_{ij}, \hat{b}_j(k), \hat{\pi}_j) = \operatorname*{argmax}_{\Delta} E_{X|O,\Delta_r}\{\ln P(O, X|\Delta)\}$$

Where $\Delta_r = (A_r, B_r, \Pi_r)$ is the known parameter at the current iteration. Note that we use the notation $\Delta_r$ instead of the popular notation $\Delta_t$ in order to distinguish iteration indices of EM algorithm from time points inside observation sequence O and state sequence X. The EM conditional expectation in accordance with HMM is:

$$\begin{aligned}
E_{X|O,\Delta_r}\{\ln P(O, X|\Delta)\} &= \sum_X P(X|O, \Delta_r)\ln P(O, X|\Delta)\\
&= \sum_X P(X|O, \Delta_r)\ln\big(P(O|X, \Delta)P(X|\Delta)\big)\\
&= \sum_X P(X|O, \Delta_r)\ln\big(P(o_1, o_2, \dots, o_T|x_1, x_2, \dots, x_T, \Delta) * P(x_1, x_2, \dots, x_T|\Delta)\big)
\end{aligned}$$


$$= \sum_X P(X|O, \Delta_r)\ln\big(P(o_1|x_1, x_2, \dots, x_T, \Delta) * P(o_2|x_1, x_2, \dots, x_T, \Delta) * \cdots * P(o_T|x_1, x_2, \dots, x_T, \Delta) * P(x_1, x_2, \dots, x_T|\Delta)\big)$$

(Because observations o1, o2,…, oT are mutually independent)

$$= \sum_X P(X|O, \Delta_r)\ln\big(P(o_1|x_1, \Delta) * P(o_2|x_2, \Delta) * \cdots * P(o_T|x_T, \Delta) * P(x_1, x_2, \dots, x_T|\Delta)\big)$$

(Because each observation ot is only dependent on the state xt)

$$= \sum_X P(X|O, \Delta_r)\ln\left(\left(\prod_{t=1}^T P(o_t|x_t, \Delta)\right) * P(x_1, x_2, \dots, x_T|\Delta)\right)$$

$$= \sum_X P(X|O, \Delta_r)\ln\left(\left(\prod_{t=1}^T P(o_t|x_t, \Delta)\right) * P(x_T|x_1, x_2, \dots, x_{T-1}, \Delta) * P(x_1, x_2, \dots, x_{T-1}|\Delta)\right)$$

$$= \sum_X P(X|O, \Delta_r)\ln\left(\left(\prod_{t=1}^T P(o_t|x_t, \Delta)\right) * P(x_T|x_{T-1}, \Delta) * P(x_1, x_2, \dots, x_{T-1}|\Delta)\right)$$

(Because each state xt is only dependent on its previous state xt–1)

$$= \sum_X P(X|O, \Delta_r)\ln\left(\left(\prod_{t=1}^T P(o_t|x_t, \Delta)\right) * P(x_T|x_{T-1}, \Delta) * P(x_{T-1}|x_{T-2}, \Delta) * \cdots * P(x_2|x_1, \Delta) * P(x_1|\Delta)\right)$$

(Due to recurrence on the probability P(x1, x2,…, xt))

$$= \sum_X P(X|O, \Delta_r)\ln\left(\left(\prod_{t=1}^T P(o_t|x_t, \Delta)\right) * \left(\prod_{t=2}^T P(x_t|x_{t-1}, \Delta)\right) * P(x_1|\Delta)\right)$$

It is conventional that $P(x_1|x_0, \Delta) = P(x_1|\Delta)$ where x0 is a pseudo-state. Equation (14) specifies the general EM conditional expectation for HMM:

$$E_{X|O,\Delta_r}\{\ln P(O, X|\Delta)\} = \sum_X P(X|O, \Delta_r)\ln\prod_{t=1}^T P(x_t|x_{t-1}, \Delta)P(o_t|x_t, \Delta) = \sum_X P(X|O, \Delta_r)\sum_{t=1}^T\big(\ln P(x_t|x_{t-1}, \Delta) + \ln P(o_t|x_t, \Delta)\big) \quad (14)$$

Let $I(x_{t-1}=s_i, x_t=s_j)$ and $I(x_t=s_j, o_t=\varphi_k)$ be two index functions so that

$$I(x_{t-1}=s_i, x_t=s_j) = \begin{cases}1 & \text{if } x_{t-1}=s_i \text{ and } x_t=s_j\\ 0 & \text{otherwise}\end{cases}$$

$$I(x_t=s_j, o_t=\varphi_k) = \begin{cases}1 & \text{if } x_t=s_j \text{ and } o_t=\varphi_k\\ 0 & \text{otherwise}\end{cases}$$

We have:

$$\begin{aligned}
E_{X|O,\Delta_r}\{\ln P(O, X|\Delta)\} &= \sum_X P(X|O, \Delta_r)\left(\sum_{t=1}^T \ln P(x_t|x_{t-1}, \Delta) + \sum_{t=1}^T \ln P(o_t|x_t, \Delta)\right)\\
&= \sum_X P(X|O, \Delta_r)\left(\sum_{i=1}^n\sum_{j=1}^n\sum_{t=1}^T I(x_{t-1}=s_i, x_t=s_j)\ln P(s_j|s_i, \Delta) + \sum_{j=1}^n\sum_{k=1}^m\sum_{t=1}^T I(x_t=s_j, o_t=\varphi_k)\ln P(\varphi_k|s_j, \Delta)\right)\\
&= \sum_X P(X|O, \Delta_r)\left(\sum_{i=1}^n\sum_{j=1}^n\sum_{t=1}^T I(x_{t-1}=s_i, x_t=s_j)\ln a_{ij} + \sum_{j=1}^n\sum_{k=1}^m\sum_{t=1}^T I(x_t=s_j, o_t=\varphi_k)\ln b_j(k)\right)
\end{aligned}$$

Because of the convention $P(x_1|x_0, \Delta) = P(x_1|\Delta)$, matrix ∏ is a degradation case of matrix A at time point t=1. In other words, the initial probability πj is equal to the transition probability aij from pseudo-state x0 to state x1=sj:

$$P(x_1=s_j|x_0, \Delta) = P(x_1=s_j|\Delta) = \pi_j$$

Note that n=|S| is the number of possible states and m=|Φ| is the number of possible observations.

Shortly, the EM conditional expectation for HMM is specified by (15):

$$E_{X|O,\Delta_r}\{\ln P(O, X|\Delta)\} = \sum_X P(X|O, \Delta_r)\left(\sum_{i=1}^n\sum_{j=1}^n\sum_{t=1}^T I(x_{t-1}=s_i, x_t=s_j)\ln a_{ij} + \sum_{j=1}^n\sum_{k=1}^m\sum_{t=1}^T I(x_t=s_j, o_t=\varphi_k)\ln b_j(k)\right) \quad (15)$$

Where,

$$I(x_{t-1}=s_i, x_t=s_j) = \begin{cases}1 & \text{if } x_{t-1}=s_i \text{ and } x_t=s_j\\ 0 & \text{otherwise}\end{cases}$$

$$I(x_t=s_j, o_t=\varphi_k) = \begin{cases}1 & \text{if } x_t=s_j \text{ and } o_t=\varphi_k\\ 0 & \text{otherwise}\end{cases}$$


$$P(x_1=s_j|x_0, \Delta) = P(x_1=s_j|\Delta) = \pi_j$$

Note that the conditional expectation $E_{X|O,\Delta_r}\{\ln P(O, X|\Delta)\}$ is a function of ∆. There are two constraints for HMM as follows:

$$\sum_{j=1}^n a_{ij} = 1 \;\;\forall i = 1, 2, \dots, n \qquad\text{and}\qquad \sum_{k=1}^m b_j(k) = 1 \;\;\forall j = 1, 2, \dots, n$$

Maximizing $E_{X|O,\Delta_r}\{\ln P(O, X|\Delta)\}$ subject to these constraints is an optimization problem that is solved by the Lagrangian duality theorem [7, p. 8]. The original optimization problem mentions minimizing a target function, but it is easy to infer that maximizing a target function shares the same methodology. Let l(∆, λ, µ) be the Lagrangian function constructed from $E_{X|O,\Delta_r}\{\ln P(O, X|\Delta)\}$ together with these constraints [9, p. 9]; we have (16) for specifying the HMM Lagrangian function as follows:

$$l(\Delta, \lambda, \mu) = l(a_{ij}, b_j(k), \lambda_i, \mu_j) = E_{X|O,\Delta_r}\{\ln P(O, X|\Delta)\} + \sum_{i=1}^n \lambda_i\left(1 - \sum_{j=1}^n a_{ij}\right) + \sum_{j=1}^n \mu_j\left(1 - \sum_{k=1}^m b_j(k)\right) \quad (16)$$

Where λ is the n-component vector λ = (λ1, λ2,…, λn) and µ is the n-component vector µ = (µ1, µ2,…, µn), one multiplier per constraint. The factors λi ≥ 0 and µj ≥ 0 are called Lagrange multipliers or Karush-Kuhn-Tucker multipliers [10] or dual variables. The expectation $E_{X|O,\Delta_r}\{\ln P(O, X|\Delta)\}$ is specified by (15).

The parameter estimate $\hat{\Delta}$ is the extreme point of the Lagrangian function. According to the Lagrangian duality theorem [11, p. 216] [7, p. 8], we have:

$$\hat{\Delta} = (\hat{A}, \hat{B}) = (\hat{a}_{ij}, \hat{b}_j(k)) = \operatorname*{argmax}_{a_{ij},\, b_j(k)} l(\Delta, \lambda, \mu)$$

$$(\hat{\lambda}, \hat{\mu}) = \operatorname*{argmin}_{\lambda,\, \mu} l(\Delta, \lambda, \mu)$$

The parameter estimate $\hat{\Delta} = (\hat{a}_{ij}, \hat{b}_j(k))$ is determined by setting the partial derivatives of l(∆, λ, µ) with respect to aij and bj(k) to zero. The partial derivative of l(∆, λ, µ) with respect to aij is:

$$\begin{aligned}
\frac{\partial l(\Delta, \lambda, \mu)}{\partial a_{ij}} &= \frac{\partial E_{X|O,\Delta_r}\{\ln P(O, X|\Delta)\}}{\partial a_{ij}} + \frac{\partial}{\partial a_{ij}}\left(\sum_{i=1}^n \lambda_i\left(1 - \sum_{j=1}^n a_{ij}\right)\right) + \frac{\partial}{\partial a_{ij}}\left(\sum_{j=1}^n \mu_j\left(1 - \sum_{k=1}^m b_j(k)\right)\right)\\
&= \frac{\partial E_{X|O,\Delta_r}\{\ln P(O, X|\Delta)\}}{\partial a_{ij}} - \lambda_i\\
&= \frac{\partial}{\partial a_{ij}}\left(\sum_X P(X|O, \Delta_r)\left(\sum_{i=1}^n\sum_{j=1}^n\sum_{t=1}^T I(x_{t-1}=s_i, x_t=s_j)\ln a_{ij} + \sum_{j=1}^n\sum_{k=1}^m\sum_{t=1}^T I(x_t=s_j, o_t=\varphi_k)\ln b_j(k)\right)\right) - \lambda_i\\
&= \left(\sum_X P(X|O, \Delta_r)\sum_{t=1}^T I(x_{t-1}=s_i, x_t=s_j)\frac{\partial\ln a_{ij}}{\partial a_{ij}}\right) - \lambda_i\\
&= \left(\sum_X P(X|O, \Delta_r)\sum_{t=1}^T I(x_{t-1}=s_i, x_t=s_j)\frac{1}{a_{ij}}\right) - \lambda_i\\
&= \frac{1}{a_{ij}}\left(\sum_X P(X|O, \Delta_r)\sum_{t=1}^T I(x_{t-1}=s_i, x_t=s_j)\right) - \lambda_i
\end{aligned}$$

Setting the partial derivative $\frac{\partial l(\Delta, \lambda, \mu)}{\partial a_{ij}}$ to zero:

$$\frac{\partial l(\Delta, \lambda, \mu)}{\partial a_{ij}} = 0 \Longleftrightarrow \frac{1}{a_{ij}}\left(\sum_X P(X|O, \Delta_r)\sum_{t=1}^T I(x_{t-1}=s_i, x_t=s_j)\right) - \lambda_i = 0$$

The parameter estimate $\hat{a}_{ij}$ is the solution of the equation $\frac{\partial l(\Delta, \lambda, \mu)}{\partial a_{ij}} = 0$; we have:

$$\hat{a}_{ij} = \frac{\sum_X P(X|O, \Delta_r)\sum_{t=1}^T I(x_{t-1}=s_i, x_t=s_j)}{\lambda_i}$$

It is required to estimate the Lagrange multiplier λi. The multiplier estimate $\hat{\lambda}_i$ is determined by setting the partial derivative of l(∆, λ, µ) with respect to λi to zero as follows:

$$\frac{\partial l(\Delta, \lambda, \mu)}{\partial \lambda_i} = 0$$

$$\Longrightarrow \frac{\partial E_{X|O,\Delta_r}\{\ln P(O, X|\Delta)\}}{\partial \lambda_i} + \frac{\partial}{\partial \lambda_i}\left(\sum_{i=1}^n \lambda_i\left(1 - \sum_{j=1}^n a_{ij}\right)\right) + \frac{\partial}{\partial \lambda_i}\left(\sum_{j=1}^n \mu_j\left(1 - \sum_{k=1}^m b_j(k)\right)\right) = 0$$

$$\Longrightarrow 1 - \sum_{j=1}^n a_{ij} = 0$$

Substituting $\hat{a}_{ij}$ for aij, we have:

$$1 - \sum_{j=1}^n \hat{a}_{ij} = 1 - \sum_{j=1}^n \frac{\sum_X P(X|O, \Delta_r)\sum_{t=1}^T I(x_{t-1}=s_i, x_t=s_j)}{\lambda_i} = 1 - \frac{1}{\lambda_i}\sum_{j=1}^n\sum_X P(X|O, \Delta_r)\sum_{t=1}^T I(x_{t-1}=s_i, x_t=s_j) = 0$$

It implies:

$$\hat{\lambda}_i = \sum_{j=1}^n\sum_X P(X|O, \Delta_r)\sum_{t=1}^T I(x_{t-1}=s_i, x_t=s_j) = \sum_X P(X|O, \Delta_r)\sum_{t=1}^T\sum_{j=1}^n I(x_{t-1}=s_i, x_t=s_j) = \sum_X P(X|O, \Delta_r)\sum_{t=1}^T I(x_{t-1}=s_i)$$

Where $I(x_{t-1}=s_i)$ is an index function:

$$I(x_{t-1}=s_i) = \begin{cases}1 & \text{if } x_{t-1}=s_i\\ 0 & \text{otherwise}\end{cases}$$

Substituting $\hat{\lambda}_i$ for λi inside

$$\hat{a}_{ij} = \frac{\sum_X P(X|O, \Delta_r)\sum_{t=1}^T I(x_{t-1}=s_i, x_t=s_j)}{\lambda_i}$$

We have:

$$\hat{a}_{ij} = \frac{\sum_X P(X|O, \Delta_r)\sum_{t=1}^T I(x_{t-1}=s_i, x_t=s_j)}{\sum_X P(X|O, \Delta_r)\sum_{t=1}^T I(x_{t-1}=s_i)}$$

Evaluating the numerator, we have:

$$\begin{aligned}
\sum_X P(X|O, \Delta_r)\sum_{t=1}^T I(x_{t-1}=s_i, x_t=s_j) &= \sum_{t=1}^T\sum_X I(x_{t-1}=s_i, x_t=s_j)P(X|O, \Delta_r)\\
&= \sum_{t=1}^T\sum_X I(x_{t-1}=s_i, x_t=s_j)P(x_1, \dots, x_{t-1}, x_t, \dots, x_T|O, \Delta_r)\\
&= \sum_{t=1}^T P(x_{t-1}=s_i, x_t=s_j|O, \Delta_r)\\
&\quad\text{(due to total probability rule [4, p. 101])}\\
&= \sum_{t=1}^T \frac{P(O, x_{t-1}=s_i, x_t=s_j|\Delta_r)}{P(O|\Delta_r)}\\
&\quad\text{(due to multiplication rule [4, p. 100])}\\
&= \frac{1}{P(O|\Delta_r)}\sum_{t=1}^T P(O, x_{t-1}=s_i, x_t=s_j|\Delta_r)
\end{aligned}$$

Evaluating the denominator, we have:

$$\begin{aligned}
\sum_X P(X|O, \Delta_r)\sum_{t=1}^T I(x_{t-1}=s_i) &= \sum_{t=1}^T\sum_X I(x_{t-1}=s_i)P(X|O, \Delta_r)\\
&= \sum_{t=1}^T\sum_X I(x_{t-1}=s_i)P(x_1, \dots, x_{t-1}, x_t, \dots, x_T|O, \Delta_r)\\
&= \sum_{t=1}^T P(x_{t-1}=s_i|O, \Delta_r)\\
&\quad\text{(due to total probability rule [4, p. 101])}\\
&= \sum_{t=1}^T \frac{P(O, x_{t-1}=s_i|\Delta_r)}{P(O|\Delta_r)}\\
&\quad\text{(due to multiplication rule [4, p. 100])}\\
&= \frac{1}{P(O|\Delta_r)}\sum_{t=1}^T P(O, x_{t-1}=s_i|\Delta_r)
\end{aligned}$$

It implies

$$\hat{a}_{ij} = \frac{\sum_X P(X|O, \Delta_r)\sum_{t=1}^T I(x_{t-1}=s_i, x_t=s_j)}{\sum_X P(X|O, \Delta_r)\sum_{t=1}^T I(x_{t-1}=s_i)} = \frac{\sum_{t=1}^T P(O, x_{t-1}=s_i, x_t=s_j|\Delta_r)}{\sum_{t=1}^T P(O, x_{t-1}=s_i|\Delta_r)}$$

Because of the convention $P(x_1|x_0, \Delta) = P(x_1|\Delta)$, the estimate $\hat{a}_{ij}$ is fixed as follows:

$$\hat{a}_{ij} = \frac{\sum_{t=2}^T P(O, x_{t-1}=s_i, x_t=s_j|\Delta_r)}{\sum_{t=2}^T P(O, x_{t-1}=s_i|\Delta_r)}$$

The estimate of the initial probability $\hat{\pi}_j$ is known as the specific estimate $\hat{a}_{ij}$ for the transition from pseudo-state x0 to state x1=sj. It means that

$$\hat{\pi}_j = \frac{P(O, x_1=s_j|\Delta_r)}{\sum_{i=1}^n P(O, x_1=s_i|\Delta_r)}$$

Recall that the parameter estimate $\hat{\Delta} = (\hat{a}_{ij}, \hat{b}_j(k))$ is determined by setting the partial derivatives of l(∆, λ, µ) with respect to aij and bj(k) to zero. The parameter estimate $\hat{a}_{ij}$ was determined. Now it is required to calculate the parameter estimate $\hat{b}_j(k)$. The partial derivative of the Lagrangian function l(∆, λ, µ) with respect to bj(k) is:

$$\begin{aligned}
\frac{\partial l(\Delta, \lambda, \mu)}{\partial b_j(k)} &= \frac{\partial E_{X|O,\Delta_r}\{\ln P(O, X|\Delta)\}}{\partial b_j(k)} + \frac{\partial}{\partial b_j(k)}\left(\sum_{i=1}^n \lambda_i\left(1 - \sum_{j=1}^n a_{ij}\right)\right) + \frac{\partial}{\partial b_j(k)}\left(\sum_{j=1}^n \mu_j\left(1 - \sum_{k=1}^m b_j(k)\right)\right)\\
&= \frac{\partial E_{X|O,\Delta_r}\{\ln P(O, X|\Delta)\}}{\partial b_j(k)} - \mu_j\\
&= \frac{\partial}{\partial b_j(k)}\left(\sum_X P(X|O, \Delta_r)\left(\sum_{i=1}^n\sum_{j=1}^n\sum_{t=1}^T I(x_{t-1}=s_i, x_t=s_j)\ln a_{ij} + \sum_{j=1}^n\sum_{k=1}^m\sum_{t=1}^T I(x_t=s_j, o_t=\varphi_k)\ln b_j(k)\right)\right) - \mu_j
\end{aligned}$$


$$\begin{aligned}
&= \left(\sum_X P(X|O, \Delta_r)\sum_{t=1}^T I(x_t=s_j, o_t=\varphi_k)\frac{\partial\ln b_j(k)}{\partial b_j(k)}\right) - \mu_j\\
&= \left(\sum_X P(X|O, \Delta_r)\sum_{t=1}^T I(x_t=s_j, o_t=\varphi_k)\frac{1}{b_j(k)}\right) - \mu_j\\
&= \frac{1}{b_j(k)}\left(\sum_X P(X|O, \Delta_r)\sum_{t=1}^T I(x_t=s_j, o_t=\varphi_k)\right) - \mu_j
\end{aligned}$$

Setting the partial derivative $\frac{\partial l(\Delta, \lambda, \mu)}{\partial b_j(k)}$ to zero:

$$\frac{\partial l(\Delta, \lambda, \mu)}{\partial b_j(k)} = 0 \Longleftrightarrow \frac{1}{b_j(k)}\left(\sum_X P(X|O, \Delta_r)\sum_{t=1}^T I(x_t=s_j, o_t=\varphi_k)\right) - \mu_j = 0$$

The parameter estimate $\hat{b}_j(k)$ is the solution of the equation $\frac{\partial l(\Delta, \lambda, \mu)}{\partial b_j(k)} = 0$; we have:

$$\hat{b}_j(k) = \frac{\sum_X P(X|O, \Delta_r)\sum_{t=1}^T I(x_t=s_j, o_t=\varphi_k)}{\mu_j}$$

It is required to estimate the Lagrange multiplier µj. The multiplier estimate $\hat{\mu}_j$ is determined by setting the partial derivative of l(∆, λ, µ) with respect to µj to zero as follows:

$$\frac{\partial l(\Delta, \lambda, \mu)}{\partial \mu_j} = 0$$

$$\Longrightarrow \frac{\partial E_{X|O,\Delta_r}\{\ln P(O, X|\Delta)\}}{\partial \mu_j} + \frac{\partial}{\partial \mu_j}\left(\sum_{i=1}^n \lambda_i\left(1 - \sum_{j=1}^n a_{ij}\right)\right) + \frac{\partial}{\partial \mu_j}\left(\sum_{j=1}^n \mu_j\left(1 - \sum_{k=1}^m b_j(k)\right)\right) = 0$$

$$\Longrightarrow 1 - \sum_{k=1}^m b_j(k) = 0$$

Substituting $\hat{b}_j(k)$ for bj(k), we have:

$$1 - \sum_{k=1}^m \hat{b}_j(k) = 1 - \sum_{k=1}^m \frac{\sum_X P(X|O, \Delta_r)\sum_{t=1}^T I(x_t=s_j, o_t=\varphi_k)}{\mu_j} = 1 - \frac{1}{\mu_j}\sum_{k=1}^m\sum_X P(X|O, \Delta_r)\sum_{t=1}^T I(x_t=s_j, o_t=\varphi_k) = 0$$

It implies:

$$\hat{\mu}_j = \sum_{k=1}^m\sum_X P(X|O, \Delta_r)\sum_{t=1}^T I(x_t=s_j, o_t=\varphi_k) = \sum_X P(X|O, \Delta_r)\sum_{t=1}^T\sum_{k=1}^m I(x_t=s_j, o_t=\varphi_k) = \sum_X P(X|O, \Delta_r)\sum_{t=1}^T I(x_t=s_j)$$

Where $I(x_t=s_j)$ is an index function:

$$I(x_t=s_j) = \begin{cases}1 & \text{if } x_t=s_j\\ 0 & \text{otherwise}\end{cases}$$

Substituting $\hat{\mu}_j$ for µj inside

$$\hat{b}_j(k) = \frac{\sum_X P(X|O, \Delta_r)\sum_{t=1}^T I(x_t=s_j, o_t=\varphi_k)}{\mu_j}$$

We have:

$$\hat{b}_j(k) = \frac{\sum_X P(X|O, \Delta_r)\sum_{t=1}^T I(x_t=s_j, o_t=\varphi_k)}{\sum_X P(X|O, \Delta_r)\sum_{t=1}^T I(x_t=s_j)}$$

Evaluating the numerator, we have:

$$\begin{aligned}
\sum_X P(X|O, \Delta_r)\sum_{t=1}^T I(x_t=s_j, o_t=\varphi_k) &= \sum_{t=1}^T\sum_X I(x_t=s_j, o_t=\varphi_k)P(X|O, \Delta_r)\\
&= \sum_{t=1}^T\sum_X I(x_t=s_j, o_t=\varphi_k)P(x_1, \dots, x_t, \dots, x_T|O, \Delta_r)\\
&= \sum_{\substack{t=1\\ o_t=\varphi_k}}^T P(x_t=s_j|O, \Delta_r)\\
&\quad\text{(due to total probability rule [4, p. 101])}\\
&= \sum_{\substack{t=1\\ o_t=\varphi_k}}^T \frac{P(O, x_t=s_j|\Delta_r)}{P(O|\Delta_r)}\\
&\quad\text{(due to multiplication rule [4, p. 100])}\\
&= \frac{1}{P(O|\Delta_r)}\sum_{\substack{t=1\\ o_t=\varphi_k}}^T P(O, x_t=s_j|\Delta_r)
\end{aligned}$$

Note, the expression $\sum_{t=1, o_t=\varphi_k}^T P(O, x_t=s_j|\Delta_r)$ expresses the sum of probabilities $P(O, x_t=s_j|\Delta_r)$ over the T time points, restricted to those with ot = φk.

Evaluating the denominator, we have:

$$\begin{aligned}
\sum_X P(X|O, \Delta_r)\sum_{t=1}^T I(x_t=s_j) &= \sum_{t=1}^T\sum_X I(x_t=s_j)P(X|O, \Delta_r)\\
&= \sum_{t=1}^T\sum_X I(x_t=s_j)P(x_1, \dots, x_t, \dots, x_T|O, \Delta_r)\\
&= \sum_{t=1}^T P(x_t=s_j|O, \Delta_r)\\
&\quad\text{(due to total probability rule [4, p. 101])}\\
&= \sum_{t=1}^T \frac{P(O, x_t=s_j|\Delta_r)}{P(O|\Delta_r)}\\
&\quad\text{(due to multiplication rule [4, p. 100])}\\
&= \frac{1}{P(O|\Delta_r)}\sum_{t=1}^T P(O, x_t=s_j|\Delta_r)
\end{aligned}$$

It implies

$$\hat{b}_j(k) = \frac{\sum_X P(X|O, \Delta_r)\sum_{t=1}^T I(x_t=s_j, o_t=\varphi_k)}{\sum_X P(X|O, \Delta_r)\sum_{t=1}^T I(x_t=s_j)} = \frac{\sum_{t=1, o_t=\varphi_k}^T P(O, x_t=s_j|\Delta_r)}{\sum_{t=1}^T P(O, x_t=s_j|\Delta_r)}$$

In general, the parameter estimate $\hat{\Delta} = (\hat{a}_{ij}, \hat{b}_j(k), \hat{\pi}_j)$ is totally determined as follows:

$$\hat{a}_{ij} = \frac{\sum_{t=2}^T P(O, x_{t-1}=s_i, x_t=s_j|\Delta_r)}{\sum_{t=2}^T P(O, x_{t-1}=s_i|\Delta_r)}$$

$$\hat{b}_j(k) = \frac{\sum_{t=1, o_t=\varphi_k}^T P(O, x_t=s_j|\Delta_r)}{\sum_{t=1}^T P(O, x_t=s_j|\Delta_r)}$$

$$\hat{\pi}_j = \frac{P(O, x_1=s_j|\Delta_r)}{\sum_{i=1}^n P(O, x_1=s_i|\Delta_r)}$$

As a convention, we use the notation ∆ instead of ∆r for denoting the known HMM at the current iteration of EM algorithm. We have (17) for specifying the HMM parameter estimate $\hat{\Delta} = (\hat{a}_{ij}, \hat{b}_j(k), \hat{\pi}_j)$ given current parameter ∆ = (aij, bj(k), πj) as follows:

$$\hat{a}_{ij} = \frac{\sum_{t=2}^T P(O, x_{t-1}=s_i, x_t=s_j|\Delta)}{\sum_{t=2}^T P(O, x_{t-1}=s_i|\Delta)}$$

$$\hat{b}_j(k) = \frac{\sum_{t=1, o_t=\varphi_k}^T P(O, x_t=s_j|\Delta)}{\sum_{t=1}^T P(O, x_t=s_j|\Delta)} \quad (17)$$

$$\hat{\pi}_j = \frac{P(O, x_1=s_j|\Delta)}{\sum_{i=1}^n P(O, x_1=s_i|\Delta)}$$

The parameter estimate $\hat{\Delta} = (\hat{a}_{ij}, \hat{b}_j(k), \hat{\pi}_j)$ is the ultimate solution of the learning problem. As seen in (17), it is necessary to calculate the probabilities P(O, xt–1=si, xt=sj|∆) and P(O, xt–1=si|∆), while the other probabilities P(O, xt=sj|∆), P(O, x1=si|∆), and P(O, x1=sj|∆) are represented by the joint probability γt specified by (7):

$$P(O, x_t=s_j|\Delta) = \gamma_t(j) = \alpha_t(j)\beta_t(j)$$

$$P(O, x_1=s_i|\Delta) = \gamma_1(i) = \alpha_1(i)\beta_1(i)$$

$$P(O, x_1=s_j|\Delta) = \gamma_1(j) = \alpha_1(j)\beta_1(j)$$

Let ξt(i, j) be the joint probability that the stochastic process receives state si at time point t–1 and state sj at time point t, given observation sequence O [3, p. 264]:

$$\xi_t(i, j) = P(O, x_{t-1}=s_i, x_t=s_j|\Delta)$$

Given forward variable αt and backward variable βt, if $t \geq 2$, we have:

$$\begin{aligned}
\alpha_{t-1}(i)a_{ij}b_j(o_t)\beta_t(j) &= P(o_1, o_2, \dots, o_{t-1}, x_{t-1}=s_i|\Delta) * P(x_t=s_j|x_{t-1}=s_i) * b_j(o_t) * \beta_t(j)\\
&= P(o_1, o_2, \dots, o_{t-1}|x_{t-1}=s_i, \Delta) * P(x_{t-1}=s_i|\Delta) * P(x_t=s_j|x_{t-1}=s_i) * b_j(o_t) * \beta_t(j)
\end{aligned}$$

(Due to multiplication rule [4, p. 100])

$$= P(o_1, o_2, \dots, o_{t-1}, x_t=s_j|x_{t-1}=s_i, \Delta) * P(x_{t-1}=s_i|\Delta) * b_j(o_t) * \beta_t(j)$$

(Because the partial observation sequence {o1, o2,…, ot–1} is independent from current state xt given previous state xt–1)

$$= P(o_1, o_2, \dots, o_{t-1}, x_{t-1}=s_i, x_t=s_j|\Delta) * b_j(o_t) * \beta_t(j)$$

$$= P(o_1, o_2, \dots, o_{t-1}, x_{t-1}=s_i|x_t=s_j, \Delta) * P(x_t=s_j|\Delta) * b_j(o_t) * \beta_t(j)$$

(Due to multiplication rule [4, p. 100])

$$= P(o_1, o_2, \dots, o_{t-1}, x_{t-1}=s_i|x_t=s_j, \Delta) * P(x_t=s_j|\Delta) * P(o_t|x_t=s_j) * P(o_{t+1}, o_{t+2}, \dots, o_T|x_t=s_j, \Delta)$$

$$= P(o_1, o_2, \dots, o_{t-1}, x_{t-1}=s_i|x_t=s_j, \Delta) * P(x_t=s_j|\Delta) * P(o_t, o_{t+1}, o_{t+2}, \dots, o_T|x_t=s_j, \Delta)$$

(Because observations ot, ot+1, ot+2,…, oT are mutually independent)

$$= P(o_1, o_2, \dots, o_{t-1}, o_t, o_{t+1}, \dots, o_T, x_{t-1}=s_i|x_t=s_j, \Delta) * P(x_t=s_j|\Delta)$$

(Due to multiplication rule [4, p. 100])

$$= P(o_1, o_2, \dots, o_T, x_{t-1}=s_i, x_t=s_j|\Delta)$$

(Due to multiplication rule [4, p. 100])

$$= P(O, x_{t-1}=s_i, x_t=s_j|\Delta) = \xi_t(i, j)$$

In general, equation (18) determines the joint probability ξt(i, j) based on forward variable αt and backward variable βt:

$$\xi_t(i, j) = \alpha_{t-1}(i)a_{ij}b_j(o_t)\beta_t(j) \text{ where } t \geq 2 \quad (18)$$

Where forward variable αt and backward variable βt are calculated by the previous recurrence equations (2) and (5):

$$\alpha_{t+1}(j) = \left(\sum_{i=1}^n \alpha_t(i)a_{ij}\right)b_j(o_{t+1})$$

$$\beta_t(i) = \sum_{j=1}^n a_{ij}b_j(o_{t+1})\beta_{t+1}(j)$$

Shortly, the joint probability ξt(i, j) is constructed from the forward variable and the backward variable, as seen in fig. 5 [3, p. 264].

Figure 5. Construction of the joint probability ξt(i, j).
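In code, the whole E-step reduces to these formulas. The following NumPy sketch is an illustration, not taken from the tutorial; it assumes A is the n×n transition matrix, B the n×m observation matrix, pi the initial distribution, and obs a sequence of 0-based observation indices. It evaluates the recurrences (2) and (5), then γt(j) per (7) and ξt(i, j) per (18):

    import numpy as np

    def forward_backward(A, B, pi, obs):
        """Forward variables alpha[t][i] per (2) and backward variables
        beta[t][i] per (5), with 0-based time indices."""
        T, n = len(obs), len(pi)
        alpha = np.zeros((T, n))
        beta = np.zeros((T, n))
        alpha[0] = pi * B[:, obs[0]]              # alpha_1(i) = b_i(o_1) * pi_i
        for t in range(1, T):
            alpha[t] = (alpha[t-1] @ A) * B[:, obs[t]]
        beta[T-1] = 1.0                           # beta_T(i) = 1
        for t in range(T-2, -1, -1):
            beta[t] = A @ (B[:, obs[t+1]] * beta[t+1])
        return alpha, beta

    def joint_probabilities(A, B, alpha, beta, obs):
        """gamma[t][j] = alpha_t(j)*beta_t(j) per (7); xi[t] holds
        xi_t(i, j) = alpha_{t-1}(i)*a_ij*b_j(o_t)*beta_t(j) per (18), t >= 2."""
        T, n = alpha.shape
        gamma = alpha * beta
        xi = np.zeros((T, n, n))
        for t in range(1, T):
            xi[t] = alpha[t-1][:, None] * A * (B[:, obs[t]] * beta[t])[None, :]
        return gamma, xi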

Recall that γt(j) is the joint probability that the stochastic process is in state sj at time point t with observation sequence O = {o1, o2,…, oT}, specified by (7):

$$\gamma_t(j) = P(O, x_t=s_j|\Delta) = \alpha_t(j)\beta_t(j)$$

According to the total probability rule [4, p. 101], it is easy to infer that γt is the sum of ξt over all states for $t \geq 2$, as seen in (19):

$$\forall t \geq 2: \quad \gamma_t(j) = \sum_{i=1}^n \xi_t(i, j) \quad\text{and}\quad \gamma_{t-1}(i) = \sum_{j=1}^n \xi_t(i, j) \quad (19)$$

Deriving from (18) and (19), we have:

$$P(O, x_{t-1}=s_i, x_t=s_j|\Delta) = \xi_t(i, j)$$

$$P(O, x_{t-1}=s_i|\Delta) = \sum_{j=1}^n \xi_t(i, j), \quad \forall t \geq 2$$

$$P(O, x_t=s_j|\Delta) = \gamma_t(j)$$

$$P(O, x_1=s_j|\Delta) = \gamma_1(j)$$

By extending (17), we receive (20) for specifying the HMM parameter estimate $\hat{\Delta} = (\hat{a}_{ij}, \hat{b}_j(k), \hat{\pi}_j)$ given current parameter ∆ = (aij, bj(k), πj) in detail:

$$\hat{a}_{ij} = \frac{\sum_{t=2}^T \xi_t(i, j)}{\sum_{t=2}^T \sum_{l=1}^n \xi_t(i, l)}$$

$$\hat{b}_j(k) = \frac{\sum_{t=1, o_t=\varphi_k}^T \gamma_t(j)}{\sum_{t=1}^T \gamma_t(j)} \quad (20)$$

$$\hat{\pi}_j = \frac{\gamma_1(j)}{\sum_{i=1}^n \gamma_1(i)}$$
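The M-step of (20) is then nothing but normalized sums of ξ and γ. A minimal sketch continuing the arrays produced by the previous snippet (denominators are assumed positive here, which holds whenever P(O|∆) > 0):

    import numpy as np

    def reestimate(gamma, xi, obs, m):
        """M-step per (20): new estimates of A, B, and pi from the joint
        probabilities gamma[t][j] and xi[t][i][j] (xi[0] is unused)."""
        n = gamma.shape[1]
        obs = np.asarray(obs)
        A_new = xi[1:].sum(axis=0)                    # sum over t = 2..T of xi_t(i, j)
        A_new /= A_new.sum(axis=1, keepdims=True)     # normalize over destination j
        B_new = np.zeros((n, m))
        for k in range(m):
            B_new[:, k] = gamma[obs == k].sum(axis=0) # sum of gamma_t(j) where o_t = phi_k
        B_new /= gamma.sum(axis=0)[:, None]           # divide by sum over all t
        pi_new = gamma[0] / gamma[0].sum()            # normalized gamma_1(j)
        return A_new, B_new, pi_new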

The following are interpretations relevant to the joint probabilities ξt(i, j) and γt(j) with observation sequence O:

- The sum $\sum_{t=2}^T \xi_t(i, j)$ expresses the expected number of transitions from state si to state sj [3, p. 265].
- The double sum $\sum_{t=2}^T \sum_{l=1}^n \xi_t(i, l)$ expresses the expected number of transitions from state si [3, p. 265].
- The sum $\sum_{t=1, o_t=\varphi_k}^T \gamma_t(j)$ expresses the expected number of times in state sj and in observation φk [3, p. 265].
- The sum $\sum_{t=1}^T \gamma_t(j)$ expresses the expected number of times in state sj [3, p. 265].

The following are interpretations of the parameter estimate $\hat{\Delta} = (\hat{a}_{ij}, \hat{b}_j(k), \hat{\pi}_j)$:

- The transition estimate $\hat{a}_{ij}$ is the expected frequency of transitions from state si to state sj.
- The observation estimate $\hat{b}_j(k)$ is the expected frequency of times in state sj and in observation φk.
- The initial estimate $\hat{\pi}_j$ is the (normalized) expected frequency of state sj at the first time point (t=1).

It is easy to infer that the parameter estimate $\hat{\Delta} = (\hat{a}_{ij}, \hat{b}_j(k), \hat{\pi}_j)$ is based on the joint probabilities ξt(i, j) and γt(j), which, in turn, are based on the current parameter ∆ = (aij, bj(k), πj).

The EM conditional expectation $E_{X|O,\Delta_r}\{\ln P(O, X|\Delta)\}$ is determined by the joint probabilities ξt(i, j) and γt(j); so the main task of the E-step in EM algorithm is essentially to calculate the joint probabilities ξt(i, j) and γt(j) according to (18) and (7). The EM conditional expectation $E_{X|O,\Delta_r}\{\ln P(O, X|\Delta)\}$ gets maximal at the estimate $\hat{\Delta} = (\hat{a}_{ij}, \hat{b}_j(k), \hat{\pi}_j)$, and so the main task of the M-step in EM algorithm is essentially to calculate $\hat{a}_{ij}$, $\hat{b}_j(k)$, and $\hat{\pi}_j$ according to (20). The EM algorithm as interpreted for the HMM learning problem is shown in table 9.

Table 9. EM algorithm for HMM learning problem.

Starting with an initial value for ∆, each iteration in EM algorithm has two steps:

1. E-step: Calculating the joint probabilities ξt(i, j) and γt(j) according to (18) and (7) given current parameter ∆ = (aij, bj(k), πj):

$$\xi_t(i, j) = \alpha_{t-1}(i)a_{ij}b_j(o_t)\beta_t(j) \text{ where } t \geq 2$$

$$\gamma_t(j) = P(O, x_t=s_j|\Delta) = \alpha_t(j)\beta_t(j)$$

Where forward variable αt and backward variable βt are calculated by (2) and (5).

2. M-step: Calculating the estimate $\hat{\Delta} = (\hat{a}_{ij}, \hat{b}_j(k), \hat{\pi}_j)$ based on the joint probabilities ξt(i, j) and γt(j) determined at the E-step, according to (20):

$$\hat{a}_{ij} = \frac{\sum_{t=2}^T \xi_t(i, j)}{\sum_{t=2}^T \sum_{l=1}^n \xi_t(i, l)}, \quad \hat{b}_j(k) = \frac{\sum_{t=1, o_t=\varphi_k}^T \gamma_t(j)}{\sum_{t=1}^T \gamma_t(j)}, \quad \hat{\pi}_j = \frac{\gamma_1(j)}{\sum_{i=1}^n \gamma_1(i)}$$

The estimate $\hat{\Delta}$ becomes the current parameter for the next iteration. EM algorithm stops when it meets the terminating condition, for example, when the difference between the current parameter ∆ and the next parameter $\hat{\Delta}$ is insignificant. It is possible to define a custom terminating condition.
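In code, one iteration of table 9 is simply the composition of the two sketches given earlier (the function names forward_backward, joint_probabilities, and reestimate are mine, not the tutorial's):

    def em_iteration(A, B, pi, obs):
        """One Baum-Welch iteration per table 9: the E-step builds gamma and xi,
        the M-step normalizes them according to (20)."""
        alpha, beta = forward_backward(A, B, pi, obs)
        gamma, xi = joint_probabilities(A, B, alpha, beta, obs)
        return reestimate(gamma, xi, obs, m=B.shape[1])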

The algorithm to solve the HMM learning problem shown in table 9 is known as the Baum-Welch algorithm [3]. Please see the document "Hidden Markov Models Fundamentals" by [9, pp. 8-13] for more details about the HMM learning problem. As aforementioned in sub-section 4.1, the essence of EM algorithm applied to the HMM learning problem is to determine the estimate $\hat{\Delta} = (\hat{a}_{ij}, \hat{b}_j(k), \hat{\pi}_j)$.

As seen in table 9, it is not difficult to run the E-step and M-step of EM algorithm, but how to determine the terminating condition is a considerable problem. It is better to establish a computational terminating criterion instead of applying the general statement "EM algorithm stops when it meets the terminating condition, for example, the difference of current parameter ∆ and next parameter $\hat{\Delta}$ is insignificant". Going back to the learning problem that EM algorithm solves: the EM algorithm aims to maximize the probability P(O|∆) of a given observation sequence O = (o1, o2,…, oT) so as to find out the estimate $\hat{\Delta}$. Maximizing the probability P(O|∆) is equivalent to maximizing the conditional expectation. So it is easy to infer that EM algorithm stops when the probability P(O|∆) approaches its maximal value and EM algorithm cannot maximize P(O|∆) any more. In other words, the probability P(O|∆) is the terminating criterion. Calculating the criterion P(O|∆) is the evaluation problem described in section 2. The criterion P(O|∆) is determined according to the forward-backward procedure; please see tables 4 and 5 for more details about the forward-backward procedure.
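Since the forward variables are already produced in the E-step, evaluating this criterion costs only one extra summation. Reusing the alpha array of the earlier sketch, it is a single line:

    # Terminating criterion P(O|Delta) = sum over i of alpha_T(i),
    # reusing the forward variables already computed in the E-step.
    criterion = alpha[-1].sum()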

At the end of the M-step, the next criterion P(O|$\hat{\Delta}$), which is calculated based on the next parameter (the estimate) $\hat{\Delta}$, is compared with the current criterion P(O|∆) that was calculated previously. If these two criteria are the same or there is no significant difference between them, then EM algorithm stops; this implies EM algorithm cannot maximize P(O|∆) any more. However, calculating the next criterion P(O|$\hat{\Delta}$) according to the forward-backward procedure causes EM algorithm to run slowly. This drawback is overcome by the following observation and improvement: the essence of the forward-backward procedure is to determine the forward variables αt, while EM algorithm must calculate all forward variables and backward variables in its learning process (E-step) anyway. Thus, the evaluation of the terminating condition is accelerated by executing the forward-backward procedure inside the E-step of EM algorithm. In other words, when EM algorithm produces the forward variables in the E-step, the forward-backward procedure takes advantage of such forward variables so as to determine the criterion P(O|∆) at the same time. As a result, the speed of EM algorithm does not decrease. However, there is always one redundant iteration: suppose that the terminating criterion approaches its maximal value at the end of the r-th iteration, but the EM algorithm only stops at the E-step of the (r+1)-th iteration when it really evaluates the terminating criterion. In general, the terminating criterion P(O|∆) is calculated based on the current parameter ∆ at the E-step instead of the estimate $\hat{\Delta}$ at the M-step. Table 10 shows the proposed implementation of EM algorithm with terminating criterion P(O|∆). Pseudo-code similar to the programming language C is used to describe the implementation; comments are introduced by the signs // and /*. Please pay attention to programming-language keywords: while, for, if, [], ==, !=, &&, //, /*, */, etc. For example, the notation [] denotes the array index operation; concretely, α[t][i] denotes the forward variable αt(i) at time point t with regard to state si.

Table 10. Proposed implementation of EM algorithm for learning HMM with terminating criterion P(O|∆).

    /*
    Input: HMM with current parameter ∆ = {aij, πj, bjk}
           Observation sequence O = {o1, o2,…, oT}
    Output: HMM with optimized parameter ∆ = {aij, πj, bjk}
    */
    Allocate memory for two matrices α and β representing forward variables and backward variables.
    previous_criterion = –1
    current_criterion = –1
    iteration = 0
    /* The pre-defined number MAX_ITERATION is used to prevent an infinite loop. */
    MAX_ITERATION = 10000
    While (iteration < MAX_ITERATION)
        // Calculating forward variables and backward variables
        For t = 1 to T
            For i = 1 to n
                Calculate forward variable α[t][i] and backward variable β[T–t+1][i]
                based on observation sequence O according to (2) and (5).
            End for i
        End for t
        // Calculating terminating criterion current_criterion = P(O|∆)
        current_criterion = 0
        For i = 1 to n
            current_criterion = current_criterion + α[T][i]
        End for i
        // Terminating condition
        If previous_criterion >= 0 && previous_criterion == current_criterion then
            break // breaking out of the loop, the algorithm stops
        Else
            previous_criterion = current_criterion
        End if
        // Updating transition probability matrix
        For i = 1 to n
            denominator = 0
            Allocate numerators as a 1-dimension array of n zero elements.
            For t = 2 to T
                For k = 1 to n
                    ξ = α[t–1][i] * aik * bk(ot) * β[t][k]
                    numerators[k] = numerators[k] + ξ
                    denominator = denominator + ξ
                End for k
            End for t
            If denominator != 0 then
                For j = 1 to n
                    aij = numerators[j] / denominator
                End for j
            End if
        End for i
        // Updating initial probability matrix
        Allocate g as a 1-dimension array of n elements.
        sum = 0
        For j = 1 to n
            g[j] = α[1][j] * β[1][j]
            sum = sum + g[j]
        End for j
        If sum != 0 then
            For j = 1 to n
                πj = g[j] / sum
            End for j
        End if
        // Updating observation probability distribution
        For j = 1 to n
            Allocate γ as a 1-dimension array of T elements.
            denominator = 0
            For t = 1 to T
                γ[t] = α[t][j] * β[t][j]
                denominator = denominator + γ[t]
            End for t
            Let m be the number of columns of observation distribution matrix B.
            For k = 1 to m
                numerator = 0
                For t = 1 to T
                    If ot == k then
                        numerator = numerator + γ[t]
                    End if
                End for t
                If denominator != 0 then
                    bjk = numerator / denominator
                End if
            End for k
        End for j
        iteration = iteration + 1
    End while
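Table 10 is deliberately language-agnostic. For readers who prefer something executable, below is a compact Python/NumPy translation of the same logic. It is a sketch under a few assumptions of mine (row-stochastic A, n×m matrix B, 0-based observation indices in obs), and it relaxes the exact-equality terminating test of table 10 to a small tolerance, which is the usual practice in floating-point arithmetic:

    import numpy as np

    def baum_welch(A, B, pi, obs, max_iteration=10000, tol=1e-12):
        """EM (Baum-Welch) learning with terminating criterion P(O|Delta),
        following the structure of table 10."""
        A, B, pi = A.copy(), B.copy(), pi.copy()
        obs = np.asarray(obs)
        T, (n, m) = len(obs), B.shape
        previous_criterion = -1.0
        for _ in range(max_iteration):
            # E-step: forward variables (2) and backward variables (5)
            alpha = np.zeros((T, n)); beta = np.zeros((T, n))
            alpha[0] = pi * B[:, obs[0]]
            for t in range(1, T):
                alpha[t] = (alpha[t-1] @ A) * B[:, obs[t]]
            beta[T-1] = 1.0
            for t in range(T-2, -1, -1):
                beta[t] = A @ (B[:, obs[t+1]] * beta[t+1])
            # terminating criterion P(O|Delta) evaluated inside the E-step
            current_criterion = alpha[-1].sum()
            if previous_criterion >= 0 and abs(previous_criterion - current_criterion) < tol:
                break
            previous_criterion = current_criterion
            # M-step per (20)
            gamma = alpha * beta
            xi = np.zeros((T, n, n))
            for t in range(1, T):
                xi[t] = alpha[t-1][:, None] * A * (B[:, obs[t]] * beta[t])[None, :]
            trans = xi[1:].sum(axis=0)
            for i in range(n):                 # keep a row unchanged if its mass is zero
                row_mass = trans[i].sum()
                if row_mass > 0:
                    A[i] = trans[i] / row_mass
            if gamma[0].sum() > 0:
                pi = gamma[0] / gamma[0].sum()
            state_mass = gamma.sum(axis=0)
            for j in range(n):
                if state_mass[j] > 0:
                    for k in range(m):
                        B[j, k] = gamma[obs == k, j].sum() / state_mass[j]
        return A, B, pi

Calling baum_welch with the weather parameters of tables 1, 2, and 3 and obs = [3, 0, 1] (soggy, dry, dryish) walks through the same iterations as the worked example reported below.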

According to table 10, the number of iterations is limited by a pre-defined maximum number, which aims to prevent a so-called infinite loop. Although it is proved that EM algorithm always converges, there may be two different estimates $\hat{\Delta}_1$ and $\hat{\Delta}_2$ at the final convergence. This situation causes EM algorithm to alternate between $\hat{\Delta}_1$ and $\hat{\Delta}_2$ in an infinite loop: the final estimate $\hat{\Delta}_1$ or $\hat{\Delta}_2$ is totally determined, but the EM algorithm does not stop. This is the reason that the number of iterations is limited by a pre-defined maximum number.

Going back to the given weather HMM ∆ whose parameters A, B, and ∏ are specified in tables 1, 2, and 3, suppose the observation sequence is O = {o1=φ4=soggy, o2=φ1=dry, o3=φ2=dryish}. The EM algorithm and its implementation described in tables 9 and 10 are applied to calculating the parameter estimate $\hat{\Delta} = (\hat{a}_{ij}, \hat{b}_j(k), \hat{\pi}_j)$, which is the ultimate solution of the learning problem, as below.

At the first iteration (r=1) we have:

$$\alpha_1(1) = b_1(o_1=\varphi_4)\pi_1 = b_{14}\pi_1 = 0.0165$$

$$\alpha_1(2) = b_2(o_1=\varphi_4)\pi_2 = b_{24}\pi_2 = 0.0825$$

$$\alpha_1(3) = b_3(o_1=\varphi_4)\pi_3 = b_{34}\pi_3 = 0.165$$

$$\alpha_2(1) = \left(\sum_{i=1}^3 \alpha_1(i)a_{i1}\right)b_1(o_2=\varphi_1) = 0.04455$$

$$\alpha_2(2) = \left(\sum_{i=1}^3 \alpha_1(i)a_{i2}\right)b_2(o_2=\varphi_1) = 0.01959375$$

$$\alpha_2(3) = \left(\sum_{i=1}^3 \alpha_1(i)a_{i3}\right)b_3(o_2=\varphi_1) = 0.00556875$$

$$\alpha_3(1) = \left(\sum_{i=1}^3 \alpha_2(i)a_{i1}\right)b_1(o_3=\varphi_2) = 0.0059090625$$

$$\alpha_3(2) = \left(\sum_{i=1}^3 \alpha_2(i)a_{i2}\right)b_2(o_3=\varphi_2) = 0.005091796875$$

$$\alpha_3(3) = \left(\sum_{i=1}^3 \alpha_2(i)a_{i3}\right)b_3(o_3=\varphi_2) = 0.00198$$

$$\beta_3(1) = \beta_3(2) = \beta_3(3) = 1$$

$$\beta_2(1) = \sum_{j=1}^3 a_{1j}b_j(o_3=\varphi_2)\beta_3(j) = 0.1875$$

$$\beta_2(2) = \sum_{j=1}^3 a_{2j}b_j(o_3=\varphi_2)\beta_3(j) = 0.19$$

$$\beta_2(3) = \sum_{j=1}^3 a_{3j}b_j(o_3=\varphi_2)\beta_3(j) = 0.1625$$

$$\beta_1(1) = \sum_{j=1}^3 a_{1j}b_j(o_2=\varphi_1)\beta_2(j) = 0.07015625$$

$$\beta_1(2) = \sum_{j=1}^3 a_{2j}b_j(o_2=\varphi_1)\beta_2(j) = 0.0551875$$

$$\beta_1(3) = \sum_{j=1}^3 a_{3j}b_j(o_2=\varphi_1)\beta_2(j) = 0.0440625$$

Within the E-step of the first iteration (r=1), the terminating criterion P(O|∆) is calculated according to the forward-backward procedure (see table 4) as follows:

$$P(O|\Delta) = \alpha_3(1) + \alpha_3(2) + \alpha_3(3) \approx 0.013$$

Within the E-step of the first iteration (r=1), the joint probabilities ξt(i, j) and γt(j) are calculated based on (18) and (7) as follows:

$$\xi_2(1,1) = \alpha_1(1)a_{11}b_1(o_2=\varphi_1)\beta_2(1) = 0.000928125$$
$$\xi_2(1,2) = \alpha_1(1)a_{12}b_2(o_2=\varphi_1)\beta_2(2) = 0.0001959375$$
$$\xi_2(1,3) = \alpha_1(1)a_{13}b_3(o_2=\varphi_1)\beta_2(3) = 0.000033515625$$
$$\xi_2(2,1) = \alpha_1(2)a_{21}b_1(o_2=\varphi_1)\beta_2(1) = 0.002784375$$
$$\xi_2(2,2) = \alpha_1(2)a_{22}b_2(o_2=\varphi_1)\beta_2(2) = 0.0015675$$
$$\xi_2(2,3) = \alpha_1(2)a_{23}b_3(o_2=\varphi_1)\beta_2(3) = 0.00020109375$$
$$\xi_2(3,1) = \alpha_1(3)a_{31}b_1(o_2=\varphi_1)\beta_2(1) = 0.004640625$$
$$\xi_2(3,2) = \alpha_1(3)a_{32}b_2(o_2=\varphi_1)\beta_2(2) = 0.001959375$$
$$\xi_2(3,3) = \alpha_1(3)a_{33}b_3(o_2=\varphi_1)\beta_2(3) = 0.0006703125$$
$$\xi_3(1,1) = \alpha_2(1)a_{11}b_1(o_3=\varphi_2)\beta_3(1) = 0.004455$$
$$\xi_3(1,2) = \alpha_2(1)a_{12}b_2(o_3=\varphi_2)\beta_3(2) = 0.002784375$$
$$\xi_3(1,3) = \alpha_2(1)a_{13}b_3(o_3=\varphi_2)\beta_3(3) = 0.00111375$$
$$\xi_3(2,1) = \alpha_2(2)a_{21}b_1(o_3=\varphi_2)\beta_3(1) = 0.001175625$$
$$\xi_3(2,2) = \alpha_2(2)a_{22}b_2(o_3=\varphi_2)\beta_3(2) = 0.001959375$$
$$\xi_3(2,3) = \alpha_2(2)a_{23}b_3(o_3=\varphi_2)\beta_3(3) = 0.0005878125$$
$$\xi_3(3,1) = \alpha_2(3)a_{31}b_1(o_3=\varphi_2)\beta_3(1) = 0.0002784375$$
$$\xi_3(3,2) = \alpha_2(3)a_{32}b_2(o_3=\varphi_2)\beta_3(2) = 0.000348046875$$
$$\xi_3(3,3) = \alpha_2(3)a_{33}b_3(o_3=\varphi_2)\beta_3(3) = 0.0002784375$$
$$\gamma_1(1) = \alpha_1(1)\beta_1(1) = 0.001157578125$$
$$\gamma_1(2) = \alpha_1(2)\beta_1(2) = 0.00455296875$$
$$\gamma_1(3) = \alpha_1(3)\beta_1(3) = 0.0072703125$$
$$\gamma_2(1) = \alpha_2(1)\beta_2(1) = 0.008353125$$
$$\gamma_2(2) = \alpha_2(2)\beta_2(2) = 0.0037228125$$
$$\gamma_2(3) = \alpha_2(3)\beta_2(3) = 0.000904921875$$
$$\gamma_3(1) = \alpha_3(1)\beta_3(1) = 0.0059090625$$
$$\gamma_3(2) = \alpha_3(2)\beta_3(2) = 0.005091796875$$
$$\gamma_3(3) = \alpha_3(3)\beta_3(3) = 0.00198$$

Within the M-step of the first iteration (r=1), the estimate $\hat{\Delta} = (\hat{a}_{ij}, \hat{b}_j(k), \hat{\pi}_j)$ is calculated based on the joint probabilities ξt(i, j) and γt(j) determined at the E-step:

$$\hat{a}_{11} = \frac{\sum_{t=2}^3 \xi_t(1,1)}{\sum_{t=2}^3\sum_{l=1}^3 \xi_t(1,l)} \approx 0.5660$$

$$\hat{a}_{12} = \frac{\sum_{t=2}^3 \xi_t(1,2)}{\sum_{t=2}^3\sum_{l=1}^3 \xi_t(1,l)} \approx 0.3134$$

$$\hat{a}_{13} = \frac{\sum_{t=2}^3 \xi_t(1,3)}{\sum_{t=2}^3\sum_{l=1}^3 \xi_t(1,l)} \approx 0.1206$$

$$\hat{a}_{21} = \frac{\sum_{t=2}^3 \xi_t(2,1)}{\sum_{t=2}^3\sum_{l=1}^3 \xi_t(2,l)} \approx 0.4785$$

$$\hat{a}_{22} = \frac{\sum_{t=2}^3 \xi_t(2,2)}{\sum_{t=2}^3\sum_{l=1}^3 \xi_t(2,l)} \approx 0.4262$$

$$\hat{a}_{23} = \frac{\sum_{t=2}^3 \xi_t(2,3)}{\sum_{t=2}^3\sum_{l=1}^3 \xi_t(2,l)} \approx 0.0953$$

$$\hat{a}_{31} = \frac{\sum_{t=2}^3 \xi_t(3,1)}{\sum_{t=2}^3\sum_{l=1}^3 \xi_t(3,l)} \approx 0.6017$$

$$\hat{a}_{32} = \frac{\sum_{t=2}^3 \xi_t(3,2)}{\sum_{t=2}^3\sum_{l=1}^3 \xi_t(3,l)} \approx 0.2822$$


$$\hat{a}_{33} = \frac{\sum_{t=2}^3 \xi_t(3,3)}{\sum_{t=2}^3\sum_{l=1}^3 \xi_t(3,l)} \approx 0.1161$$

$$\hat{b}_2(1) = \frac{\sum_{t=1, o_t=\varphi_1}^3 \gamma_t(2)}{\sum_{t=1}^3 \gamma_t(2)} = \frac{\gamma_2(2)}{\gamma_1(2)+\gamma_2(2)+\gamma_3(2)} \approx 0.2785$$

$$\hat{b}_2(2) = \frac{\sum_{t=1, o_t=\varphi_2}^3 \gamma_t(2)}{\sum_{t=1}^3 \gamma_t(2)} = \frac{\gamma_3(2)}{\gamma_1(2)+\gamma_2(2)+\gamma_3(2)} \approx 0.3809$$

$$\hat{b}_2(3) = \frac{\sum_{t=1, o_t=\varphi_3}^3 \gamma_t(2)}{\sum_{t=1}^3 \gamma_t(2)} = \frac{0}{\gamma_1(2)+\gamma_2(2)+\gamma_3(2)} = 0$$

$$\hat{b}_2(4) = \frac{\sum_{t=1, o_t=\varphi_4}^3 \gamma_t(2)}{\sum_{t=1}^3 \gamma_t(2)} = \frac{\gamma_1(2)}{\gamma_1(2)+\gamma_2(2)+\gamma_3(2)} \approx 0.3406$$

$$\hat{b}_3(1) = \frac{\gamma_2(3)}{\gamma_1(3)+\gamma_2(3)+\gamma_3(3)} \approx 0.0891$$

$$\hat{b}_3(2) = \frac{\gamma_3(3)}{\gamma_1(3)+\gamma_2(3)+\gamma_3(3)} \approx 0.1950$$

$$\hat{b}_3(3) = \frac{0}{\gamma_1(3)+\gamma_2(3)+\gamma_3(3)} = 0$$

$$\hat{b}_3(4) = \frac{\gamma_1(3)}{\gamma_1(3)+\gamma_2(3)+\gamma_3(3)} \approx 0.7159$$

$$\hat{\pi}_1 = \frac{\gamma_1(1)}{\gamma_1(1)+\gamma_1(2)+\gamma_1(3)} \approx 0.0892$$

$$\hat{\pi}_2 = \frac{\gamma_1(2)}{\gamma_1(1)+\gamma_1(2)+\gamma_1(3)} \approx 0.3507$$

$$\hat{\pi}_3 = \frac{\gamma_1(3)}{\gamma_1(1)+\gamma_1(2)+\gamma_1(3)} \approx 0.5601$$
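Before moving on, these first-iteration numbers can be checked mechanically. The short self-contained NumPy script below (a verification sketch; the A, B, and ∏ values are those of tables 1, 2, and 3) reproduces P(O|∆) ≈ 0.013, $\hat{a}_{11} \approx 0.5660$, and $\hat{\pi} \approx (0.0892, 0.3507, 0.5601)$:

    import numpy as np

    # Weather HMM of tables 1, 2, 3: states (sunny, cloudy, rainy),
    # observations (dry, dryish, damp, soggy); O = (soggy, dry, dryish).
    A = np.array([[0.50, 0.25, 0.25],
                  [0.30, 0.40, 0.30],
                  [0.25, 0.25, 0.50]])
    B = np.array([[0.60, 0.20, 0.15, 0.05],
                  [0.25, 0.25, 0.25, 0.25],
                  [0.05, 0.10, 0.35, 0.50]])
    pi = np.array([0.33, 0.33, 0.33])
    obs = np.array([3, 0, 1])                  # 0-based indices of phi_4, phi_1, phi_2

    T, n = len(obs), len(pi)
    alpha = np.zeros((T, n)); beta = np.zeros((T, n))
    alpha[0] = pi * B[:, obs[0]]
    for t in range(1, T):
        alpha[t] = (alpha[t-1] @ A) * B[:, obs[t]]
    beta[T-1] = 1.0
    for t in range(T-2, -1, -1):
        beta[t] = A @ (B[:, obs[t+1]] * beta[t+1])
    print(alpha[-1].sum())                     # P(O|Delta) ~ 0.013

    gamma = alpha * beta                       # gamma_t(j) per (7)
    xi = np.array([alpha[t-1][:, None] * A * (B[:, obs[t]] * beta[t])[None, :]
                   for t in range(1, T)])      # xi_t(i, j) per (18), t = 2..T
    A_hat = xi.sum(axis=0)
    A_hat /= A_hat.sum(axis=1, keepdims=True)
    pi_hat = gamma[0] / gamma[0].sum()
    print(A_hat[0, 0], pi_hat)                 # ~0.5660 and ~(0.0892, 0.3507, 0.5601)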

At the second iteration (r=2), the current parameter ∆ = (aij, bj(k), πj) receives its values from the estimate $\hat{\Delta} = (\hat{a}_{ij}, \hat{b}_j(k), \hat{\pi}_j)$ above. By repeating the similar calculation, it is easy to determine the HMM parameters at the second iteration. Table 11 summarizes the HMM parameters resulting from the first iteration and the second iteration of EM algorithm.

Table 11. HMM parameters resulting from the first iteration and the second iteration of EM algorithm.

1st iteration:
- $\hat{a}_{11} = 0.5660$, $\hat{a}_{12} = 0.3134$, $\hat{a}_{13} = 0.1206$; $\hat{a}_{21} = 0.4785$, $\hat{a}_{22} = 0.4262$, $\hat{a}_{23} = 0.0953$; $\hat{a}_{31} = 0.6017$, $\hat{a}_{32} = 0.2822$, $\hat{a}_{33} = 0.1161$
- $\hat{b}_1(1) = 0.5417$, $\hat{b}_1(2) = 0.3832$, $\hat{b}_1(3) = 0$, $\hat{b}_1(4) = 0.0751$
- $\hat{b}_2(1) = 0.2785$, $\hat{b}_2(2) = 0.3809$, $\hat{b}_2(3) = 0$, $\hat{b}_2(4) = 0.3406$
- $\hat{b}_3(1) = 0.0891$, $\hat{b}_3(2) = 0.1950$, $\hat{b}_3(3) = 0$, $\hat{b}_3(4) = 0.7159$
- $\hat{\pi}_1 = 0.0892$, $\hat{\pi}_2 = 0.3507$, $\hat{\pi}_3 = 0.5601$
- Terminating criterion P(O|∆) = 0.013

2nd iteration:
- $\hat{a}_{11} = 0.6053$, $\hat{a}_{12} = 0.3299$, $\hat{a}_{13} = 0.0648$; $\hat{a}_{21} = 0.5853$, $\hat{a}_{22} = 0.3781$, $\hat{a}_{23} = 0.0366$; $\hat{a}_{31} = 0.7793$, $\hat{a}_{32} = 0.1946$, $\hat{a}_{33} = 0.0261$
- $\hat{b}_1(1) = 0.5605$, $\hat{b}_1(2) = 0.4302$, $\hat{b}_1(3) = 0$, $\hat{b}_1(4) = 0.0093$
- $\hat{b}_2(1) = 0.2757$, $\hat{b}_2(2) = 0.4517$, $\hat{b}_2(3) = 0$, $\hat{b}_2(4) = 0.2726$
- $\hat{b}_3(1) = 0.0283$, $\hat{b}_3(2) = 0.0724$, $\hat{b}_3(3) = 0$, $\hat{b}_3(4) = 0.8993$
- $\hat{\pi}_1 = 0.0126$, $\hat{\pi}_2 = 0.2147$, $\hat{\pi}_3 = 0.7727$
- Terminating criterion P(O|∆) = 0.0776

As seen in table 11, the EM algorithm has not yet converged when it produces two different terminating criteria (0.013 and 0.0776) at the first iteration and the second iteration. It is necessary to run more iterations so as to gain the most optimal estimate. Within this example, the EM algorithm converges absolutely after 10 iterations, when the criterion P(O|∆) approaches the same value 1 at the 9th and 10th iterations. Table 12 shows the HMM parameter estimates along with the terminating criterion P(O|∆) at the 9th and 10th iterations of EM algorithm.

Table 12. HMM parameters along with terminating criterion after 10 iterations of EM algorithm.

9th iteration:
- $\hat{a}_{11} = 0$, $\hat{a}_{12} = 1$, $\hat{a}_{13} = 0$; $\hat{a}_{21} = 0$, $\hat{a}_{22} = 1$, $\hat{a}_{23} = 0$; $\hat{a}_{31} = 1$, $\hat{a}_{32} = 0$, $\hat{a}_{33} = 0$
- $\hat{b}_1(1) = 1$, $\hat{b}_1(2) = 0$, $\hat{b}_1(3) = 0$, $\hat{b}_1(4) = 0$; $\hat{b}_2(1) = 0$, $\hat{b}_2(2) = 1$, $\hat{b}_2(3) = 0$, $\hat{b}_2(4) = 0$; $\hat{b}_3(1) = 0$, $\hat{b}_3(2) = 0$, $\hat{b}_3(3) = 0$, $\hat{b}_3(4) = 1$
- $\hat{\pi}_1 = 0$, $\hat{\pi}_2 = 0$, $\hat{\pi}_3 = 1$
- Terminating criterion P(O|∆) = 1

10th iteration:
- $\hat{a}_{11} = 0$, $\hat{a}_{12} = 1$, $\hat{a}_{13} = 0$; $\hat{a}_{21} = 0$, $\hat{a}_{22} = 1$, $\hat{a}_{23} = 0$; $\hat{a}_{31} = 1$, $\hat{a}_{32} = 0$, $\hat{a}_{33} = 0$
- $\hat{b}_1(1) = 1$, $\hat{b}_1(2) = 0$, $\hat{b}_1(3) = 0$, $\hat{b}_1(4) = 0$; $\hat{b}_2(1) = 0$, $\hat{b}_2(2) = 1$, $\hat{b}_2(3) = 0$, $\hat{b}_2(4) = 0$; $\hat{b}_3(1) = 0$, $\hat{b}_3(2) = 0$, $\hat{b}_3(3) = 0$, $\hat{b}_3(4) = 1$
- $\hat{\pi}_1 = 0$, $\hat{\pi}_2 = 0$, $\hat{\pi}_3 = 1$
- Terminating criterion P(O|∆) = 1

As a result, the learned parameters A, B, and ∏ are shown in table 13:

Table 13. HMM parameters of weather example learned from EM algorithm.

Transition probability matrix A (rows: weather on the previous day, time point t–1; columns: weather on the current day, time point t, ordered sunny, cloudy, rainy):
- sunny: a11 = 0, a12 = 1, a13 = 0
- cloudy: a21 = 0, a22 = 1, a23 = 0
- rainy: a31 = 1, a32 = 0, a33 = 0

Initial state distribution ∏ (sunny, cloudy, rainy): π1 = 0, π2 = 0, π3 = 1

Observation probability matrix B (rows: weather; columns: humidity, ordered dry, dryish, damp, soggy):
- sunny: b11 = 1, b12 = 0, b13 = 0, b14 = 0
- cloudy: b21 = 0, b22 = 1, b23 = 0, b24 = 0
- rainy: b31 = 0, b32 = 0, b33 = 0, b34 = 1


Such learned parameters are more appropriate to the training observation sequence O = {o1=φ4=soggy, o2=φ1=dry, o3=φ2=dryish} than the original ones shown in tables 1, 2, and 3, since the terminating criterion P(O|∆) corresponding to the optimal state sequence is 1. Now the three main problems of HMM have been described; please see the excellent document "A tutorial on hidden Markov models and selected applications in speech recognition" written by the author Rabiner [3] for advanced details about HMM.

5. Conclusion

In general, there are three main problems of HMM: the evaluation problem, the uncovering problem, and the learning problem. For the evaluation problem and the uncovering problem, researchers should pay attention to the forward variable and the backward variable, because most computational operations involve them; they reflect the unique aspect of HMM. The Viterbi algorithm is very effective in solving the uncovering problem. The Baum-Welch algorithm is often used to solve the learning problem. It is easier to explain the Baum-Welch algorithm by the combination of EM algorithm and optimization theory, in which the Lagrangian function is maximized so as to find out the optimal parameters of EM algorithm, and such parameters are also the learned parameters of HMM.

Observations of the normal HMM described in this report are quantified by a discrete probability distribution, namely the observation probability matrix B. In the most general case, an observation is represented by a continuous variable and matrix B is replaced by a probability density function. At that time the normal HMM becomes the continuously observational HMM. Readers are recommended to research the continuously observational HMM, an enhanced variant of the normal HMM.

References

[1] E. Fosler-Lussier, "Markov Models and Hidden Markov Models: A Brief Tutorial," 1998.

[2] J. G. Schmolze, "An Introduction to Hidden Markov Models," 2001.

[3] L. R. Rabiner, "A tutorial on hidden Markov models and selected applications in speech recognition," Proceedings of the IEEE, vol. 77, no. 2, pp. 257-286, 1989.

[4] L. Nguyen, "Mathematical Approaches to User Modeling," Journals Consortium, 2015.

[5] S. Borman, "The Expectation Maximization Algorithm - A short tutorial," Sean Borman's Home Page, 2009.

[6] A. P. Dempster, N. M. Laird and D. B. Rubin, "Maximum Likelihood from Incomplete Data via the EM Algorithm," Journal of the Royal Statistical Society, Series B (Methodological), vol. 39, no. 1, pp. 1-38, 1977.

[7] Y.-B. Jia, "Lagrange Multipliers," 2013.

[8] S. Borman, "The Expectation Maximization Algorithm - A short tutorial," Sean Borman's Home Page, South Bend, Indiana, 2004.

[9] D. Ramage, "Hidden Markov Models Fundamentals," 2007.

[10] Wikipedia, "Karush–Kuhn–Tucker conditions," Wikimedia Foundation, 4 August 2014. [Online]. Available: http://en.wikipedia.org/wiki/Karush–Kuhn–Tucker_conditions. [Accessed 16 November 2014].

[11] S. Boyd and L. Vandenberghe, Convex Optimization, New York, NY: Cambridge University Press, 2009, p. 716.

