CS262 Lecture 4, Win06, Batzoglou
Indexing-based local alignment
(BLAST- Basic Local Alignment Search Tool)
1. SEED
Construct a dictionary of all the words in query (database) & search database (query) in linear time for word matches
2. EXTEND
Initiate fast local alignment procedures to the left and right of each word match
3. REPORT
Retain all alignments that score above a threshold and report them
[Figure: the query is indexed into words; the database is scanned for word matches]
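A minimal Python sketch of the SEED step and the linear scan above (illustrative, not from the slides; the function names are made up):

```python
from collections import defaultdict

def build_word_index(query, k):
    """SEED: dictionary of all k-letter words in the query,
    mapping each word to the positions where it occurs."""
    index = defaultdict(list)
    for i in range(len(query) - k + 1):
        index[query[i:i + k]].append(i)
    return index

def scan_database(db, index, k):
    """Scan the database in linear time; yield (db_pos, query_pos)
    for every exact word match -- these are the seeds to EXTEND."""
    for j in range(len(db) - k + 1):
        for i in index.get(db[j:j + k], ()):
            yield j, i
```

Each (j, i) pair seeds an extension as on the next slides.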
Indexing-based local alignment—Extensions
[Figure: dot-matrix of the query against the database sequence ACGAAGTAAGGTCCAGT; the diagonal run through the word match GGTC is highlighted]
Example:
k = 4
The matching word GGTC initiates an alignment
Extension to the left and right, with no gaps, until the alignment score drops more than C below the best score so far
Output:
GTAAGGTCC
GTTAGGTCC
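A sketch of the ungapped EXTEND step with this drop-off rule (a simplified stand-in for BLAST's X-drop extension; the +1/-1 match/mismatch scores are assumptions):

```python
def extend_dir(db, query, j, i, step, C, match=1, mismatch=-1):
    """Walk in one direction (step = +1 right, -1 left) from db[j]/query[i];
    track the best running score, and stop once the score falls more
    than C below that best. Returns (best extra score, letters kept)."""
    score = best = best_n = n = 0
    while 0 <= j < len(db) and 0 <= i < len(query):
        score += match if db[j] == query[i] else mismatch
        n += 1
        if score > best:
            best, best_n = score, n
        elif best - score > C:      # fell more than C below best so far
            break
        j += step
        i += step
    return best, best_n

def extend_seed(db, query, j, i, k, C):
    """Ungapped extension of a k-word match at db[j:j+k] / query[i:i+k]."""
    right, rn = extend_dir(db, query, j + k, i + k, +1, C)
    left, ln = extend_dir(db, query, j - 1, i - 1, -1, C)
    score = k + left + right        # the seed itself scores k matches
    return score, db[j - ln:j + k + rn], query[i - ln:i + k + rn]
```

Each seed (j, i) from the scan step can be fed straight in.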
Indexing-based local alignment—Extensions
[Figure: the same dot-matrix; the extension may now jump between nearby diagonals, i.e., open gaps]
Gapped extensions until threshold
• Extension with gaps continues until the score drops more than C below the best score so far
Reference: Zhang, Berman, Miller, RECOMB '98
Output:
GTAAGGTCCAGT
GTTAGGTC-AGT
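A rough sketch of a gapped extension with the same drop-off idea (much simplified relative to the Zhang, Berman & Miller method: a plain DP in which any cell more than C below the best score so far is killed, stopping when a whole row is dead; the scores are assumptions):

```python
NEG = float("-inf")

def gapped_extend(s, t, C, match=1, mismatch=-1, gap=-2):
    """Extend s against t (e.g., to the right of a seed) allowing gaps.
    Returns the best score found before the drop-off kills a whole row."""
    best = 0
    prev = [j * gap for j in range(len(t) + 1)]
    for i in range(1, len(s) + 1):
        cur = [i * gap]
        for j in range(1, len(t) + 1):
            sub = match if s[i - 1] == t[j - 1] else mismatch
            cur.append(max(prev[j - 1] + sub,   # (mis)match
                           prev[j] + gap,       # gap in t
                           cur[j - 1] + gap))   # gap in s
        best = max(best, max(cur))
        # drop-off pruning: kill cells more than C below the best so far
        prev = [v if best - v <= C else NEG for v in cur]
        if all(v == NEG for v in prev):
            break
    return best
```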
Sensitivity-Speed Tradeoff
Long words (k = 15): faster, lower sensitivity
Short words (k = 7): slower, higher sensitivity

[Figure: sensitivity vs. speed trade-off curve; Kent WJ, Genome Research 2002]
Sensitivity-Speed Tradeoff
Methods to improve sensitivity/speed
1. Using pairs of words
2. Using inexact words
3. Patterns—non-consecutive positions
……ATAACGGACGACTGATTACACTGATTCTTAC……
……GGCACGGACCAGTGACTACTCTGATTCCCAG……
……ATAACGGACGACTGATTACACTGATTCTTAC……
……GGCGCCGACGAGTGATTACACAGATTGCCAG……
[Figure: a pattern of non-consecutive match positions laid over TTTGATTACACAGAT]
Non-consecutive words—Patterns
Patterns increase the likelihood of at least one match within a long conserved region
[Figure: consecutive positions yield runs of 3, 5, and 7 common letters; non-consecutive positions yield 6 common]

On a 100-long, 70% conserved region:

                          Consecutive   Non-consecutive
Expected # hits:          1.07          0.97
Prob[at least one hit]:   0.30          0.47
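These figures can be approximated by simulation. A sketch (the slide does not say which two seeds produced its numbers; as an assumption we compare a consecutive 11-mer with PatternHunter's weight-11 spaced seed, so the printed estimates will not match the table exactly):

```python
import random

def hit_stats(pattern, region_len=100, p=0.7, trials=20_000):
    """Monte Carlo estimate of E[# hits] and P[at least one hit] for a
    seed pattern (string of 1 = must match, 0 = ignored) on a region
    where each position matches independently with probability p."""
    L = len(pattern)
    must = [i for i, c in enumerate(pattern) if c == "1"]
    total = at_least_one = 0
    for _ in range(trials):
        m = [random.random() < p for _ in range(region_len)]
        hits = sum(all(m[s + i] for i in must)
                   for s in range(region_len - L + 1))
        total += hits
        at_least_one += hits > 0
    return total / trials, at_least_one / trials

print(hit_stats("1" * 11))                  # consecutive 11-mer
print(hit_stats("111010010100110111"))      # spaced seed of weight 11
```

The spaced seed wins on P[at least one hit] even though its expected number of hits is similar or lower, because its hits overlap less and are therefore less correlated.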
Multiple patterns
• K different patterns
   Construct K distinct dictionaries, one for each pattern
   Takes K times longer to scan
   Patterns can complement one another
• Computational problem:
   Given: a model (probability distribution) for homology between two regions
   Find: the set of K patterns that maximizes Prob(at least one match)
[Figure: several complementary patterns laid over TTTGATTACACAGAT]
Buhler et al., RECOMB 2003; Sun & Buhler, RECOMB 2004
How long does it take to search the query?
Variants of BLAST
• NCBI BLAST: search the universe
  http://www.ncbi.nlm.nih.gov/BLAST/
• MEGABLAST: http://genopole.toulouse.inra.fr/blast/megablast.html
  Optimized to align very similar sequences
  Works best when k = 4i ≥ 16
  Linear gap penalty
• WU-BLAST (Wash U BLAST): http://blast.wustl.edu/
  Very good optimizations
  Good set of features & command-line arguments
• BLAT: http://genome.ucsc.edu/cgi-bin/hgBlat
  Faster, less sensitive than BLAST
  Good for aligning huge numbers of queries
• CHAOS: http://www.cs.berkeley.edu/~brudno/chaos
  Uses inexact k-mers, sensitive
• PatternHunter: http://www.bioinformaticssolutions.com/products/ph/index.php
  Uses patterns instead of k-mers
• BlastZ: http://www.psc.edu/general/software/packages/blastz/
  Uses patterns, good for finding genes
• Typhon: http://typhon.stanford.edu
  Uses multiple alignments to improve sensitivity/speed tradeoff
Example
Query: Human atoh enhancer, 179 letters [1.5 min]
Result: 57 blast hits
1. gi|7677270|gb|AF218259.1|AF218259 Homo sapiens ATOH1 enhanc...   355  1e-95
2. gi|22779500|gb|AC091158.11| Mus musculus Strain C57BL6/J ch...   264  4e-68
3. gi|7677269|gb|AF218258.1|AF218258 Mus musculus Atoh1 enhanc...   256  9e-66
4. gi|28875397|gb|AF467292.1| Gallus gallus CATH1 (CATH1) gene...    78  5e-12
5. gi|27550980|emb|AL807792.6| Zebrafish DNA sequence from clo...    54  7e-05
6. gi|22002129|gb|AC092389.4| Oryza sativa chromosome 10 BAC O...    44  0.068
7. gi|22094122|ref|NM_013676.1| Mus musculus suppressor of Ty ...    42  0.27
8. gi|13938031|gb|BC007132.1| Mus musculus, Similar to suppres...    42  0.27

gi|7677269|gb|AF218258.1|AF218258 Mus musculus Atoh1 enhancer sequence
Length = 1517
Score = 256 bits (129), Expect = 9e-66
Identities = 167/177 (94%), Gaps = 2/177 (1%)
Strand = Plus / Plus

Query: 3    tgacaatagagggtctggcagaggctcctggccgcggtgcggagcgtctggagcggagca 62
            ||||||||||||| ||||||||||||||||||| ||||||||||||||||||||||||||
Sbjct: 1144 tgacaatagaggggctggcagaggctcctggccccggtgcggagcgtctggagcggagca 1203

Query: 63   cgcgctgtcagctggtgagcgcactctcctttcaggcagctccccggggagctgtgcggc 122
            |||||||||||||||||||||||||| ||||||||| |||||||||||||||| |||||
Sbjct: 1204 cgcgctgtcagctggtgagcgcactc-gctttcaggccgctccccggggagctgagcggc 1262

Query: 123  cacatttaacaccatcatcacccctccccggcctcctcaacctcggcctcctcctcg 179
            ||||||||||||| || ||| |||||||||||||||||||| |||||||||||||||
Sbjct: 1263 cacatttaacaccgtcgtca-ccctccccggcctcctcaacatcggcctcctcctcg 1318

http://www.ncbi.nlm.nih.gov/BLAST/
The Four-Russians Algorithm (brief overview)
A (not so useful) speedup of Dynamic Programming
[Arlazarov, Dinic, Kronrod, Faradzev 1970]
Main Observation
Within a rectangle of the DP matrix, the values of D depend only on the values of A, B, C, and on the substrings xl...xl', yr…yr'

Definition: A t-block is a t × t square of the DP matrix

Idea: Divide the matrix into t-blocks; precompute all t-blocks

Speedup: O(t)
[Figure: a t-block D of the DP matrix with bordering values A, B, C, spanning xl…xl' and yr…yr']
The Four-Russians Algorithm
Main structure of the algorithm:
1. Divide the N × N DP matrix into K × K log2N-blocks that overlap by 1 column & 1 row (K = N / log2N)
2. For i = 1……K
3. For j = 1……K
4. Compute Di,j as a function of Ai,j, Bi,j, Ci,j, x[li…l’i], y[rj…r’j]
Time: O(N^2 / log^2N) times the cost of step 4
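A sketch of this block structure for edit distance (an illustration, not the full algorithm: blocks are memoized on demand with lru_cache rather than precomputed for every possible input, which is valid because border values change by at most 1 per step and the recurrence is translation-invariant; lengths divisible by t are assumed):

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def solve_block(top, left, xsub, ysub):
    """One t-block of the edit-distance DP. top/left are the offset
    sequences (each step in {-1, 0, +1}) along the block's top row and
    left column, both anchored at 0 in the shared corner; the offsets
    determine the whole block since the recurrence is shift-invariant.
    Returns the (bottom row, right column) offset sequences."""
    t = len(xsub)
    row = [0]
    for d in top:
        row.append(row[-1] + d)
    colv = [0]
    for d in left:
        colv.append(colv[-1] + d)
    rows = [row]
    for i in range(1, t + 1):
        cur = [colv[i]]
        for j in range(1, t + 1):
            cur.append(min(rows[-1][j] + 1, cur[-1] + 1,
                           rows[-1][j - 1] + (xsub[i - 1] != ysub[j - 1])))
        rows.append(cur)
    bottom = tuple(rows[-1][j] - rows[-1][j - 1] for j in range(1, t + 1))
    right = tuple(rows[i][-1] - rows[i - 1][-1] for i in range(1, t + 1))
    return bottom, right

def edit_distance_blocks(x, y, t):
    """Edit distance computed t-block by t-block; neighboring blocks
    overlap by one row/column, so only offsets cross block borders."""
    n, m = len(x), len(y)
    assert n % t == 0 and m % t == 0
    border = list(range(m + 1))                  # D(0, j) = j
    for bi in range(n // t):
        xsub = x[bi * t:(bi + 1) * t]
        left = (1,) * t                          # D(i, 0) = i: offsets +1
        new = [border[0] + t]
        for bj in range(m // t):
            seg = border[bj * t:(bj + 1) * t + 1]
            top = tuple(seg[q + 1] - seg[q] for q in range(t))
            bottom, right = solve_block(top, left, xsub,
                                        y[bj * t:(bj + 1) * t])
            for d in bottom:
                new.append(new[-1] + d)
            left = right
        border = new
    return border[-1]
```

For example, edit_distance_blocks("ACGTACGT", "ACGAACGT", 4) returns 1.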
Outline for our next topic
• Hidden Markov models – the theory
• Probabilistic interpretation of alignments using HMMs
Later in the course:
• Applications of HMMs to biological sequence modeling and discovery of features such as genes
Example: The Dishonest Casino
A casino has two dice:
• Fair die: P(1) = P(2) = P(3) = P(4) = P(5) = P(6) = 1/6
• Loaded die: P(1) = P(2) = P(3) = P(4) = P(5) = 1/10, P(6) = 1/2

Casino player switches back & forth between fair and loaded die once every 20 turns

Game:
1. You bet $1
2. You roll (always with a fair die)
3. Casino player rolls (maybe with fair die, maybe with loaded die)
4. Highest number wins $2
Question # 1 – Evaluation
GIVEN
A sequence of rolls by the casino player
1245526462146146136136661664661636616366163616515615115146123562344
QUESTION
How likely is this sequence, given our model of how the casino works?
This is the EVALUATION problem in HMMs
Prob = 1.3 × 10^-35
Question # 2 – Decoding
GIVEN
A sequence of rolls by the casino player
1245526462146146136136661664661636616366163616515615115146123562344
QUESTION
What portion of the sequence was generated with the fair die, and what portion with the loaded die?
This is the DECODING question in HMMs
FAIR LOADED FAIR
Question # 3 – Learning
GIVEN
A sequence of rolls by the casino player
1245526462146146136136661664661636616366163616515615115146123562344
QUESTION
How “loaded” is the loaded die? How “fair” is the fair die? How often does the casino player change from fair to loaded, and back?
This is the LEARNING question in HMMs
Prob(6) = 64%
The dishonest casino model
[Figure: two-state HMM; FAIR and LOADED states with transition probability 0.05 between them and 0.95 self-transitions]

P(1|F) = 1/6   P(2|F) = 1/6   P(3|F) = 1/6   P(4|F) = 1/6   P(5|F) = 1/6   P(6|F) = 1/6
P(1|L) = 1/10  P(2|L) = 1/10  P(3|L) = 1/10  P(4|L) = 1/10  P(5|L) = 1/10  P(6|L) = 1/2
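The same model written out as plain Python data (a sketch; the ½ start probabilities anticipate the assumption made in the examples that follow):

```python
STATES = ("F", "L")                          # Fair, Loaded
INIT = {"F": 0.5, "L": 0.5}                  # a_0i, start probabilities
TRANS = {"F": {"F": 0.95, "L": 0.05},        # a_ij, transition probabilities
         "L": {"F": 0.05, "L": 0.95}}
EMIT = {"F": {r: 1 / 6 for r in "123456"},   # e_i(b), emission probabilities
        "L": {**{r: 1 / 10 for r in "12345"}, "6": 1 / 2}}
```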
Definition of a hidden Markov model
Definition: A hidden Markov model (HMM) consists of:
• Alphabet Σ = { b1, b2, …, bM }
• Set of states Q = { 1, …, K }
• Transition probabilities between any two states:
  aij = transition probability from state i to state j
  ai1 + … + aiK = 1, for all states i = 1…K
• Start probabilities a0i:
  a01 + … + a0K = 1
• Emission probabilities within each state:
  ek(b) = P( xi = b | πi = k )
  ek(b1) + … + ek(bM) = 1, for all states k = 1…K
An HMM is memoryless

At each time step t, the only thing that affects future states is the current state πt:

P(πt+1 = k | "whatever happened so far") =
P(πt+1 = k | π1, π2, …, πt, x1, x2, …, xt) =
P(πt+1 = k | πt)
A parse of a sequence
Given a sequence x = x1……xN,
a parse of x is a sequence of states π = π1, ……, πN
[Figure: trellis of a parse; at each of the N positions one of the K states 1…K is chosen and emits x1, x2, x3, …, xN]
Likelihood of a parse
Given a sequence x = x1……xN
and a parse π = π1, ……, πN,
how likely is the parse (given our HMM)?

P(x, π) = P(x1, …, xN, π1, ……, πN) =
P(xN, πN | πN-1) P(xN-1, πN-1 | πN-2)……P(x2, π2 | π1) P(x1, π1) =
P(xN | πN) P(πN | πN-1) ……P(x2 | π2) P(π2 | π1) P(x1 | π1) P(π1) =
a0π1 aπ1π2……aπN-1πN eπ1(x1)……eπN(xN)
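The last line translates directly into code. A sketch against the casino model defined above (joint_prob is an illustrative name):

```python
def joint_prob(x, pi):
    """P(x, pi) = a_{0 pi1} e_{pi1}(x1) * product over i of
    a_{pi(i-1) pii} e_{pii}(xi), for rolls x and parse pi."""
    p = INIT[pi[0]] * EMIT[pi[0]][x[0]]
    for i in range(1, len(x)):
        p *= TRANS[pi[i - 1]][pi[i]] * EMIT[pi[i]][x[i]]
    return p
```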
A compact way to write a0π1 aπ1π2……aπN-1πN eπ1(x1)……eπN(xN):

Number all parameters aij and ei(b); n params total
Example: a0Fair: θ1; a0Loaded: θ2; …; eLoaded(6): θ18

Then, count in x and π the # of times each parameter j = 1, …, n occurs:
F(j, x, π) = # of times parameter j occurs in (x, π)
(call F(·,·,·) the feature counts). Then,

P(x, π) = ∏j=1…n θj^F(j, x, π) =
        = exp[ Σj=1…n log(θj) F(j, x, π) ]
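And the feature-count form, as a check on the identity (a sketch; keying parameters by tuples rather than numbering them 1…n is an implementation convenience):

```python
import math
from collections import Counter

def joint_prob_features(x, pi):
    """P(x, pi) = exp[ sum_j log(theta_j) F(j, x, pi) ], where F counts
    how often each parameter is used along the parse."""
    F = Counter([("init", pi[0]), ("emit", pi[0], x[0])])
    for i in range(1, len(x)):
        F[("trans", pi[i - 1], pi[i])] += 1
        F[("emit", pi[i], x[i])] += 1
    theta = {("init", s): INIT[s] for s in STATES}
    theta.update({("trans", s, u): TRANS[s][u] for s in STATES for u in STATES})
    theta.update({("emit", s, b): EMIT[s][b] for s in STATES for b in "123456"})
    return math.exp(sum(n * math.log(theta[j]) for j, n in F.items()))
```

For any (x, pi), joint_prob_features agrees with joint_prob up to floating-point error.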
Example: the dishonest casino
Let the sequence of rolls be:
x = 1, 2, 1, 5, 6, 2, 1, 5, 2, 4
Then, what is the likelihood of
π = Fair, Fair, Fair, Fair, Fair, Fair, Fair, Fair, Fair, Fair?
(say initial probs a0Fair = ½, a0Loaded = ½)
½ × P(1 | Fair) P(Fair | Fair) P(2 | Fair) P(Fair | Fair) … P(4 | Fair) =
½ × (1/6)^10 × (0.95)^9 = 0.00000000521158647211 ≈ 0.5 × 10^-9
Example: the dishonest casino
So, the likelihood that the die is fair all the way through this run
is just 0.521 × 10^-9

OK, but what is the likelihood of
π = Loaded, Loaded, Loaded, Loaded, Loaded, Loaded, Loaded, Loaded, Loaded, Loaded?

½ × P(1 | Loaded) P(Loaded | Loaded) … P(4 | Loaded) =
½ × (1/10)^9 × (1/2)^1 × (0.95)^9 = 0.00000000015756235243 ≈ 0.16 × 10^-9

Therefore, it is somewhat more likely that all the rolls are done with the fair die than that they are all done with the loaded die
Example: the dishonest casino
Let the sequence of rolls be:
x = 1, 6, 6, 5, 6, 2, 6, 6, 3, 6
Now, what is the likelihood of π = F, F, …, F?

½ × (1/6)^10 × (0.95)^9 = 0.5 × 10^-9, same as before

What is the likelihood of
π = L, L, …, L?

½ × (1/10)^4 × (1/2)^6 × (0.95)^9 = 0.00000049238235134735 ≈ 0.5 × 10^-7

So, it is 100 times more likely that the die is loaded
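All three numbers can be checked with the joint_prob sketch from the likelihood slide:

```python
x1 = "1215621524"   # rolls 1,2,1,5,6,2,1,5,2,4
x2 = "1665626636"   # rolls 1,6,6,5,6,2,6,6,3,6
print(joint_prob(x1, "F" * 10))   # ~ 5.21e-09
print(joint_prob(x1, "L" * 10))   # ~ 1.58e-10
print(joint_prob(x2, "L" * 10))   # ~ 4.92e-07
```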
The three main questions on HMMs
1. Evaluation
GIVEN an HMM M and a sequence x,
FIND Prob[ x | M ]
2. Decoding
GIVEN an HMM M and a sequence x,
FIND the sequence π of states that maximizes P[ x, π | M ]
3. Learning
GIVEN an HMM M, with unspecified transition/emission probabilities, and a sequence x,
FIND parameters θ = (ei(·), aij) that maximize P[ x | θ ]
Let’s not be confused by notation
P[ x | M ]: the probability that sequence x was generated by the model

The model is: architecture (# states, etc.)
+ parameters θ = (aij, ei(·))

So, P[ x | M ] is the same as P[ x | θ ] and P[ x ], when the architecture and the parameters, respectively, are implied

Similarly, P[ x, π | M ], P[ x, π | θ ] and P[ x, π ] are the same when the architecture and the parameters are implied

In the LEARNING problem we always write P[ x | θ ] to emphasize that we are seeking the θ* that maximizes P[ x | θ ]
Decoding
GIVEN x = x1x2……xN,
we want to find π = π1, ……, πN such that P[ x, π ] is maximized:

π* = argmaxπ P[ x, π ]

We can use dynamic programming!

Let Vk(i) = max{π1…πi-1} P[ x1…xi-1, π1, …, πi-1, xi, πi = k ]
          = probability of the most likely sequence of states ending at state πi = k
Decoding – main idea
Given that, for all states k and for a fixed position i,

Vk(i) = max{π1…πi-1} P[ x1…xi-1, π1, …, πi-1, xi, πi = k ],

what is Vl(i+1)?

From the definition,

Vl(i+1) = max{π1…πi} P[ x1…xi, π1, …, πi, xi+1, πi+1 = l ]
        = max{π1…πi} P(xi+1, πi+1 = l | x1…xi, π1,…, πi) P[ x1…xi, π1,…, πi ]
        = max{π1…πi} P(xi+1, πi+1 = l | πi) P[ x1…xi-1, π1, …, πi-1, xi, πi ]
        = maxk [ P(xi+1, πi+1 = l | πi = k) max{π1…πi-1} P[ x1…xi-1, π1,…, πi-1, xi, πi = k ] ]
        = el(xi+1) maxk akl Vk(i)
The Viterbi Algorithm
Input: x = x1……xN

Initialization:
  V0(0) = 1 (0 is the imaginary first position)
  Vk(0) = 0, for all k > 0

Iteration:
  Vj(i) = ej(xi) × maxk akj Vk(i-1)
  Ptrj(i) = argmaxk akj Vk(i-1)

Termination:
  P(x, π*) = maxk Vk(N)

Traceback:
  πN* = argmaxk Vk(N)
  πi-1* = Ptrπi(i)
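A direct transcription for the casino model (a sketch: position 0 is folded into the initialization, and raw probabilities are used, so the underflow fix two slides ahead still applies):

```python
def viterbi(x):
    """Most likely parse of x under the casino model.
    Returns (P(x, pi*), pi*)."""
    V = [{k: INIT[k] * EMIT[k][x[0]] for k in STATES}]   # initialization
    ptr = []
    for i in range(1, len(x)):                           # iteration
        col, back = {}, {}
        for j in STATES:
            best_k = max(STATES, key=lambda k: V[-1][k] * TRANS[k][j])
            col[j] = EMIT[j][x[i]] * V[-1][best_k] * TRANS[best_k][j]
            back[j] = best_k
        V.append(col)
        ptr.append(back)
    state = max(STATES, key=lambda k: V[-1][k])          # termination
    path = [state]
    for back in reversed(ptr):                           # traceback
        state = back[state]
        path.append(state)
    return V[-1][path[0]], "".join(reversed(path))
```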
The Viterbi Algorithm
Similar to “aligning” a set of states to a sequence
Time:
O(K2N)
Space:
O(KN)
[Figure: the K × N dynamic programming table holding the values Vj(i); states 1…K index the rows, x1 x2 x3 ……… xN the columns]
Viterbi Algorithm – a practical detail
Underflows are a significant problem
P[ x1,…, xi, π1, …, πi ] = a0π1 aπ1π2……aπi-1πi eπ1(x1)……eπi(xi)
These numbers become extremely small – underflow
Solution: Take the logs of all values
Vl(i) = log el(xi) + maxk [ Vk(i-1) + log akl ]
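The same sketch in log space; only the arithmetic changes, and long sequences no longer underflow to 0.0:

```python
import math

def viterbi_log(x):
    """viterbi() with all values replaced by their logs:
    V_l(i) = log e_l(x_i) + max_k [ V_k(i-1) + log a_kl ].
    Returns (log P(x, pi*), pi*)."""
    V = [{k: math.log(INIT[k] * EMIT[k][x[0]]) for k in STATES}]
    ptr = []
    for i in range(1, len(x)):
        col, back = {}, {}
        for j in STATES:
            best_k = max(STATES, key=lambda k: V[-1][k] + math.log(TRANS[k][j]))
            col[j] = math.log(EMIT[j][x[i]]) + V[-1][best_k] + math.log(TRANS[best_k][j])
            back[j] = best_k
        V.append(col)
        ptr.append(back)
    state = max(STATES, key=lambda k: V[-1][k])
    path = [state]
    for back in reversed(ptr):
        state = back[state]
        path.append(state)
    return V[-1][path[0]], "".join(reversed(path))
```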
Example
Let x be a long sequence with a portion of ~ 1/6 6’s, followed by a portion of ~ ½ 6’s…
x = 123456123456…12345 6626364656…1626364656
Then, it is not hard to show that the optimal parse is (exercise):
FFF…………………...F LLL………………………...L
Each set of 6 characters "123456" parsed as F contributes 0.95^6 × (1/6)^6 = 1.6 × 10^-5;
parsed as L, it contributes 0.95^6 × (1/2)^1 × (1/10)^5 = 0.4 × 10^-5

Each "162636" parsed as F contributes 0.95^6 × (1/6)^6 = 1.6 × 10^-5;
parsed as L, it contributes 0.95^6 × (1/2)^3 × (1/10)^3 = 9.0 × 10^-5
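This parse can be confirmed with the viterbi_log sketch above (illustrative lengths, since the slide's sequence is abbreviated):

```python
x = "123456" * 8 + "162636465616263646561626364656"
logp, path = viterbi_log(x)
print(path)   # a run of F's for the first portion, then a run of L's
```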