1

Prof. Yechiam Yemini (YY)

Computer Science Department, Columbia University

Chapter 6: Regulatory Networks

6.3 Boolean Network Models

2

Overview

Boolean network models
Sample applications
Kauffman's theory of evolution
Learning (reverse engineering) Boolean nets


3

Intro To Boolean Networks

4

Example: Thresholding Gene Expression

Boolean model: discretize expression levels into on/off values.

[Figure: expression time courses of gene 1, gene 2, and gene 3 over 0-60 min, thresholded at τ1, τ2, and τ3 into on/off (Boolean) profiles]


5

Boolean Network Model

A Boolean network model:
Nodes represent transcription factors
Edges represent regulatory input
Boolean gates (input functions) represent gene expression

fA(A,B,C) = A OR C
fB(A,B,C) = A AND C
fC(A,B,C) = (NOT A) OR B

[Figure: three-gene network; Gene A is driven by A OR C, Gene B by A AND C, Gene C by (NOT A) OR B, with TFA, TFB, TFC as the corresponding transcription factors]

6

Dynamics

Network state: X = (A, B, C, …) is a Boolean vector.

State evolution: X(t+1) = f(X(t)) = (fA(X(t)), fB(X(t)), …)
E.g., X(t+1) = (A OR C, A AND C, (NOT A) OR B), so (0,1,1) => (1,0,1).

This is discrete-time synchronous dynamics: state transitions occur through concurrent gate firings (a simulation sketch follows below).

State-transition table, X(t) → X(t+1):
000 → 001
001 → 101
010 → 001
011 → 101
100 → 100
101 → 110
110 → 101
111 → 111

[Figure: state-space dynamics for fA = A OR C, fB = A AND C, fC = (NOT A) OR B, with attractors and a cycle highlighted]
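To make the synchronous dynamics concrete, here is a minimal Python sketch (mine, not from the slides) that encodes the three example gates, prints the full transition table, and follows a trajectory until it settles; the names `step` and `trajectory` are illustrative.

```python
from itertools import product

def f(state):
    # The example network: X = (A, B, C)
    A, B, C = state
    return (int(A or C),          # fA(A,B,C) = A OR C
            int(A and C),         # fB(A,B,C) = A AND C
            int((not A) or B))    # fC(A,B,C) = (NOT A) OR B

def step(state):
    # Synchronous update: all gates fire concurrently.
    return f(state)

# Full state-transition table X(t) -> X(t+1)
for x in product((0, 1), repeat=3):
    print("".join(map(str, x)), "->", "".join(map(str, step(x))))

def trajectory(x0, steps=8):
    # Iterate the map; with only 2^3 states an attractor is reached quickly.
    xs = [x0]
    for _ in range(steps):
        xs.append(step(xs[-1]))
    return xs

# (0,1,1) -> (1,0,1) -> (1,1,0) -> (1,0,1) -> ...: settles into the 2-cycle {101, 110}
print(trajectory((0, 1, 1)))
```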


7

Noisy (Stochastic) Dynamics

If gene gates "fire" randomly:
The network becomes asynchronous
The dynamics landscape changes (a sketch follows below)

Possible transitions, X(t) → X(t+δ):
000 → 001
001 → 101
010 → 001, 000, 011
011 → 101, 111, 001
100 → 100
101 → 110, 111, 100
110 → 101, 111, 100
111 → 111

[Figure: asynchronous state-transition diagram for the same functions (fA = A OR C, fB = A AND C, fC = (NOT A) OR B), with edges labeled by the gene that fires and a stable attractor highlighted]
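A sketch of one common asynchronous scheme (my own construction): at each step a single randomly chosen gate fires. The successor sets printed here cover single-gate firings only, whereas the table above also lists the fully synchronous successor.

```python
import random
from itertools import product

def f(state):
    A, B, C = state
    return (int(A or C), int(A and C), int((not A) or B))

def async_successors(state):
    # States reachable when exactly one gate fires.
    target = f(state)
    succ = set()
    for i in range(3):
        nxt = list(state)
        nxt[i] = target[i]
        succ.add(tuple(nxt))
    return succ

def async_step(state, rng=random):
    # One randomly chosen gate fires (noisy / asynchronous dynamics).
    i = rng.randrange(3)
    nxt = list(state)
    nxt[i] = f(state)[i]
    return tuple(nxt)

for x in product((0, 1), repeat=3):
    print("".join(map(str, x)), "->",
          sorted("".join(map(str, s)) for s in async_successors(x)))
```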

8

Example: Boolean Repressilator

The repressilator has three repressors in a loop:

fA(A,B,C) = NOT C
fB(A,B,C) = NOT A
fC(A,B,C) = NOT B

[Figure: synchronous state space of the Boolean repressilator; 000 and 111 alternate in a 2-cycle, while the remaining six states form the stable cycle attractor 001 → 011 → 010 → 110 → 100 → 101 → 001]


9

More Generally

Boolean network:
A digraph G = <V,E>; nodes = genes, edges = regulation.
For each node, assign a Boolean function over its ingress neighbors.
Attractors & cycles describe the dynamics of expression.

Learning (reverse engineering; identification):
Extract a Boolean network model from expression levels.

[Figure: thresholded expression profiles yield a time-state sequence (001, 101, 110, 100 at times 1-4), from which the state-transition map and the unknown functions fA, fB, fC are to be learned]

10

Example: Regulation of Drosophila Patterns

Fundamental question: how do genes regulate spatial patterns?

R. Albert & H. G. Othmer, Journal of Theoretical Biology 223 (2003)


11

Segmentation Is Regulated By A Cascade

Genes are activated in a precise temporal order.
Regulatory interactions coordinate development functions.

12

Segmentation Is Regulated By A Cascade

Pair-rule genes initiate stripes: eve, ftz, …
Segment-polarity genes control anterior/posterior structure: engrailed, wingless

eve stripes:
giant (gt) & krüppel (kr) control stripe 2
knirps (kni) & hunchback (hb) control stripes 3-7


13

A Boolean Network Model

R. Albert & H. G. Othmer, Journal of Theoretical Biology 223 (2003)

14

Coordinated Regulation


15

The Steady State (Attractor)

16

Compute Attractor Expression


17

Mutations (Perturbations)

18

Kauffman’s Model


19

Kauffman's Model [1960s, 1993]

Study Boolean networks to describe evolution.
BN: a graph of "genes", each with a random Boolean function.
N = number of nodes; k = connectivity.
The BN traverses trajectories over the hypercube {0,1}^N.
It converges to a best-fit response to random inputs.

20

Evolution of Boolean Networks

Nature evolves an ensemble of networks:
Mutations change connectivity / gene transition functions.
Genes select best-fitness transition functions.

What happens if k is large (e.g., k = N-1)?
X(t+1) is uncorrelated with X(t).
The number of attractors is very small; cycles are huge, with period of roughly 2^(0.5N).
Most genes would be oscillating.
The network is very sensitive to small perturbations.

Need to keep k small:
k = 1 is too small; genes do not interact.
k = 2 gives a large number of attractors, ~N^0.5, with average cycle length ~N^0.5.

[Figure: an ensemble of N = 8, k = 2 networks]
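To make the N-k ensemble concrete, here is a small sketch (my own, not from the slides) that draws a random Boolean network with N nodes of in-degree k and counts its attractors by exhaustive simulation; for N = 8 there are only 2^8 states, so this is cheap.

```python
import random
from itertools import product

def random_nk_network(N, k, rng):
    # Each node gets k random input nodes and a random Boolean function (a truth table of size 2^k).
    inputs = [rng.sample(range(N), k) for _ in range(N)]
    tables = [[rng.randint(0, 1) for _ in range(2 ** k)] for _ in range(N)]
    return inputs, tables

def step(state, inputs, tables):
    out = []
    for i in range(len(state)):
        idx = 0
        for j in inputs[i]:              # pack the node's k input bits into a table index
            idx = (idx << 1) | state[j]
        out.append(tables[i][idx])
    return tuple(out)

def attractors(N, inputs, tables):
    # Follow every state to the cycle it falls into; collect the distinct cycles.
    found = set()
    for s in product((0, 1), repeat=N):
        seen = {}
        while s not in seen:
            seen[s] = len(seen)
            s = step(s, inputs, tables)
        first = seen[s]
        found.add(frozenset(x for x, t in seen.items() if t >= first))
    return found

rng = random.Random(0)
inputs, tables = random_nk_network(N=8, k=2, rng=rng)
cycles = attractors(8, inputs, tables)
print(len(cycles), "attractors; cycle lengths:", sorted(len(c) for c in cycles))
```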


21

Learning Boolean Nets ("Reverse Engineering", "Identification")

22

The Challenge

Discretization: given an expression-profile vector X(t):
Set expression thresholds τ (how?)
Extract a time-state map S(t); compute a state-transition map M(x)
(a discretization sketch follows below)

Learning:
Given: the state-transition map M
Compute: a Boolean vector function f such that M(x) = f(x)

[Figure: thresholded expression profiles (thresholds τ1, τ2, τ3 over 0-60 min) yield the time-state sequence S(t) = 001, 101, 110, 100 at t = 1-4 and a state-transition map M(x); combining M(x) with the graph G yields the functions fA, fB, fC]
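A minimal sketch of the discretization step (my own, with made-up numbers and thresholds): threshold each gene's time series at τ to obtain the time-state map S(t), then collect consecutive states into a partial transition map M.

```python
# Toy expression profiles: one made-up value per time point for each gene.
profiles = {
    "A": [0.1, 0.7, 0.8, 0.9],
    "B": [0.2, 0.1, 0.7, 0.1],
    "C": [0.9, 0.8, 0.2, 0.3],
}
tau = {"A": 0.5, "B": 0.5, "C": 0.5}   # per-gene thresholds; how to choose them is the open question
genes = sorted(profiles)
T = len(profiles["A"])

# Time-state map S(t): the Boolean state vector at each time point.
S = [tuple(int(profiles[g][t] > tau[g]) for g in genes) for t in range(T)]

# Partial state-transition map M: every observed consecutive state pair.
M = {}
for t in range(T - 1):
    M.setdefault(S[t], set()).add(S[t + 1])

print("S(t):", ["".join(map(str, s)) for s in S])
for x, ys in sorted(M.items()):
    print("".join(map(str, x)), "->", ", ".join("".join(map(str, y)) for y in ys))
```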


23

Akutsu Algorithm (1999)

Brute-force search for f: fix k and consider networks of max in-degree k.
For each gene i and for each subset of k ingress genes, find all functions fi that are compatible with this ingress set over all {S(r)},
i.e., S'i(r) = fi(S'(r-1)), where S' is the restriction of S to the ingress set.
For fixed k: O(k · 2^(2^k) · n^(k+1) · m); if k is not fixed, learning is NP-complete.

Notes: works for small k; does not handle noise; later improvements handle noise. (A brute-force sketch follows below.)

[Figure: thresholded expression profiles and the resulting time-state sequence S(t) = 001, 101, 110, 100 at t = 1-4]
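A sketch of the brute-force core of the Akutsu approach, under my own naming: for each target gene and each subset of k candidate regulators, enumerate all 2^(2^k) Boolean functions on those inputs and keep those consistent with every observed transition. The example map is the 3-gene table used in the following slides.

```python
from itertools import combinations, product

def restrict(state, subset):
    # Pack the bits of `state` restricted to `subset` into a truth-table index.
    idx = 0
    for j in subset:
        idx = (idx << 1) | state[j]
    return idx

def consistent_functions(transitions, target, k):
    """transitions: list of (input_state, output_state) tuples over n genes.
    Returns {regulator_subset: [truth tables consistent with every transition]}."""
    n = len(transitions[0][0])
    results = {}
    for subset in combinations(range(n), k):
        ok = [table for table in product((0, 1), repeat=2 ** k)
              if all(table[restrict(x, subset)] == y[target] for x, y in transitions)]
        if ok:
            results[subset] = ok
    return results

# The 3-gene transition map used in the following slides (input -> output).
transitions = [((0,0,0),(0,0,1)), ((0,0,1),(1,0,1)), ((0,1,0),(0,0,1)), ((0,1,1),(1,0,1)),
               ((1,0,0),(1,0,0)), ((1,0,1),(1,1,0)), ((1,1,0),(1,0,0)), ((1,1,1),(1,1,0))]
print(consistent_functions(transitions, target=2, k=1))   # gene 3 is determined by gene 1 alone
```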

24

Suppose the Network Graph Is Known

Given M and G, computing f is simple:
The truth table for fi(X) is obtained by projecting M; the network graph G guides the projections (see the sketch below).

Full transition map M (input state X1 X2 X3 → output state f1 f2 f3):
000 → 001
001 → 101
010 → 001
011 → 101
100 → 100
101 → 110
110 → 100
111 → 110

Projecting onto the regulators of gene 1 (X1 and X3) gives the truth table:
X1 X3 | f1
0  0  | 0
0  1  | 1
1  0  | 1
1  1  | 1
so f1(X) = X1 OR X3.

[Figure: the graph G over nodes 1, 2, 3, combined with M, yields f]
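A short sketch (mine) of the projection step: with the graph G known, each fi's truth table is read off M by restricting every input state to gene i's regulators.

```python
def project_truth_table(transitions, target, regulators):
    """Project the transition map onto the regulators of `target`.
    Returns {restricted_input_tuple: output_bit}; raises on inconsistency."""
    table = {}
    for x, y in transitions:
        key = tuple(x[j] for j in regulators)
        if key in table and table[key] != y[target]:
            raise ValueError("map inconsistent with the given graph")
        table[key] = y[target]
    return table

# Same 3-gene example; G says gene 1 (index 0) and gene 3 (index 2) regulate gene 1.
transitions = [((0,0,0),(0,0,1)), ((0,0,1),(1,0,1)), ((0,1,0),(0,0,1)), ((0,1,1),(1,0,1)),
               ((1,0,0),(1,0,0)), ((1,0,1),(1,1,0)), ((1,1,0),(1,0,0)), ((1,1,1),(1,1,0))]
print(project_truth_table(transitions, target=0, regulators=(0, 2)))
# {(0,0): 0, (0,1): 1, (1,0): 1, (1,1): 1}  ->  f1 = X1 OR X3
```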


25

How Do We Find the Network Graph?

An intuitive approach:
f(X1,X2,X3) depends on X1 iff f(1,X2,X3) ≠ f(0,X2,X3) for some (X2,X3).
We call such values <X1,X2,X3,f> a "dependency".

[Table M shown three times with highlighted row pairs: f1 depends on X1; f1 depends on X3; f1 is independent of X2]

26

An Intuitive Algorithm

Repeat for all Xi and fk:
Scan M to find a dependency of fk on Xi; if found, add an Xi => fk edge to G.
Else (no dependency found), fk is independent of Xi.
(A sketch of this scan follows below.)

[Table M shown repeatedly with highlighted row pairs: f1 depends on X1; f1 depends on X3; f2 depends on X1; f2 depends on X3; f3 depends on X1; these dependencies define the edges of the graph G over nodes 1, 2, 3]
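A sketch of the dependency scan (my own naming): fk depends on Xi exactly when M contains two input states that differ only in bit Xi yet give different values of fk.

```python
def depends_on(transitions, target, i):
    # True if some pair of inputs differing only in bit i gives different outputs for `target`.
    out = {x: y[target] for x, y in transitions}
    for x, v in out.items():
        flipped = tuple(b ^ 1 if j == i else b for j, b in enumerate(x))
        if flipped in out and out[flipped] != v:
            return True
    return False

def infer_graph(transitions):
    n = len(transitions[0][0])
    return {k: [i for i in range(n) if depends_on(transitions, k, i)] for k in range(n)}

transitions = [((0,0,0),(0,0,1)), ((0,0,1),(1,0,1)), ((0,1,0),(0,0,1)), ((0,1,1),(1,0,1)),
               ((1,0,0),(1,0,0)), ((1,0,1),(1,1,0)), ((1,1,0),(1,0,0)), ((1,1,1),(1,1,0))]
print(infer_graph(transitions))   # {0: [0, 2], 1: [0, 2], 2: [0]}
```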


27

REVEAL (Liang et al., 1998)

Compute the network graph from a mutual-information measure.

Base theory:
Let <X,Y> be an <input, output> stream.
Consider H(Y), the entropy of Y, and M(X,Y), the mutual information of X and Y.
If M(Y,X) = H(Y), then X determines Y uniquely.

H(X) = -∑ pi log(pi), where pi is the probability that a random element of data stream X equals i.
M(X,Y) = H(X) + H(Y) - H(X,Y)
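Small helper functions (my own sketch) for the quantities REVEAL uses: the empirical Shannon entropy of a stream and the mutual information M(X,Y) = H(X) + H(Y) - H(X,Y) of two aligned streams; log is base 2, matching the numbers on the following slides.

```python
from collections import Counter
from math import log2

def entropy(stream):
    # H = -sum p_i log2 p_i over the empirical distribution of the stream.
    counts = Counter(stream)
    n = len(stream)
    return -sum((c / n) * log2(c / n) for c in counts.values())

def mutual_information(xs, ys):
    # M(X,Y) = H(X) + H(Y) - H(X,Y), with (x, y) pairs aligned by position.
    return entropy(xs) + entropy(ys) - entropy(list(zip(xs, ys)))

# Example matching the lecture's numbers: Ai takes value 0 in 2 of 8 samples.
Ai = [1, 1, 0, 1, 1, 1, 1, 0]
print(round(entropy(Ai), 2))   # 0.81
```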

28

REVEAL Algorithm

Step 1: compute the state-transition <input, output> table.

Input stream (Ai-1 Bi-1 Ci-1) → Output stream (Ai Bi Ci)
000 → 001
001 → 101
010 → 001
011 → 101
100 → 100
101 → 110
110 → 100
111 → 110


29

Step 2a: Compute Entropies

[Table: the <input, output> stream from Step 1, with value columns Ai-1, Bi-1, Ci-1 (input) and Ai, Bi, Ci (output)]

P(Ai=0) = 2/8 = 0.25; P(Ai=1) = 6/8 = 0.75

H(Ai) = -((0.25)log(0.25) + (0.75)log(0.75)) = 0.81

30

Step 2a: Compute Entropies

Note: lim(x→0+) x^x = 1, so (0)log(0) is taken to be 0.
H(Ai) = -((0.25)log(0.25) + (0.75)log(0.75)) = 0.81
H(Bi) = -((0.75)log(0.75) + (0.25)log(0.25)) = 0.81
H(Ci) = -((0.5)log(0.5) + (0.5)log(0.5)) = 1
H(Ai-1) = H(Bi-1) = H(Ci-1) = -((0.5)log(0.5) + (0.5)log(0.5)) = 1
H(Ai-1, Ci-1) = -((0.25)log(0.25) + (0.25)log(0.25) + (0.25)log(0.25) + (0.25)log(0.25)) = 2
H(Ci, Ai-1) = -((0.5)log(0.5) + (0.5)log(0.5)) = 1
H(Ai, Ai-1, Ci-1) = -((0.25)log(0.25) + (0.25)log(0.25) + (0.25)log(0.25) + (0.25)log(0.25)) = 2
H(Bi, Ai-1, Ci-1) = -((0.25)log(0.25) + (0.25)log(0.25) + (0.25)log(0.25) + (0.25)log(0.25)) = 2
…


31

Step 2b: Compute the Network

First compute mutual information; use it to determine the network graph (see the sketch below).

(I) M(Ai, [Ai-1, Ci-1]) = H(Ai) + H(Ai-1, Ci-1) - H(Ai, Ai-1, Ci-1) = 0.81 + 2 - 2 = 0.81 = H(Ai); therefore Ai-1 and Ci-1 determine Ai.

(II) M(Bi, [Ai-1, Ci-1]) = H(Bi) + H(Ai-1, Ci-1) - H(Bi, Ai-1, Ci-1) = 0.81 + 2 - 2 = 0.81 = H(Bi); therefore Ai-1 and Ci-1 determine Bi.

(III) M(Ci, Ai-1) = H(Ci) + H(Ai-1) - H(Ci, Ai-1) = 1 + 1 - 1 = 1 = H(Ci); therefore Ai-1 determines Ci.

[Figure: the inferred regulatory graph: A and C regulate A; A and C regulate B; A regulates C]
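Putting the pieces together, a sketch (my own) of REVEAL's subset search: for each output gene, try input subsets of increasing size and accept the first one whose mutual information with the output equals the output's entropy, i.e., the subset determines the output.

```python
from collections import Counter
from itertools import combinations
from math import log2

def entropy(stream):
    counts = Counter(stream)
    n = len(stream)
    return -sum((c / n) * log2(c / n) for c in counts.values())

def reveal_parents(transitions, target, max_k=3, tol=1e-9):
    # Smallest input subset whose mutual information with the target output equals H(output).
    ys = [y[target] for _, y in transitions]
    hy = entropy(ys)
    n = len(transitions[0][0])
    for k in range(1, max_k + 1):
        for subset in combinations(range(n), k):
            xs = [tuple(x[j] for j in subset) for x, _ in transitions]
            mi = entropy(xs) + hy - entropy(list(zip(xs, ys)))
            if abs(mi - hy) < tol:        # M(Y, X_subset) = H(Y): the subset determines Y
                return subset
    return None

transitions = [((0,0,0),(0,0,1)), ((0,0,1),(1,0,1)), ((0,1,0),(0,0,1)), ((0,1,1),(1,0,1)),
               ((1,0,0),(1,0,0)), ((1,0,1),(1,1,0)), ((1,1,0),(1,0,0)), ((1,1,1),(1,1,0))]
for g in range(3):
    print("gene", g + 1, "is determined by", reveal_parents(transitions, g))
```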

32

Step 3: Compute Boolean Functions

Consider only network dependencies

Ai-1 Ci-1 | Ai
0  0  | 0
0  1  | 1
1  0  | 1
1  1  | 1
Ai = Ai-1 OR Ci-1

Ai-1 Ci-1 | Bi
0  0  | 0
0  1  | 0
1  0  | 0
1  1  | 1
Bi = Ai-1 AND Ci-1

Ai-1 | Ci
0  | 1
1  | 0
Ci = NOT Ai-1


33

Example

Consider the following expression scenario of 4 genes:
A threshold at 0.6 yields the state sequence 0010, 1011, 1001, 1000, 0100, 0010.
Note: the intermediate state 0000 of the transition 1000 => 0000 => 0100 is ignored.

The transition map is partial and can admit multiple Boolean net models.
Exercise: find 2 distinct Boolean net models.
Exercise: use REVEAL to compute a network model.

[Figure: expression levels of g1, g2, g3, g4 over 16 time points, with values between 0.0 and 1.2]

Partial transition map (g1 g2 g3 g4 → f1 f2 f3 f4):
0010 → 1011
1011 → 1001
1001 → 1000
1000 → 0100
0100 → 0010

34

Computing A Partial Dependency Graph

[The partial map, scanned repeatedly for dependencies: f1 depends on g4; f2 depends on g4; f4 depends on g3]

[Figure: the resulting partial dependency graph over g1, g2, g3, g4]

Two expressions consistent with the partial map:
f1 = g3 ∨ g4
f1 = (¬g1 ∨ g4) ∧ (¬g2)

Dependency means that g4 must appear in any expression of f1; for a partial map, f1 may require other genes without depending on them.


35

Sensitivity Considerations & Noisy Maps

Consider again the 4-gene example.

[Figure: expression levels of g1, g2, g3, g4 over 16 time points, with values between 0.0 and 1.2]

Different thresholds yield different Boolean dynamics:
Tr1/2/3/4 = 0.6: 0010, 1011, 1001, 1000, 0100, 0010, …
Tr1/2/3 = 0.2, Tr4 = 0.8: 0010 10101011100011000100 011000010
Tr1/2/3/4 = 0.8: 0010, 0011, 1000, 0000, 0100, 0000, 0010, … (non-deterministic)

36

Research Questions

Extend the intuitive algorithm to handle partial, noisy maps.
Extend REVEAL to handle partial, noisy maps.
Probabilistic Boolean net models? Maximum-likelihood training… EM…?
SVM-based models… Boolean kernel machines…?


37

Final Notes

38

How Good Are Boolean Models?

Advantages:
Provide a good qualitative interpretation of regulation
Particularly important for switching behaviors
Phage lysis… sporulation… Drosophila patterns…
Such systems are "robust" with respect to exact expression values
Useful connection with evolutionary behaviors

Disadvantages:
The Boolean abstraction is a poor fit to real expression data
Cannot model important features:
Amplification of a signal; subtraction and addition of signals
Handling smoothly varying environmental parameters (e.g., temperature, nutrients)
Temporal performance behavior (e.g., cell-cycle period)
Negative feedback control (a Boolean model oscillates rather than stabilizes)


39

A Variety of Regulatory Network Models

Finite-field models: X(t+1) = p(X)
p is a polynomial over a finite field; generalizes the Boolean model.

Differential-equation models: dX/dt = f(X)
f describes non-linear control of change by neighbors.

Linear models: X(t+1) = W·X + B
W is a weight matrix; a linear approximation near steady state (see the sketch after this list).

Neural-network models: xi(t) = σ(W·X_Neighbors(i) + B)
The sigmoid non-linearity can be trained through gradient algorithms; comes with a learning algorithm.

Bayesian network models…
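For contrast with the Boolean model, a tiny sketch (my own, with made-up numbers) of the linear model X(t+1) = W·X + B listed above; in practice W would be fit as a linear approximation near a steady state.

```python
import numpy as np

# Made-up weight matrix and bias for a 3-gene linear model X(t+1) = W X + B.
W = np.array([[ 0.5, 0.0, 0.3],
              [ 0.2, 0.4, 0.0],
              [-0.3, 0.1, 0.6]])
B = np.array([0.1, 0.0, 0.2])

x = np.array([1.0, 0.5, 0.0])
for t in range(5):
    x = W @ x + B            # one discrete-time step of the linear dynamics
    print(t + 1, np.round(x, 3))
```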

