Inference in Bayesian networks

Devika Subramanian, Comp 440, Lecture 7


Exact inference in Bayesian networks

Inference by enumeration
The variable elimination algorithm


Probabilistic inference using Bayesian networks

A single mechanism can account for a wide range of inferences under uncertainty. (The examples below use the standard burglary network: Burglary (B) and Earthquake (E) are causes of Alarm (A), which in turn causes JohnCalls (J) and MaryCalls (M).)

Diagnostic inference (from effects to causes).

Example: P(B|J) = 0.016
Causal inference (from causes to effects).

Example: P(J|B) = 0.875


Probabilistic inference
Inter-causal inference (between causes of a common effect).

Example: P(B|A) = 0.376 but P(B|A,E) = 0.003. Even though B and E are a priori independent, once the alarm is observed, learning that an earthquake occurred makes a burglary very unlikely. This phenomenon is called "explaining away"; it cannot be captured in logic, which is monotonic.


Probabilistic inference
Mixed inferences (combining two or more of the above).

Example: P(A|J, not E) = 0.03 is a simultaneous use of diagnostic and causal inference.


Inference by enumeration
To compute the probability of a query variable Q given evidence E = (E1, …, Ek), we use the definition of conditional probability:

P(Q|E) = P(Q,E) / P(E)

Each of these terms can be computed by summing terms from the full joint distribution.


Example
P(B|J) = P(B,J) / P(J)

P(B,J) = Σ_{E,A,M} P(B,J,A,E,M)
       = Σ_{E,A,M} P(B) P(E) P(A|B,E) P(J|A) P(M|A)
       = P(B) Σ_E P(E) Σ_A P(A|B,E) P(J|A) Σ_M P(M|A)
       = P(B) Σ_E P(E) Σ_A P(A|B,E) P(J|A)
       = P(B) [ P(E) (P(A|B,E) P(J|A) + P(not A|B,E) P(J|not A))
              + P(not E) (P(A|B,not E) P(J|A) + P(not A|B,not E) P(J|not A)) ]
       = 0.001 [ 0.002 (0.95·0.9 + 0.05·0.05) + 0.998 (0.95·0.9 + 0.05·0.05) ]
       = 0.0008575


The expression tree: bottom-up computation (figure).


Example (contd.)

P(not B, J) = 0.999 [ 0.002 (0.29·0.9 + 0.71·0.05) + 0.998 (0.001·0.9 + 0.999·0.05) ] ≈ 0.0513413

P(J) = P(B,J) + P(not B,J)
P(B|J) = 0.0008575 / (0.0008575 + 0.0513413) ≈ 0.0164

Any query can be answered by computing sums of products of conditional probabilities from the network.
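As a concrete illustration of enumeration, here is a minimal Python sketch that computes P(B,J) and P(B|J) by summing the full joint over E, A and M. The CPT numbers are the ones used in this example (note that these slides use 0.95 for P(A|B,E) regardless of E), so treat the values as illustrative assumptions rather than the standard textbook figures.

```python
# Sketch: inference by enumeration for P(B | J=true) in the burglary network.
# CPT values follow the numbers used on these slides; illustrative assumptions.

P_B = {True: 0.001, False: 0.999}
P_E = {True: 0.002, False: 0.998}
# P(A=true | B, E); the slides use 0.95 whenever B is true.
P_A = {(True, True): 0.95, (True, False): 0.95,
       (False, True): 0.29, (False, False): 0.001}
P_J = {True: 0.90, False: 0.05}   # P(J=true | A)
P_M = {True: 0.70, False: 0.01}   # P(M=true | A)

def joint(b, e, a, j, m):
    """Full joint P(b, e, a, j, m) as a product of CPT entries."""
    pa = P_A[(b, e)]
    return (P_B[b] * P_E[e] *
            (pa if a else 1 - pa) *
            (P_J[a] if j else 1 - P_J[a]) *
            (P_M[a] if m else 1 - P_M[a]))

def prob_b_and_j(b):
    """P(B=b, J=true) by summing out E, A, M."""
    return sum(joint(b, e, a, True, m)
               for e in (True, False)
               for a in (True, False)
               for m in (True, False))

p_bj = prob_b_and_j(True)           # about 0.00086
p_j = p_bj + prob_b_and_j(False)    # P(J=true)
print(p_bj, p_bj / p_j)             # P(B,J) and P(B|J), roughly 0.016
```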


Inter-causal inference

P(B|A) = P(B,A) / P(A)

P(B,A) = Σ_{E,J,M} P(B) P(E) P(A|B,E) P(J|A) P(M|A)
       = P(B) Σ_E P(E) P(A|B,E) Σ_J P(J|A) Σ_M P(M|A)
       = P(B) Σ_E P(E) P(A|B,E)
       = P(B) [ P(E) P(A|B,E) + P(not E) P(A|B,not E) ]
       = 0.001 [ 0.002·0.95 + 0.998·0.95 ]
       = 0.00095


Inter-causal inference

P(not B, A) = P(not B) [ P(E) P(A|not B,E) + P(not E) P(A|not B,not E) ]
            = 0.999 [ 0.002·0.29 + 0.998·0.001 ] = 0.00158

P(B|A) = 0.00095 / (0.00095 + 0.00158) = 0.376


Inter-causal inference

P(B|A,E) = P(B,A,E) / P(A,E)

P(B,A,E) = Σ_{J,M} P(B) P(E) P(A|B,E) P(J|A) P(M|A)
         = P(B) P(E) P(A|B,E) = 0.001 · 0.002 · 0.95 = 0.0000019

P(not B,A,E) = P(not B) P(E) P(A|not B,E) = 0.999 · 0.002 · 0.29 = 0.000579

P(B|A,E) = 0.0000019 / (0.0000019 + 0.000579) = 0.003


Variable elimination
Basic inference: a 3-chain A → B → C

Prior at A; CPT at B: P(B|A); CPT at C: P(C|B)

P(C=c) = Σ_b Σ_a P(C=c|B=b) P(B=b|A=a) P(A=a)

If there are k values for A and B, what is the complexity of this calculation?


Variable elimination
Basic inference: a 3-chain A → B → C

Prior at A; CPT at B: P(B|A); CPT at C: P(C|B)

P(C=c) = Σ_b Σ_a P(C=c|B=b) P(B=b|A=a) P(A=a)

Compute the inner sum once as a factor, store it, and do not recompute it:

P(B=b) = Σ_a P(B=b|A=a) P(A=a)
P(C=c) = Σ_b P(C=c|B=b) P(B=b)


Variable elimination
Basic inference: an n-chain takes O(nk²) computation.

P(D=d) = Σ_c Σ_b Σ_a P(D=d|C=c) P(C=c|B=b) P(B=b|A=a) P(A=a)

f1(b) = Σ_a P(B=b|A=a) P(A=a)
f2(c) = Σ_b P(C=c|B=b) f1(b)
f3(d) = Σ_c P(D=d|C=c) f2(c)
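A small Python sketch of the chain computation, with illustrative (made-up) CPT values stored as nested dictionaries: each pass sums out one variable and stores the result as a factor, which is what gives the O(nk²) cost.

```python
# Sketch: eliminating variables along a chain A -> B -> C.
# Each CPT is a dict: cpt[child_value][parent_value] = probability.
# Variable names and numbers are illustrative assumptions.

def eliminate_chain(prior, cpts):
    """prior: dict over values of the first variable.
    cpts: list of CPTs along the chain, in order.
    Returns the marginal over the last variable in O(n * k^2)."""
    factor = prior                      # f(a) = P(A=a)
    for cpt in cpts:
        # New factor: f'(child) = sum_parent P(child | parent) * f(parent)
        factor = {child: sum(cpt[child][parent] * factor[parent]
                             for parent in factor)
                  for child in cpt}
    return factor

# Example with binary variables (values chosen arbitrarily):
prior_A = {True: 0.6, False: 0.4}
P_B_given_A = {True: {True: 0.7, False: 0.2}, False: {True: 0.3, False: 0.8}}
P_C_given_B = {True: {True: 0.9, False: 0.5}, False: {True: 0.1, False: 0.5}}
print(eliminate_chain(prior_A, [P_B_given_A, P_C_given_B]))   # marginal P(C)
```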


Singly connected networks
A singly connected network has exactly one undirected path between any two nodes in the network.

(Figure: a node X with parent Z and children Y1, …, Yn.)

What is P(X|Z, Y1, …, Yn)?
Z provides causal support for X; Y1, …, Yn provide evidential support for X.


Multiply connected networks

(Network: Cloudy → Sprinkler, Cloudy → Rain, Sprinkler → WetGrass, Rain → WetGrass.)

Compute P(Wet grass) using this network.

P(C) = 0.5
P(S|C) = 0.1, P(S|not C) = 0.5
P(R|C) = 0.8, P(R|not C) = 0.2
P(W|R,S) = 0.99, P(W|not R,S) = 0.90, P(W|R,not S) = 0.90, P(W|not R,not S) = 0.00


Variable elimination again

P(W) = Σ_{R,S} P(W|R,S) Σ_C P(R|C) P(S|C) P(C)

f1(R,S) = Σ_C P(R|C) P(S|C) P(C)
f2(W) = Σ_{R,S} P(W|R,S) f1(R,S)

We exploit the structure of the belief network and break up the computation into two pieces as above. We use a form of dynamic programming, storing the values of the inner sum to avoid recomputing them.


The variable elimination algorithm

To evaluate

Σ_{X1} Σ_{X2} … Σ_{Xm} Π_j P(Xj | Parents(Xj))

For i = m down to 1:
Group the terms in which Xi occurs and construct a factor f that sums over Xi; f is indexed by all the other variables that occur in those terms.
Replace the grouped terms in the original expression by the factor constructed above.


Example

P(W) = Σ_{R,S} Σ_C P(W|R,S) P(R|C) P(S|C) P(C)

Eliminate C first, then R and S:
f1(R,S) = Σ_C P(R|C) P(S|C) P(C)
P(W) = Σ_{R,S} P(W|R,S) f1(R,S)

Eliminate R and S first, then C:
f2(C) = Σ_{R,S} P(W|R,S) P(R|C) P(S|C)
P(W) = Σ_C P(C) f2(C)
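The two elimination orders can be checked numerically. Below is a small Python sketch (not a general variable-elimination implementation) that computes P(W = true) both ways using the CPT values from the sprinkler network slide; both orders give the same answer.

```python
# Sketch: P(WetGrass=true) in the sprinkler network by two elimination orders.
# CPT values are the ones given on the earlier slide.
P_C = {True: 0.5, False: 0.5}
P_S = {True: 0.1, False: 0.5}            # P(S=true | C)
P_R = {True: 0.8, False: 0.2}            # P(R=true | C)
P_W = {(True, True): 0.99, (False, True): 0.90,
       (True, False): 0.90, (False, False): 0.00}   # P(W=true | R, S)

def p(table, value, cond):
    """Probability of a boolean child taking `value` given conditioning value `cond`."""
    return table[cond] if value else 1 - table[cond]

# Order 1: eliminate C first -> factor f1(R, S), then sum out R and S.
f1 = {(r, s): sum(p(P_R, r, c) * p(P_S, s, c) * P_C[c] for c in (True, False))
      for r in (True, False) for s in (True, False)}
pw_order1 = sum(P_W[(r, s)] * f1[(r, s)] for (r, s) in f1)

# Order 2: eliminate R and S first -> factor f2(C), then sum out C.
f2 = {c: sum(P_W[(r, s)] * p(P_R, r, c) * p(P_S, s, c)
             for r in (True, False) for s in (True, False))
      for c in (True, False)}
pw_order2 = sum(P_C[c] * f2[c] for c in (True, False))

print(pw_order1, pw_order2)   # both give the same P(W=true)
```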


Complexity of variable elimination

For singly connected networks, the time and space complexity of inference is linear in the size of the network.
For multiply connected networks, time and space complexity is exponential in the size of the network in the worst case; exact inference is #P-hard.
In practice, choosing a good order in which to eliminate variables makes the process tractable.


Clustering
Idea: transform the network into a singly connected network by combining nodes.

(Network: Cloudy → Sprinkler+Rain (merged node) → WetGrass.)

P(C) = 0.5
P(R,S|C) = 0.08, P(R,not S|C) = 0.72, P(not R,S|C) = 0.02, P(not R,not S|C) = 0.18
P(R,S|not C) = 0.1, P(R,not S|not C) = 0.1, P(not R,S|not C) = 0.4, P(not R,not S|not C) = 0.4
P(W|R,S) = 0.99, P(W|not R,S) = 0.90, P(W|R,not S) = 0.90, P(W|not R,not S) = 0.00


Properties of clustering
An exact method for evaluating belief networks that are not singly connected.
Choosing good nodes to cluster is difficult; it raises the same issues as determining a good variable ordering. Further, the CPTs at clustered nodes are exponential in size (in the number of nodes clustered there).


Summary
Belief networks are a compact encoding of the full joint probability distribution over n variables that makes conditional independence assumptions between these variables explicit.
We can use belief networks to exactly compute any probability of interest over the given variables.
Exact inference is intractable for multiply connected networks.


Approximate inference
Direct sampling
Rejection sampling
Likelihood weighting
Markov Chain Monte Carlo (MCMC)


Direct sampling
Idea: generate samples (x1, …, xn) from the joint distribution specified by the network. For each xi, draw a sample according to P(xi | parents(xi)).

The estimated probability of (x1, …, xn) is simply the number of times (x1, …, xn) is generated divided by the total number of samples.

S_DS(x1, …, xn) = Π_i P(xi | Parents(Xi))


Direct sampling method
Choose a value for the root nodes weighted by their priors.
Use the CPTs for the direct descendants of the roots (given the values of the root nodes) to choose values for them.
Repeat the previous step all the way down to the leaves.


Direct Sampling

(Network: Cloudy → Sprinkler, Cloudy → Rain, Sprinkler → WetGrass, Rain → WetGrass.)

P(C) = 0.5
P(S|C) = 0.1, P(S|not C) = 0.5
P(R|C) = 0.8, P(R|not C) = 0.2
P(W|R,S) = 0.99, P(W|not R,S) = 0.90, P(W|R,not S) = 0.90, P(W|not R,not S) = 0.00


Example
To compute P(W) in the sprinkler network:

Choose a value for C with prior P(C) = 0.5; assume we pick C = false.
Choose a value for S: P(S|not C) = 0.5; assume we pick S = true.
Choose a value for R: P(R|not C) = 0.2; assume we pick R = false.
Choose a value for W drawn according to P(W|S, not R) = 0.9; assume we pick W = true.
We have generated the event (not C, S, not R, W).
Repeat this process and calculate the fraction of events in which W is true.
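A minimal Python sketch of this procedure for the sprinkler network, using the CPT values from the slides; it estimates P(WetGrass = true) as the fraction of sampled events with W = true.

```python
import random

# Sketch: direct (prior) sampling from the sprinkler network to estimate P(WetGrass).
P_C = 0.5
P_S = {True: 0.1, False: 0.5}    # P(Sprinkler=true | Cloudy)
P_R = {True: 0.8, False: 0.2}    # P(Rain=true | Cloudy)
P_W = {(True, True): 0.99, (False, True): 0.90,
       (True, False): 0.90, (False, False): 0.00}   # P(W=true | Rain, Sprinkler)

def sample_event():
    """Sample (C, S, R, W) in topological order, each from P(x | parents(x))."""
    c = random.random() < P_C
    s = random.random() < P_S[c]
    r = random.random() < P_R[c]
    w = random.random() < P_W[(r, s)]
    return c, s, r, w

n = 100_000
samples = [sample_event() for _ in range(n)]
print(sum(w for *_, w in samples) / n)   # estimate of P(WetGrass=true)
```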


Rejection sampling
Used to estimate P(X|e) in Bayesian networks.
Generate samples from the full joint distribution using direct sampling.
Eliminate all samples that don't match e.
Estimate P(X|e) as the fraction of the remaining samples with X = x.


Rejection sampling example
To find P(Rain | Sprinkler = true), generate 100 samples by direct sampling of the network.
Say 27 of them have Sprinkler = true, and 8 of the 27 have Rain = true.
So we estimate the conditional probability as 8/27 = 0.296.
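A corresponding Python sketch of rejection sampling for P(Rain | Sprinkler = true); WetGrass is omitted since it is neither queried nor observed here, which does not affect the estimate.

```python
import random

# Sketch: rejection sampling for P(Rain=true | Sprinkler=true) in the sprinkler
# network (CPT values as on the slides).
P_C = 0.5
P_S = {True: 0.1, False: 0.5}    # P(Sprinkler=true | Cloudy)
P_R = {True: 0.8, False: 0.2}    # P(Rain=true | Cloudy)

def sample_csr():
    """Sample (Cloudy, Sprinkler, Rain) from the prior; WetGrass is not needed here."""
    c = random.random() < P_C
    s = random.random() < P_S[c]
    r = random.random() < P_R[c]
    return c, s, r

n = 100_000
# Keep only the samples consistent with the evidence Sprinkler = true.
kept = [r for c, s, r in (sample_csr() for _ in range(n)) if s]
print(len(kept), sum(kept) / len(kept))   # accepted count and estimate of P(Rain | Sprinkler)
```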


Likelihood weighting
Avoids the inefficiency of rejection sampling by generating only events that are consistent with the evidence e.
To find P(X|e), the algorithm fixes e and samples X and the remaining variables Y in the network.
Each event is weighted by the likelihood it assigns to the evidence.


Likelihood weighting

(Network: Cloudy → Sprinkler, Cloudy → Rain, Sprinkler → WetGrass, Rain → WetGrass.)

P(C) = 0.5
P(S|C) = 0.1, P(S|not C) = 0.5
P(R|C) = 0.8, P(R|not C) = 0.2
P(W|R,S) = 0.99, P(W|not R,S) = 0.90, P(W|R,not S) = 0.90, P(W|not R,not S) = 0.00

Query: P(R|S,W) = ?
w = 1.0

As the evidence variables Sprinkler and WetGrass are fixed during sampling, the weight is updated step by step: w = 1.0 → 1.0 × 0.1 → 1.0 × 0.1 × 0.99.

Likelihood weighting example
P(Rain | Sprinkler, WetGrass) = ?
w = 1.0
Sample from P(Cloudy) = <0.5, 0.5>; suppose it returns Cloudy = true.
Sprinkler is an evidence variable with value true, so we set w = w × P(Sprinkler|Cloudy) = 0.1.
Sample from P(Rain|Cloudy) = <0.8, 0.2>; let's say we get Rain = true.
WetGrass is an evidence variable with value true, so we set w = w × P(WetGrass|Sprinkler,Rain) = 0.099.
We have generated the event (Cloudy, Sprinkler, Rain, WetGrass) with weight 0.099.
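A minimal Python sketch of likelihood weighting for P(Rain | Sprinkler = true, WetGrass = true) on the same network: evidence variables are never sampled, they only multiply the weight.

```python
import random

# Sketch: likelihood weighting for P(Rain=true | Sprinkler=true, WetGrass=true).
# CPT values as on the slides; evidence S=true, W=true is fixed, not sampled.
P_C = 0.5
P_S = {True: 0.1, False: 0.5}
P_R = {True: 0.8, False: 0.2}
P_W = {(True, True): 0.99, (False, True): 0.90,
       (True, False): 0.90, (False, False): 0.00}

def weighted_sample():
    """Return (rain_value, weight) for one weighted event."""
    w = 1.0
    c = random.random() < P_C     # Cloudy: sampled from its CPT
    w *= P_S[c]                   # Sprinkler=true is evidence: weight update
    r = random.random() < P_R[c]  # Rain: sampled from its CPT
    w *= P_W[(r, True)]           # WetGrass=true is evidence: weight update
    return r, w

n = 100_000
pairs = [weighted_sample() for _ in range(n)]
num = sum(w for r, w in pairs if r)
den = sum(w for _, w in pairs)
print(num / den)   # estimate of P(Rain=true | Sprinkler=true, WetGrass=true)
```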


Why likelihood weighting works

The sampling distribution is
S_W(z,e) = Π_i P(zi | Parents(Zi))

The likelihood weight is
w(z,e) = Π_i P(ei | Parents(Ei))

Note that S_W(z,e) × w(z,e) = P(z,e).


Markov processes: a quick intro
We are interested in predicting the weather, and for the purposes of this example the weather can take on one of three values: {sunny, rainy, cloudy}.
The weather on a given day depends only on the weather on the previous day:

P(w_t | w_1, …, w_{t-1}) = P(w_t | w_{t-1})

This is the Markov property.


Markov process example
We have knowledge of the transition probabilities between the various states: q(s,s'). q is called the transition kernel.

        s     r     c
  s   0.50  0.25  0.25
  r   0.50  0.00  0.50
  c   0.25  0.25  0.50


Prediction
Suppose day 1 is rainy. We represent this as a vector of probabilities over the three values (ordered sunny, rainy, cloudy):

π(1) = [0 1 0]

How do we predict the weather for day 2 given π(1) and the transition kernel q? From the rainy row of the transition kernel, the probability of day 2 being sunny is 0.5, cloudy is 0.5, and rainy is 0:

π(2) = π(1) q = [0.5 0 0.5]


Prediction (contd.)
We can calculate the distribution of weather at time t+1 given the distribution for time t:

π(t+1) = π(t) q = (π(t-1) q) q = … = π(1) q^t


Prediction
What's the weather going to be like on the 3rd, 5th, 7th and 9th days?

π(3) = π(1) q^2 = [0.375 0.25 0.375]
π(5) = π(1) q^4 = [0.3984 0.2031 0.3984]
π(7) = π(1) q^6 = [0.3999 0.2002 0.3999]
π(9) = π(1) q^8 = [0.4 0.2 0.4]
π(t) = [0.4 0.2 0.4] for all t ≥ 9


A new start state
Let the weather on day 1 be sunny. How does the distribution of weather change with time?

π(1) = [1 0 0]
π(3) = π(1) q^2 = [0.4375 0.1875 0.375]
π(5) = π(1) q^4 = [0.4023 0.1992 0.3984]
π(7) = π(1) q^6 = [0.4001 0.2000 0.3999]
π(9) = π(1) q^8 = [0.4 0.2 0.4]
π(t) = [0.4 0.2 0.4] for all t ≥ 9


Stationary distribution
Independent of the start state, this Markov process converges to the stationary distribution [0.4 0.2 0.4] in the limit.
The stationary distribution π* is the solution of the equation π* q = π*.
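A quick numerical check in Python/numpy, using the transition kernel from the weather example (state order sunny, rainy, cloudy): the powers of q drive any start vector toward [0.4 0.2 0.4], which is also the left eigenvector of q with eigenvalue 1.

```python
import numpy as np

# Transition kernel q from the weather example; rows/columns: sunny, rainy, cloudy.
q = np.array([[0.50, 0.25, 0.25],
              [0.50, 0.00, 0.50],
              [0.25, 0.25, 0.50]])

pi1 = np.array([0.0, 1.0, 0.0])      # day 1 is rainy
for day in (3, 5, 7, 9):
    pi_t = pi1 @ np.linalg.matrix_power(q, day - 1)
    print(day, pi_t)                  # converges toward [0.4, 0.2, 0.4]

# Stationary distribution: left eigenvector of q with eigenvalue 1, normalized.
vals, vecs = np.linalg.eig(q.T)
stat = np.real(vecs[:, np.argmin(np.abs(vals - 1))])
print(stat / stat.sum())              # [0.4, 0.2, 0.4]
```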


Sampling from a Markov chain
We can sample from the discrete distribution [0.4 0.2 0.4] as follows:

Start the Markov chain at a random state at time 1.
Use the transition kernel q to generate a state at time t+1, given the value of the state at time t.
Keep repeating the above step to generate a long chain.
After eliminating an initial prefix of the chain (burn-in), use the rest as samples from the above distribution.
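A small Python sketch of this sampling scheme: run the chain, discard a burn-in prefix, and compare the empirical state frequencies with [0.4 0.2 0.4]. The chain length and burn-in below are arbitrary choices.

```python
import random
from collections import Counter

# Sketch: run the weather Markov chain and use post-burn-in states as samples.
states = ["sunny", "rainy", "cloudy"]
q = {"sunny":  [0.50, 0.25, 0.25],
     "rainy":  [0.50, 0.00, 0.50],
     "cloudy": [0.25, 0.25, 0.50]}

def run_chain(steps, burn_in):
    s = random.choice(states)                         # random start state
    visited = []
    for t in range(steps):
        s = random.choices(states, weights=q[s])[0]   # one transition of the kernel
        if t >= burn_in:
            visited.append(s)
    return visited

counts = Counter(run_chain(steps=50_000, burn_in=1_000))
total = sum(counts.values())
print({s: counts[s] / total for s in states})   # close to 0.4, 0.2, 0.4
```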


When does this work?
As t → ∞, a Markov chain converges to a unique stationary distribution if it is

irreducible (every state in the state space is reachable from every other state), and
aperiodic (it has no periodic cycles).

The stationary distribution π satisfies

π(s') = Σ_s π(s) q(s,s')

Such a Markov chain is called ergodic, and the above theorem is called the ergodicity theorem.


Detailed balance equation

π(s) q(s,s') = π(s') q(s',s)

The detailed balance equation implies stationarity:

Σ_s π(s) q(s,s') = Σ_s π(s') q(s',s) = π(s') Σ_s q(s',s) = π(s')


Designing an MCMC sampler
We need to find a q(s,s') that satisfies the detailed balance equation with respect to the probability of interest, say P(x|e).


Gibbs sampler
Each variable is sampled conditionally on the current values of all the other variables.
Example: sample from a 2-d distribution by sampling the first coordinate from a 1-d conditional distribution, and then sampling the second coordinate from another 1-d conditional distribution.


Gibbs sampling and Bayesian networks

Let Xi be the variable to be sampled and let Y be all the hidden variables other than Xi. Let their current values be xi and y respectively. We sample a new value for Xi conditioned on all the other variables, including the evidence:

q(x,x') = q((xi,y),(xi',y)) = P(xi' | y,e)


Gibbs sampling works!

π(x) q(x,x') = P(x|e) q((xi,y),(xi',y)) = P(xi,y|e) P(xi'|y,e)
             = P(xi|y,e) P(y|e) P(xi'|y,e)
             = P(xi|y,e) P(xi',y|e)
             = π(x') q(x',x)

The detailed balance equation is satisfied with P(x|e) as the stationary distribution.


Inference by MCMC
Instead of generating events from scratch, MCMC generates events by making a random change to the preceding event.
Start the network at a random state (an assignment of values to all its nodes).
Generate the next state by randomly sampling a value for one of the non-evidence variables X, conditioned on the current values of the variables in the Markov blanket of X.
After a burn-in period, each visited state is a sample from the desired distribution.


Example
Query: P(R|S,W)

(Figure: a chain of network states, e.g. (C,R,S,W) → sample Cloudy from P(C|R,S) → (not C,R,S,W) → sample Rain from P(R|not C,S,W) → …, alternating between resampling C and R while S and W stay fixed.)


MCMC example
Query: P(Rain | Sprinkler, WetGrass)
Start at: (Cloudy, not Rain, Sprinkler, WetGrass).
Sample from P(Cloudy | Sprinkler, not Rain); suppose Cloudy = false.
New state: (not Cloudy, not Rain, Sprinkler, WetGrass).
Sample from P(Rain | not Cloudy, Sprinkler, WetGrass); suppose we get Rain = true.
New state: (not Cloudy, Rain, Sprinkler, WetGrass).
Repeat these two steps.


Sampling step
How do we sample Cloudy according to P(Cloudy | Sprinkler, not Rain)? Use the network!

P(C | S, not R) = P(C, S, not R) / [ P(C, S, not R) + P(not C, S, not R) ]
               = P(C) P(S|C) P(not R|C) / [ P(C) P(S|C) P(not R|C) + P(not C) P(S|not C) P(not R|not C) ]
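Plugging the CPT values from the sprinkler network into this formula gives the sampling distribution for Cloudy directly; a short Python check:

```python
# Sketch: P(Cloudy=true | Sprinkler=true, Rain=false) from the sprinkler-network CPTs.
P_C = 0.5
P_S = {True: 0.1, False: 0.5}    # P(S=true | C)
P_R = {True: 0.8, False: 0.2}    # P(R=true | C)

num = P_C * P_S[True] * (1 - P_R[True])                   # P(C, S, not R)
den = num + (1 - P_C) * P_S[False] * (1 - P_R[False])     # + P(not C, S, not R)
print(num / den)   # P(Cloudy=true | Sprinkler=true, Rain=false)
```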


MCMC

(Figure: a Markov chain over the four states (C,R), (C,not R), (not C,R), (not C,not R).)

Query: P(R|S,W)

Exercise: construct the transition kernel explicitly and verify that its fixed point is in fact P(R|S,W).


Why does MCMC work?
The sampling process settles into a dynamic equilibrium in which the long-run fraction of time spent in each state is exactly proportional to the probability P(X|e).

(Figure: states s and s' connected by the Markov process q(s'|s); over a run of length L, P(s) = n(s)/L.)


Gibbs sampling in Bayesian networks

Query: P(X|e); Z is the set of all variables in the network.
Start with a random value z for Z.
Pick a variable Xi in Z − e; generate a new value v according to P(Xi | MB(Xi)), the distribution of Xi given its Markov blanket.
Move to the new state, which is z with the value of Xi replaced by v.
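Putting the pieces together, here is a minimal Python sketch of Gibbs sampling for P(Rain | Sprinkler = true, WetGrass = true) in the sprinkler network. The Markov-blanket conditionals are computed by brute force from the joint with the evidence fixed (a simplification made for brevity, not an optimized implementation).

```python
import random

# Sketch: Gibbs sampling for P(Rain=true | Sprinkler=true, WetGrass=true)
# in the sprinkler network; CPT values as on the slides.
P_C = 0.5
P_S = {True: 0.1, False: 0.5}
P_R = {True: 0.8, False: 0.2}
P_W = {(True, True): 0.99, (False, True): 0.90,
       (True, False): 0.90, (False, False): 0.00}

def joint(c, r):
    """Joint probability of (Cloudy=c, Rain=r) together with the evidence S=true, W=true."""
    return ((P_C if c else 1 - P_C)
            * P_S[c]                          # Sprinkler = true
            * (P_R[c] if r else 1 - P_R[c])
            * P_W[(r, True)])                 # WetGrass = true given Rain=r, Sprinkler=true

def gibbs(steps=50_000, burn_in=1_000):
    c, r = True, True                         # arbitrary initial state
    rain_count, kept = 0, 0
    for t in range(steps):
        # Resample Cloudy given everything else (its Markov blanket).
        pc = joint(True, r) / (joint(True, r) + joint(False, r))
        c = random.random() < pc
        # Resample Rain given everything else (its Markov blanket).
        pr = joint(c, True) / (joint(c, True) + joint(c, False))
        r = random.random() < pr
        if t >= burn_in:
            kept += 1
            rain_count += r
    return rain_count / kept

print(gibbs())   # estimate of P(Rain=true | Sprinkler=true, WetGrass=true)
```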


Knowledge engineering for building Bayesian networks

Decide what to talk about
Identify relevant factors.
Identify relevant relationships between them.

Decide on a vocabulary of random variables
What are the variables that represent the factors?
What are the values they take on?
Should a variable be treated as continuous, or should it be discretized?


Knowledge engineering (contd.)
Encode knowledge about dependence between variables
Qualitative part: identify the topology of the belief network.
Quantitative part: specify the CPTs.

Encode description of specific problem instance

Translate problem givens into values of nodes in the belief network

Pose queries to inference procedure and get answers

Formulate quantity of interest as a conditional probability.


PathFinder System
Designed by David Heckerman (Probabilistic Similarity Networks, MIT Press, 1990).

A diagnostic system for lymph node diseases: 60 diseases, 100 symptoms and test results, and 14,000 probabilities in the Bayesian network.

Effort to build the system:
8 hours to determine the nodes,
35 hours to determine the topology,
45 hours to determine the probability values.

Performs better than expert doctors.

