Class logistics - courses.csail.mit.edu
Page 1: Class logistics - courses.csail.mit.edu

Class logistics

• Tonight at midnight, the take-home exam is due.
• Next week: spring break.
• The following week, on Thursday, your project proposals are due.
  – Feel free to ask Xiaoxu or me for feedback or ideas regarding the project.
  – Auditors are welcome to do a project, and we'll read them and give feedback.

Page 2: Class logistics - courses.csail.mit.edu

Generative Models

Bill Freeman, MIT
Some of these slides made with Andrew Blake, Microsoft Research Cambridge, UK

6.869 March 17, 2005

Page 3: Class logistics - courses.csail.mit.edu

Last class

(a) We looked at ways to fit probabilistic models to observed data, including EM.

(b) We’re looking at the modularized joint probability distribution described by graphical models.

Page 4: Class logistics - courses.csail.mit.edu

Making probability distributions modular, and therefore tractable:

Probabilistic graphical models

Vision is a problem involving the interactions of many variables: things can seem hopelessly complex. Everything is made tractable, or at least simpler, if we modularize the problem. That's what probabilistic graphical models do; let's examine how.

Readings:
• Jordan and Weiss intro article (fantastic!)
• Kevin Murphy's web page (comprehensive, with pointers to many advanced topics)

Page 5: Class logistics - courses.csail.mit.edu

A toy example

Suppose we have a system of 5 interacting variables, perhaps some are observed and some are not. There’s some probabilistic relationship between the 5 variables, described by their joint probability,P(x1, x2, x3, x4, x5).

If we want to find out what the likely state of variable x1 is (say, the position of the hand of some person we are observing), what can we do?

Two reasonable choices are: (a) find the value of x1 (and of all the other variables) that gives the maximum of P(x1, x2, x3, x4, x5); that's the MAP solution. Or (b) marginalize over all the other variables and then take the mean or the maximum of the resulting marginal over x1. Marginalizing, then taking the mean, is equivalent to finding the MMSE solution. Marginalizing, then taking the max, is called the max marginal solution and is sometimes a useful thing to do.

Page 6: Class logistics - courses.csail.mit.edu

To find the marginal probability at x1, we have to take this sum:

P(x_1) = \sum_{x_2, x_3, x_4, x_5} P(x_1, x_2, x_3, x_4, x_5)

If the system really is high dimensional, that will quickly become intractable. But if there is some modularity in P(x_1, x_2, x_3, x_4, x_5), then things become tractable again.

Suppose the variables form a Markov chain: x1 causes x2, which causes x3, etc. We might draw out this relationship as follows:

x_1 -> x_2 -> x_3 -> x_4 -> x_5

Page 7: Class logistics - courses.csail.mit.edu

P(a, b) = P(b|a) P(a)

By the chain rule, for any probability distribution, we have:

P(x_1, x_2, x_3, x_4, x_5) = P(x_1) P(x_2, x_3, x_4, x_5 | x_1)
  = P(x_1) P(x_2|x_1) P(x_3, x_4, x_5 | x_1, x_2)
  = P(x_1) P(x_2|x_1) P(x_3|x_1, x_2) P(x_4, x_5 | x_1, x_2, x_3)
  = P(x_1) P(x_2|x_1) P(x_3|x_1, x_2) P(x_4|x_1, x_2, x_3) P(x_5|x_1, x_2, x_3, x_4)

But if we exploit the assumed modularity of the probability distribution over the 5 variables (in this case, the assumed Markov chain structure), then that expression simplifies:

P(x_1, x_2, x_3, x_4, x_5) = P(x_1) P(x_2|x_1) P(x_3|x_2) P(x_4|x_3) P(x_5|x_4)

x_1 -> x_2 -> x_3 -> x_4 -> x_5

Now our marginalization summations distribute through those terms:

\sum_{x_2, x_3, x_4, x_5} P(x_1, x_2, x_3, x_4, x_5) = P(x_1) \sum_{x_2} P(x_2|x_1) \sum_{x_3} P(x_3|x_2) \sum_{x_4} P(x_4|x_3) \sum_{x_5} P(x_5|x_4)

Page 8: Class logistics - courses.csail.mit.edu

Belief propagation

Performing the marginalization by doing the partial sums is called "belief propagation":

\sum_{x_2, x_3, x_4, x_5} P(x_1, x_2, x_3, x_4, x_5) = P(x_1) \sum_{x_2} P(x_2|x_1) \sum_{x_3} P(x_3|x_2) \sum_{x_4} P(x_4|x_3) \sum_{x_5} P(x_5|x_4)

In this example, it has saved us a lot of computation. Suppose each variable has 10 discrete states. Then, not knowing the special structure of P, we would have to perform 10,000 additions (10^4) to marginalize over the four variables. But doing the partial sums on the right hand side, we only need 40 additions (10*4) to perform the same marginalization!
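To make the computational saving concrete, here is a small numerical sketch (not from the slides): it builds a made-up 5-variable Markov chain with 10 states per variable and checks that the distributed partial sums give the same marginal as brute-force summation over the full joint. The slides marginalize down to x1; for a directed chain that marginal is trivially the prior, so this sketch checks the marginal of the last node instead. All names and numbers are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
K = 10  # discrete states per variable, as in the slide's counting argument

# A made-up Markov chain: prior P(x1) and conditionals P(x_{i+1} | x_i).
p1 = rng.random(K); p1 /= p1.sum()
cond = [rng.random((K, K)) for _ in range(4)]
for T in cond:
    T /= T.sum(axis=1, keepdims=True)        # each row is a conditional distribution

# Brute force: build the full 10^5-entry joint and sum out x1..x4 to get P(x5).
joint = (p1[:, None, None, None, None]
         * cond[0][:, :, None, None, None]   # P(x2|x1)
         * cond[1][None, :, :, None, None]   # P(x3|x2)
         * cond[2][None, None, :, :, None]   # P(x4|x3)
         * cond[3][None, None, None, :, :])  # P(x5|x4)
p5_brute = joint.sum(axis=(0, 1, 2, 3))

# Distributed partial sums: push the summations through one factor at a time.
msg = p1
for T in cond:
    msg = msg @ T        # sum_{x_i} msg(x_i) P(x_{i+1}|x_i)
p5_chain = msg

print(np.allclose(p5_brute, p5_chain))       # True, with ~K^2 work per link instead of K^5
```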

Page 9: Class logistics - courses.csail.mit.edu

Another modular probabilistic structure, more common in vision problems, is an undirected graph:

x_1 - x_2 - x_3 - x_4 - x_5

The joint probability for this graph is given by:

P(x_1, x_2, x_3, x_4, x_5) = \Phi(x_1, x_2) \Phi(x_2, x_3) \Phi(x_3, x_4) \Phi(x_4, x_5)

where \Phi(x_1, x_2) is called a "compatibility function". We can define compatibility functions that result in the same joint probability as for the directed graph described in the previous slides; for that example, we could use either form.

Page 10: Class logistics - courses.csail.mit.edu

Markov Random Fields

• Allows rich probabilistic models for images.

• But built in a local, modular way. Learn local relationships, get global effects out.

Page 11: Class logistics - courses.csail.mit.edu

MRF nodes as pixels

Winkler, 1995, p. 32

Page 12: Class logistics - courses.csail.mit.edu

MRF nodes as patches

[Figure: a Markov network whose nodes are image patches (observations y) and scene patches (hidden x), with image-scene links Φ(xi, yi) and scene-scene links Ψ(xi, xj).]

Page 13: Class logistics - courses.csail.mit.edu

Network joint probability

The joint probability of the network is

P(x, y) = \frac{1}{Z} \prod_{(i,j)} \Psi(x_i, x_j) \prod_i \Phi(x_i, y_i)

where x are the scene nodes, y are the image nodes (local observations), \Psi(x_i, x_j) is the scene-scene compatibility function between neighboring scene nodes, and \Phi(x_i, y_i) is the image-scene compatibility function.
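As a concrete, made-up illustration of that factorized joint, the sketch below scores one configuration of a three-node chain MRF with discrete states and normalizes it by the partition function Z; the potential tables and the state count K are arbitrary.

```python
import numpy as np
from itertools import product

K = 4                                            # states per hidden node (arbitrary)
rng = np.random.default_rng(1)

# Made-up potentials for a 3-node chain x1 - x2 - x3 with observations y1, y2, y3 fixed.
psi = [rng.random((K, K)) for _ in range(2)]     # Psi(x_i, x_{i+1}), scene-scene
phi = [rng.random(K) for _ in range(3)]          # Phi(x_i, y_i), with y_i observed

def unnormalized_joint(x):
    """prod_ij Psi(x_i, x_j) * prod_i Phi(x_i, y_i) for one configuration x."""
    p = phi[0][x[0]] * phi[1][x[1]] * phi[2][x[2]]
    p *= psi[0][x[0], x[1]] * psi[1][x[1], x[2]]
    return p

# The partition function Z sums the unnormalized joint over all K^3 configurations.
Z = sum(unnormalized_joint(x) for x in product(range(K), repeat=3))
print(unnormalized_joint((0, 1, 2)) / Z)         # P(x1=0, x2=1, x3=2 | y)
```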

Page 14: Class logistics - courses.csail.mit.edu

In order to use MRFs:

• Given observations y, and the parameters of the MRF, how do we infer the hidden variables, x?
• How do we learn the parameters of the MRF?

Page 15: Class logistics - courses.csail.mit.edu

Outline of MRF section

• Inference in MRF's.
  – Gibbs sampling, simulated annealing
  – Iterated conditional modes (ICM)
  – Variational methods
  – Belief propagation
  – Graph cuts
• Vision applications of inference in MRF's.
• Learning MRF parameters.
  – Iterative proportional fitting (IPF)

Page 16: Class logistics - courses.csail.mit.edu

Variational methods

• Reference: Tommi Jaakkola's tutorial on variational methods, http://www.ai.mit.edu/people/tommi/
• Example: mean field
  – For each node, calculate the expected value of the node, conditioned on the mean values of the neighbors.

Page 17: Class logistics - courses.csail.mit.edu

Outline of MRF section

• Inference in MRF's.
  – Gibbs sampling, simulated annealing
  – Iterated conditional modes (ICM)
  – Variational methods
  – Belief propagation
  – Graph cuts
• Vision applications of inference in MRF's.
• Learning MRF parameters.
  – Iterative proportional fitting (IPF)

Page 18: Class logistics - courses.csail.mit.edu

Derivation of belief propagation

[Figure: a three-node chain; hidden nodes x1, x2, x3, each with an observation y1, y2, y3; potentials Φ(x1, y1), Φ(x2, y2), Φ(x3, y3) on the vertical links and Ψ(x1, x2), Ψ(x2, x3) on the horizontal links.]

x_{1\,MMSE} = \mathrm{mean}_{x_1} \sum_{x_2} \sum_{x_3} P(x_1, x_2, x_3, y_1, y_2, y_3)

Page 19: Class logistics - courses.csail.mit.edu

The posterior factorizes

x_{1\,MMSE} = \mathrm{mean}_{x_1} \sum_{x_2} \sum_{x_3} P(x_1, x_2, x_3, y_1, y_2, y_3)
  = \mathrm{mean}_{x_1} \sum_{x_2} \sum_{x_3} \Phi(x_1, y_1)\, \Psi(x_1, x_2)\, \Phi(x_2, y_2)\, \Psi(x_2, x_3)\, \Phi(x_3, y_3)
  = \mathrm{mean}_{x_1}\, \Phi(x_1, y_1) \sum_{x_2} \Psi(x_1, x_2)\, \Phi(x_2, y_2) \sum_{x_3} \Psi(x_2, x_3)\, \Phi(x_3, y_3)

[Same chain figure as the previous slide: x1, x2, x3 with observations y1, y2, y3 and potentials Φ(xi, yi), Ψ(xi, xi+1).]

Page 20: Class logistics - courses.csail.mit.edu

Propagation rules

[Same chain figure: x1, x2, x3 with observations y1, y2, y3 and potentials Φ(xi, yi), Ψ(xi, xi+1).]

x_{1\,MMSE} = \mathrm{mean}_{x_1} \sum_{x_2} \sum_{x_3} P(x_1, x_2, x_3, y_1, y_2, y_3)
  = \mathrm{mean}_{x_1} \sum_{x_2} \sum_{x_3} \Phi(x_1, y_1)\, \Psi(x_1, x_2)\, \Phi(x_2, y_2)\, \Psi(x_2, x_3)\, \Phi(x_3, y_3)
  = \mathrm{mean}_{x_1}\, \Phi(x_1, y_1) \sum_{x_2} \Psi(x_1, x_2)\, \Phi(x_2, y_2) \sum_{x_3} \Psi(x_2, x_3)\, \Phi(x_3, y_3)

Page 21: Class logistics - courses.csail.mit.edu

Propagation rules

[Same chain figure: x1, x2, x3 with observations y1, y2, y3 and potentials Φ(xi, yi), Ψ(xi, xi+1).]

x_{1\,MMSE} = \mathrm{mean}_{x_1}\, \Phi(x_1, y_1) \sum_{x_2} \Psi(x_1, x_2)\, \Phi(x_2, y_2) \sum_{x_3} \Psi(x_2, x_3)\, \Phi(x_3, y_3)

The bracketed partial sums can be written as messages:

M_1^2(x_1) = \sum_{x_2} \Psi(x_1, x_2)\, \Phi(x_2, y_2)\, M_2^3(x_2)

Page 22: Class logistics - courses.csail.mit.edu

Propagation rules

[Same chain figure: x1, x2, x3 with observations y1, y2, y3 and potentials Φ(xi, yi), Ψ(xi, xi+1).]

x_{1\,MMSE} = \mathrm{mean}_{x_1}\, \Phi(x_1, y_1) \sum_{x_2} \Psi(x_1, x_2)\, \Phi(x_2, y_2) \sum_{x_3} \Psi(x_2, x_3)\, \Phi(x_3, y_3)

M_1^2(x_1) = \sum_{x_2} \Psi(x_1, x_2)\, \Phi(x_2, y_2)\, M_2^3(x_2)

Page 23: Class logistics - courses.csail.mit.edu

Belief propagation: the nosey neighbor rule

“Given everything that I know, here’s what I think you should think”

(Given the probabilities of my being in different states, and how my states relate to your states, here’s what I think the probabilities of your states should be)

Page 24: Class logistics - courses.csail.mit.edu

Belief propagation messages

A message can be thought of as a set of weights on each of your possible states.

To send a message: multiply together all the incoming messages, except from the node you're sending to, then multiply by the compatibility matrix and marginalize over the sender's states:

M_i^j(x_i) = \sum_{x_j} \psi_{ij}(x_i, x_j) \prod_{k \in N(j) \setminus i} M_j^k(x_j)

Page 25: Class logistics - courses.csail.mit.edu

Beliefs

To find a node's beliefs: multiply together all the messages coming in to that node:

b_j(x_j) = \prod_{k \in N(j)} M_j^k(x_j)

Page 26: Class logistics - courses.csail.mit.edu

Belief, and message updates

b_j(x_j) = \prod_{k \in N(j)} M_j^k(x_j)

M_i^j(x_i) = \sum_{x_j} \psi_{ij}(x_i, x_j) \prod_{k \in N(j) \setminus i} M_j^k(x_j)
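Here is a minimal sum-product sketch of those two update rules on the three-node chain from the earlier slides. It is illustrative only: the potentials are random, the message names follow the slides' M_i^j convention, and the resulting belief at x1 is checked against brute-force marginalization.

```python
import numpy as np
from itertools import product

K = 5                                            # states per hidden node (arbitrary)
rng = np.random.default_rng(2)

# Made-up potentials for the chain x1 - x2 - x3 with fixed observations y1, y2, y3.
phi = [rng.random(K) for _ in range(3)]          # Phi(x_i, y_i) as a vector over x_i
psi = [rng.random((K, K)) for _ in range(2)]     # Psi(x_i, x_{i+1})

# Messages, passed inward toward node 1 (a chain, so one sweep suffices).
M_2_from_3 = psi[1] @ phi[2]                     # M_2^3(x_2) = sum_{x3} Psi(x2,x3) Phi(x3,y3)
M_1_from_2 = psi[0] @ (phi[1] * M_2_from_3)      # M_1^2(x_1) = sum_{x2} Psi(x1,x2) Phi(x2,y2) M_2^3(x2)

# Belief at node 1: local evidence Phi(x1,y1) acts as the message from the observed node y1.
b1 = phi[0] * M_1_from_2
b1 /= b1.sum()

# Brute-force check: marginalize the full joint over x2, x3.
b1_check = np.zeros(K)
for x1, x2, x3 in product(range(K), repeat=3):
    b1_check[x1] += (phi[0][x1] * phi[1][x2] * phi[2][x3]
                     * psi[0][x1, x2] * psi[1][x2, x3])
b1_check /= b1_check.sum()
print(np.allclose(b1, b1_check))                 # True
```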

Page 27: Class logistics - courses.csail.mit.edu

Optimal solution in a chain or tree: Belief Propagation

• "Do the right thing" Bayesian algorithm.
• For Gaussian random variables over time: Kalman filter.
• For hidden Markov models: forward/backward algorithm (and the MAP variant is Viterbi).

Page 28: Class logistics - courses.csail.mit.edu

No factorization with loops!

[Figure: the same three-node chain with observations y1, y2, y3, plus an extra edge Ψ(x1, x3) that closes a loop.]

\mathrm{mean}_{x_1}\, \Phi(x_1, y_1) \sum_{x_2} \Psi(x_1, x_2)\, \Phi(x_2, y_2) \sum_{x_3} \Psi(x_2, x_3)\, \Phi(x_3, y_3)

The extra factor Ψ(x_1, x_3) couples x_3 back to x_1, so the marginalization no longer breaks into these nested partial sums.

Page 29: Class logistics - courses.csail.mit.edu

Justification for running belief propagation in networks with loops

• Experimental results:
  – Error-correcting codes: Kschischang and Frey, 1998; McEliece et al., 1998
  – Vision applications: Freeman and Pasztor, 1999; Frey, 2000
• Theoretical results:
  – For Gaussian processes, means are correct: Weiss and Freeman, 1999
  – Large neighborhood local maximum for MAP: Weiss and Freeman, 2000
  – Equivalent to Bethe approximation in statistical physics: Yedidia, Freeman, and Weiss, 2000
  – Tree-weighted reparameterization: Wainwright, Willsky, Jaakkola, 2001

Page 30: Class logistics - courses.csail.mit.edu

Statistical mechanics interpretation

U - TS = free energy

U = average energy = \sum_{\mathrm{states}} p(x_1, x_2, \ldots)\, E(x_1, x_2, \ldots)
T = temperature
S = entropy = -\sum_{\mathrm{states}} p(x_1, x_2, \ldots) \ln p(x_1, x_2, \ldots)

Page 31: Class logistics - courses.csail.mit.edu

Free energy formulation

Defining

\Psi_{ij}(x_i, x_j) = e^{-E_{ij}(x_i, x_j)/T}, \qquad \Phi_i(x_i) = e^{-E_i(x_i)/T},

then the probability distribution P(x_1, x_2, \ldots) that minimizes the free energy is precisely the true probability of the Markov network,

P(x_1, x_2, \ldots) = \prod_{ij} \Psi_{ij}(x_i, x_j) \prod_i \Phi_i(x_i)

Page 32: Class logistics - courses.csail.mit.edu

Approximating the Free Energy

Exact: F[ p(x_1, x_2, \ldots, x_N) ]
Mean field theory: F[ b_i(x_i) ]
Bethe approximation: F[ b_i(x_i),\; b_{ij}(x_i, x_j) ]
Kikuchi approximations: F[ b_i(x_i),\; b_{ij}(x_i, x_j),\; b_{ijk}(x_i, x_j, x_k),\; \ldots ]

Page 33: Class logistics - courses.csail.mit.edu

Mean field approximation to free energy

U - TS = free energy

F_{\mathrm{MeanField}}(b_i) = \sum_{(ij)} \sum_{x_i, x_j} b_i(x_i)\, b_j(x_j)\, E_{ij}(x_i, x_j) + T \sum_i \sum_{x_i} b_i(x_i) \ln b_i(x_i)

The variational free energy is, up to an additive constant, equal to the Kullback-Leibler divergence between b(x) and the true probability, P(x). KL divergence:

D_{KL}(b \| P) = \sum_{x_1, x_2, \ldots} \prod_i b_i(x_i)\, \ln \frac{\prod_i b_i(x_i)}{P(x_1, x_2, \ldots)}

Page 34: Class logistics - courses.csail.mit.edu

Setting the derivative w.r.t. b_i to 0

U - TS = free energy. Corresponds to eq. 18 in the Jordan and Weiss manuscript.

b_i(x_i) = \alpha \exp\!\Big( -\frac{1}{T} \sum_{j \in N(i)} \sum_{x_j} b_j(x_j)\, E_{ij}(x_i, x_j) \Big)

In words: “Set the probability of each state xi at node i to be proportional to e to the minus expected energy corresponding to each state xi, given the expected values of all the neighboring states.”
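A small sketch of that mean-field fixed-point update, on a made-up chain of pairwise energies; the node count, state count, temperature, and iteration count are arbitrary choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(3)
n, K, T = 6, 3, 1.0                              # nodes, states per node, temperature (arbitrary)
E = [rng.random((K, K)) for _ in range(n - 1)]   # pairwise energies E_ij on a chain

# Start from uniform beliefs and iterate the mean-field update until it settles.
b = np.full((n, K), 1.0 / K)
for _ in range(200):
    for i in range(n):
        # Expected energy of each state of node i, under the neighbors' current beliefs.
        expected = np.zeros(K)
        if i > 0:
            expected += E[i - 1].T @ b[i - 1]    # sum_{x_j} b_j(x_j) E_ji(x_j, x_i)
        if i < n - 1:
            expected += E[i] @ b[i + 1]          # sum_{x_j} b_j(x_j) E_ij(x_i, x_j)
        b[i] = np.exp(-expected / T)
        b[i] /= b[i].sum()                       # the alpha normalization

print(np.round(b, 3))                            # approximate single-node marginals
```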

Page 35: Class logistics - courses.csail.mit.edu

Bethe Approximation

On tree-like lattices, the exact formula is:

p(x_1, x_2, \ldots, x_N) = \prod_{(ij)} p_{ij}(x_i, x_j) \prod_i [p_i(x_i)]^{1 - q_i}

where q_i is the number of neighbors of node i. The corresponding Bethe free energy is:

F_{\mathrm{Bethe}}(b_i, b_{ij}) = \sum_{(ij)} \sum_{x_i, x_j} b_{ij}(x_i, x_j)\big( E_{ij}(x_i, x_j) + T \ln b_{ij}(x_i, x_j) \big)
  + \sum_i (1 - q_i) \sum_{x_i} b_i(x_i)\big( E_i(x_i) + T \ln b_i(x_i) \big)

Page 36: Class logistics - courses.csail.mit.edu

Gibbs Free Energy

F_{\mathrm{Bethe}}(b_i, b_{ij})
  + \sum_{ij} \gamma_{ij} \Big\{ \sum_{x_i, x_j} b_{ij}(x_i, x_j) - 1 \Big\}
  + \sum_{ij} \sum_{x_j} \lambda_{ij}(x_j) \Big\{ b_j(x_j) - \sum_{x_i} b_{ij}(x_i, x_j) \Big\}

The Lagrange multipliers γ_ij enforce normalization of the pairwise beliefs, and the λ_ij(x_j) enforce the marginalization constraints.

Page 37: Class logistics - courses.csail.mit.edu

Gibbs Free Energy

[The same Lagrangian as on the previous slide:]

F_{\mathrm{Bethe}}(b_i, b_{ij})
  + \sum_{ij} \gamma_{ij} \Big\{ \sum_{x_i, x_j} b_{ij}(x_i, x_j) - 1 \Big\}
  + \sum_{ij} \sum_{x_j} \lambda_{ij}(x_j) \Big\{ b_j(x_j) - \sum_{x_i} b_{ij}(x_i, x_j) \Big\}

Set the derivative of the Gibbs free energy with respect to the b_{ij} and b_i terms to zero:

b_{ij}(x_i, x_j) = k\, \Psi_{ij}(x_i, x_j)\, \exp\!\big( (\lambda_{ij}(x_j) + \lambda_{ji}(x_i))/T \big)

b_i(x_i) = k\, \Phi_i(x_i)\, \exp\!\Big( \frac{1}{T(q_i - 1)} \sum_{j \in N(i)} \lambda_{ji}(x_i) \Big)

Page 38: Class logistics - courses.csail.mit.edu

Belief Propagation = Bethe

b_j(x_j) = \sum_{x_i} b_{ij}(x_i, x_j)

The Lagrange multipliers \lambda_{ij}(x_j) enforce the constraints.

Bethe stationary conditions = message update rules, with

\lambda_{ij}(x_j) = T \ln \prod_{k \in N(j) \setminus i} M_j^k(x_j)

Page 39: Class logistics - courses.csail.mit.edu

Region marginal probabilities

b_i(x_i) = k\, \Phi_i(x_i) \prod_{k \in N(i)} M_i^k(x_i)

b_{ij}(x_i, x_j) = k\, \Psi_{ij}(x_i, x_j) \prod_{k \in N(i) \setminus j} M_i^k(x_i) \prod_{k \in N(j) \setminus i} M_j^k(x_j)

Page 40: Class logistics - courses.csail.mit.edu

Belief propagation equations

Belief propagation equations come from the marginalization constraints:

M_i^j(x_i) = \sum_{x_j} \psi_{ij}(x_i, x_j) \prod_{k \in N(j) \setminus i} M_j^k(x_j)

Page 41: Class logistics - courses.csail.mit.edu

Results from Bethe free energy analysis

• Fixed points of the belief propagation equations are exactly the stationary points of the Bethe approximation.
• Belief propagation always has a fixed point.
• Connection with variational methods for inference: both minimize approximations to the free energy,
  – variational: usually use primal variables.
  – belief propagation: fixed-point equations for dual variables.
• Kikuchi approximations lead to more accurate belief propagation algorithms.
• Other Bethe free energy minimization algorithms: Yuille, Welling, etc.

Page 42: Class logistics - courses.csail.mit.edu

Kikuchi message-update rules

Groups of nodes send messages to other groups of nodes.

[Figure: update rules for messages between nodes i, j and between clusters of nodes i, j, k, l; a square of four nodes is the typical choice for a Kikuchi cluster.]

Page 43: Class logistics - courses.csail.mit.edu

Generalized belief propagation

Marginal probabilities for nodes in one row of a 10x10 spin glass.

Page 44: Class logistics - courses.csail.mit.edu

References on BP and GBP

• J. Pearl, 1985: the classic.
• Y. Weiss, NIPS 1998: inspired the application of BP to vision.
• W. Freeman et al., learning low-level vision, IJCV 1999: applications in super-resolution, motion, shading/paint discrimination.
• H. Shum et al., ECCV 2002: application to stereo.
• M. Wainwright, T. Jaakkola, A. Willsky: reparameterization version.
• J. Yedidia, AAAI 2000: the clearest place to read about BP and GBP.

Page 45: Class logistics - courses.csail.mit.edu

Graph cuts

• Algorithm: uses node label swaps or expansions as moves to reduce the energy. Swaps many labels at once, not just one at a time, as with ICM.
• Find which pixel labels to swap using min cut/max flow algorithms from network theory (see the sketch after this list).
• Can offer bounds on optimality.
• See Boykov, Veksler, Zabih, IEEE PAMI 23 (11), Nov. 2001 (available on the web).
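As a hedged illustration of the min-cut idea (a single exact cut for a binary-label MRF, not the multi-label swap/expansion moves of Boykov et al.), here is a minimal sketch that assumes the networkx library is available for max-flow/min-cut; the 1-D "image", data costs, and smoothness weight are invented for the example.

```python
import numpy as np
import networkx as nx

# A tiny binary labeling problem: E(x) = sum_i D_i(x_i) + sum_(i,j) w * [x_i != x_j].
obs = np.array([0.9, 0.8, 0.6, 0.2, 0.1])    # made-up noisy observations in [0, 1]
D0 = obs ** 2                                 # cost of assigning label 0
D1 = (1.0 - obs) ** 2                         # cost of assigning label 1
w = 0.3                                       # smoothness weight between neighbors

G = nx.DiGraph()
n = len(obs)
for i in range(n):
    G.add_edge("s", i, capacity=D1[i])        # cut this edge  <=>  x_i = 1, pay D1
    G.add_edge(i, "t", capacity=D0[i])        # cut this edge  <=>  x_i = 0, pay D0
for i in range(n - 1):                        # chain neighbors; pay w when labels differ
    G.add_edge(i, i + 1, capacity=w)
    G.add_edge(i + 1, i, capacity=w)

cut_value, (source_side, sink_side) = nx.minimum_cut(G, "s", "t")
labels = [0 if i in source_side else 1 for i in range(n)]
print(labels, "energy =", cut_value)          # the cut value equals the minimized energy
```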

Page 46: Class logistics - courses.csail.mit.edu

Comparison of graph cuts and belief propagation

Comparison of Graph Cuts with Belief Propagation for Stereo, using Identical MRF Parameters, ICCV 2003. Marshall F. Tappen and William T. Freeman.

Page 47: Class logistics - courses.csail.mit.edu

Ground truth, graph cuts, and belief propagation disparity solution energies

Page 48: Class logistics - courses.csail.mit.edu

Graph cuts versus belief propagation

• Graph cuts consistently gave slightly lower energy solutions for that stereo-problem MRF, although BP ran faster. (There is now a faster graph cuts implementation than the one we used.)
• However, here's why I still use belief propagation:
  – It works for any compatibility functions, not a restricted set like graph cuts.
  – I find it very intuitive.
  – Extensions: the sum-product algorithm computes the MMSE estimate, and generalized belief propagation gives you very accurate solutions, at a cost of time.

Page 49: Class logistics - courses.csail.mit.edu

MAP versus MMSE

Page 50: Class logistics - courses.csail.mit.edu

Show program comparing some methods on a simple MRF

testMRF.m

Page 51: Class logistics - courses.csail.mit.edu

Outline of MRF section

• Inference in MRF’s.– Gibbs sampling, simulated annealing– Iterated condtional modes (ICM)– Variational methods– Belief propagation– Graph cuts

• Vision applications of inference in MRF’s.• Learning MRF parameters.

– Iterative proportional fitting (IPF)

Page 52: Class logistics - courses.csail.mit.edu

Vision applications of MRF’s

• Stereo• Motion estimation• Super-resolution• Many others…

Page 53: Class logistics - courses.csail.mit.edu

Vision applications of MRF’s

• Stereo• Motion estimation• Super-resolution• Many others…

Page 54: Class logistics - courses.csail.mit.edu

Motion application

image patches

image

scene

scene patches

Page 55: Class logistics - courses.csail.mit.edu

What behavior should we see in a motion algorithm?

• Aperture problem
• Resolution through propagation of information
• Figure/ground discrimination

Page 56: Class logistics - courses.csail.mit.edu

The aperture problem

Page 57: Class logistics - courses.csail.mit.edu

The aperture problem

Page 58: Class logistics - courses.csail.mit.edu

Program demo

Page 59: Class logistics - courses.csail.mit.edu

Motion analysis: related work

• Markov network: Luettgen, Karl, Willsky and collaborators.
• Neural network or learning-based: Nowlan & T. J. Sejnowski; Sereno.
• Optical flow analysis: Weiss & Adelson; Darrell & Pentland; Ju, Black & Jepson; Simoncelli; Grzywacz & Yuille; Hildreth; Horn & Schunck; etc.

Page 60: Class logistics - courses.csail.mit.edu

Motion estimation results (maxima of scene probability distributions displayed)

Inference:

Image data

Initial guesses only show motion at edges.

Iterations 0 and 1

Page 61: Class logistics - courses.csail.mit.edu

Motion estimation results (maxima of scene probability distributions displayed)

Figure/ground still unresolved here.

Iterations 2 and 3

Page 62: Class logistics - courses.csail.mit.edu

Motion estimation results (maxima of scene probability distributions displayed)

Iterations 4 and 5

Final result compares well with vector quantized true (uniform) velocities.

Page 63: Class logistics - courses.csail.mit.edu

Vision applications of MRF’s

• Stereo• Motion estimation• Super-resolution• Many others…

Page 64: Class logistics - courses.csail.mit.edu

Super-resolution

• Image: low-resolution image
• Scene: high-resolution image

ultimate goal...


Page 65: Class logistics - courses.csail.mit.edu

Polygon-based graphics images are resolution independent; pixel-based images are not.

[Figure panels: pixel replication; cubic spline; cubic spline, sharpened; training-based super-resolution.]

Page 66: Class logistics - courses.csail.mit.edu

3 approaches to perceptual sharpening

(1) Sharpening; boost existing high frequencies.

(2) Use multiple frames to obtain higher sampling rate in a still frame.

(3) Estimate high frequencies not present in image, although implicitly defined.

In this talk, we focus on (3), which we’ll call “super-resolution”.

[Figure: amplitude vs. spatial frequency spectra.]

Page 67: Class logistics - courses.csail.mit.edu

Super-resolution: other approaches

• Schultz and Stevenson, 1994
• Pentland and Horowitz, 1993
• Fractal image compression (Polvere, 1998; Iterated Systems)
• Astronomical image processing (e.g., Gull and Daniell, 1978; "pixons", http://casswww.ucsd.edu/puetter.html)

Page 68: Class logistics - courses.csail.mit.edu

Training images, ~100,000 image/scene patch pairs

Images from two Corel database categories: “giraffes” and “urban skyline”.

Page 69: Class logistics - courses.csail.mit.edu

Do a first interpolation

Zoomed low-resolution

Low-resolution

Page 70: Class logistics - courses.csail.mit.edu

Zoomed low-resolution Full frequency original

Low-resolution

Page 71: Class logistics - courses.csail.mit.edu

Representation

Zoomed low-freq.; full freq. original.

Page 72: Class logistics - courses.csail.mit.edu

Representation

Zoomed low-freq.; full freq. original; true high freqs.; low-band input (contrast normalized, PCA fitted).

(To minimize the complexity of the relationships we have to learn, we remove the lowest frequencies from the input image and normalize the local contrast level.)

Page 73: Class logistics - courses.csail.mit.edu

Gather ~100,000 patches

Training data samples (magnified): low freqs. and corresponding high freqs.

Page 74: Class logistics - courses.csail.mit.edu

Input low freqs.; true high freqs.

Training data samples (magnified): low freqs. and high freqs.

Nearest neighbor estimate: estimated high freqs.

Page 75: Class logistics - courses.csail.mit.edu

Input low freqs.

Training data samples (magnified): low freqs. and high freqs.

Nearest neighbor estimate: estimated high freqs.

Page 76: Class logistics - courses.csail.mit.edu

Example: input image patch, and closest matches from the database

Input patch
Closest image patches from database
Corresponding high-resolution patches from database
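A minimal sketch of that database lookup, assuming made-up arrays of paired low-frequency / high-frequency training patch vectors; the real system uses roughly 100,000 contrast-normalized, PCA-fitted patch pairs, but the matching step itself is just a nearest-neighbor search like this:

```python
import numpy as np

rng = np.random.default_rng(4)

# Stand-in training database: paired low-frequency and high-frequency patch vectors.
n_train, d_low, d_high = 1000, 49, 25          # e.g. 7x7 low-freq and 5x5 high-freq patches
train_low = rng.standard_normal((n_train, d_low))
train_high = rng.standard_normal((n_train, d_high))

def estimate_high_freqs(input_low, k=10):
    """Return the k candidate high-frequency patches whose low-freq inputs best match."""
    dists = np.linalg.norm(train_low - input_low, axis=1)
    nearest = np.argsort(dists)[:k]
    return train_high[nearest]                  # candidates; BP later picks consistent ones

candidates = estimate_high_freqs(rng.standard_normal(d_low))
print(candidates.shape)                         # (10, 25)
```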

Page 77: Class logistics - courses.csail.mit.edu
Page 78: Class logistics - courses.csail.mit.edu

Scene-scene compatibility function, Ψ(xi, xj)

Assume overlapped regions, d, of hi-res. patches differ by Gaussian observation noise:

Uniqueness constraint, not smoothness.
Page 79: Class logistics - courses.csail.mit.edu

Image-scene compatibility function, Φ(xi, yi)

Assume Gaussian noise takes you from the observed image patch to the synthetic sample:
Page 80: Class logistics - courses.csail.mit.edu

Markov network

[Figure: the Markov network relating image patches (y) to scene patches (x), with compatibility functions Φ(xi, yi) and Ψ(xi, xj).]

Page 81: Class logistics - courses.csail.mit.edu

Belief Propagation

Input; iter. 0; iter. 1; iter. 3.

After a few iterations of belief propagation, the algorithm selects spatially consistent high-resolution interpretations for each low-resolution patch of the input image.
Page 82: Class logistics - courses.csail.mit.edu

Zooming 2 octaves

We apply the super-resolution algorithm recursively, zooming up 2 powers of 2, or a factor of 4 in each dimension.

85 x 51 input; cubic spline zoom to 340x204; maximum likelihood zoom to 340x204.

Page 83: Class logistics - courses.csail.mit.edu

Now we examine the effect of the prior assumptions made about images on the high-resolution reconstruction. First, cubic spline interpolation (cubic spline implies a thin-plate prior).

Original 50x58; true 200x232.
Page 84: Class logistics - courses.csail.mit.edu

Original 50x58; cubic spline (implies a thin-plate prior); true 200x232.
Page 85: Class logistics - courses.csail.mit.edu

Next, train the Markov network algorithm on a world of random-noise images.

Original 50x58; training images; true.
Page 86: Class logistics - courses.csail.mit.edu

The algorithm learns that, in such a world, we add random noise when zooming to a higher resolution.

Original 50x58; training images; Markov network; true.

Page 87: Class logistics - courses.csail.mit.edu

Next, train on a world of vertically oriented rectangles.

Original 50x58; training images; true.

Page 88: Class logistics - courses.csail.mit.edu

The Markov network algorithm hallucinates the vertical rectangles that it was trained on.

Original 50x58; training images; Markov network; true.

Page 89: Class logistics - courses.csail.mit.edu

Now train on a generic collection of images.

Original 50x58; training images; true.

Page 90: Class logistics - courses.csail.mit.edu

The algorithm makes a reasonable guess at the high-resolution image, based on its training images.

Original 50x58; training images; Markov network; true.

Page 91: Class logistics - courses.csail.mit.edu

Generic training images

Next, train on a generic set of training images, using the same camera as for the test image, but a random collection of photographs.

Page 92: Class logistics - courses.csail.mit.edu

Original 70x70; cubic spline; Markov net, training: generic; true 280x280.

Page 93: Class logistics - courses.csail.mit.edu

Kodak Imaging Science Technology Lab test.

3 test images, 640x480, to be zoomed up by 4 in each dimension.

8 judges, making 2-alternative, forced-choice comparisons.

Page 94: Class logistics - courses.csail.mit.edu

Algorithms compared

• Bicubic Interpolation
• Mitra's Directional Filter
• Fuzzy Logic Filter
• Vector Quantization
• VISTA

Page 95: Class logistics - courses.csail.mit.edu

Bicubic spline Altamira VISTA

Page 96: Class logistics - courses.csail.mit.edu

Bicubic spline Altamira VISTA

Page 97: Class logistics - courses.csail.mit.edu

User preference test results

"The observer data indicates that six of the observers ranked Freeman's algorithm as the most preferred of the five tested algorithms. However the other two observers rank Freeman's algorithm as the least preferred of all the algorithms....

Freeman's algorithm produces prints which are by far the sharpest out of the five algorithms. However, this sharpness comes at a price of artifacts (spurious detail that is not present in the original scene). Apparently the two observers who did not prefer Freeman's algorithm had strong objections to the artifacts. The other observers apparently placed high priority on the high level of sharpness in the images created by Freeman's algorithm."

Page 98: Class logistics - courses.csail.mit.edu
Page 99: Class logistics - courses.csail.mit.edu
Page 100: Class logistics - courses.csail.mit.edu

Training images

Page 101: Class logistics - courses.csail.mit.edu

Training image

Page 102: Class logistics - courses.csail.mit.edu

Processed image

Page 103: Class logistics - courses.csail.mit.edu

Outline of MRF section

• Inference in MRF's.
  – Gibbs sampling, simulated annealing
  – Iterated conditional modes (ICM)
  – Variational methods
  – Belief propagation
  – Graph cuts
• Vision applications of inference in MRF's.
• Learning MRF parameters.
  – Iterative proportional fitting (IPF)

Page 104: Class logistics - courses.csail.mit.edu

Learning MRF parameters, labeled data

Iterative proportional fitting lets you make a maximum likelihood estimate of a joint distribution from observations of various marginal distributions.

Page 105: Class logistics - courses.csail.mit.edu

True joint probability

Observed marginal distributions

Page 106: Class logistics - courses.csail.mit.edu

Initial guess at joint probability

Page 107: Class logistics - courses.csail.mit.edu

IPF update equation

Scale the previous iteration’s estimate for the joint probability by the ratio of the true to the predicted marginals.

Gives gradient ascent in the likelihood of the joint probability, given the observations of the marginals.

See: Michael Jordan’s book on graphical models
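In symbols, the standard IPF update matching that description is P_new(x) = P_old(x) * ptilde(x_c) / P_old(x_c), applied for each constrained marginal in turn. Below is a toy sketch for a two-variable joint with made-up target marginals.

```python
import numpy as np

rng = np.random.default_rng(5)

# Made-up "true" 4x4 joint, used only to generate the observed marginals.
true_joint = rng.random((4, 4)); true_joint /= true_joint.sum()
marg_row = true_joint.sum(axis=1)    # observed marginal over variable 1
marg_col = true_joint.sum(axis=0)    # observed marginal over variable 2

# Initial guess: uniform joint.
P = np.full((4, 4), 1.0 / 16)

for _ in range(50):
    # Scale by (true / predicted) marginals: rows, then columns.
    P *= (marg_row / P.sum(axis=1))[:, None]
    P *= (marg_col / P.sum(axis=0))[None, :]

print(np.allclose(P.sum(axis=1), marg_row), np.allclose(P.sum(axis=0), marg_col))
# The result is the maximum-entropy joint consistent with the observed marginals,
# not necessarily the original true_joint.
```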

Page 108: Class logistics - courses.csail.mit.edu

Convergence to correct marginals by the IPF algorithm

Page 109: Class logistics - courses.csail.mit.edu

Convergence to correct marginals by the IPF algorithm

Page 110: Class logistics - courses.csail.mit.edu

IPF results for this example: comparison of joint probabilities

True joint probability

Initial guess; final maximum-entropy estimate.

Page 111: Class logistics - courses.csail.mit.edu

Application to MRF parameter estimation

• Can show that for the ML estimate of the clique potentials, φc(xc), the empirical marginals equal the model marginals,

• This leads to the IPF update rule for φc(xc)

• Performs coordinate ascent in the likelihood of the MRF parameters, given the observed data.

Reference: unpublished notes by Michael Jordan

Page 112: Class logistics - courses.csail.mit.edu

More general graphical models than MRF grids

• In this course, we’ve studied Markov chains, and Markov random fields, but, of course, many other structures of probabilistic models are possible and useful in computer vision.

• For a nice on-line tutorial about Bayes nets, see Kevin Murphy’s tutorial in his web page.

Page 113: Class logistics - courses.csail.mit.edu

“Top-down” information: a representation for image context

Images

80-dimensional representation

Credit: Antonio Torralba

Page 114: Class logistics - courses.csail.mit.edu

“Bottom-up” information: labeled training data for object recognition.

• Hand-annotated 1200 frames of video from a wearable webcam.
• Trained detectors for 9 types of objects: bookshelf, desk, screen (frontal), steps, building facade, etc.
• 100-200 positive patches, >10,000 negative patches.

Page 115: Class logistics - courses.csail.mit.edu

Combining top-down with bottom-up: a graphical model showing the assumed statistical relationships between variables

Scene category

Visual "gist" observations

Object class

Particular objects

Local image features

kitchen, office, lab, conference room, open area, corridor, elevator and street.

Page 116: Class logistics - courses.csail.mit.edu

Categorization of new places
ICCV 2003 poster, by Torralba, Murphy, Freeman, and Rubin

Specific location
Location category
Indoor/outdoor
frame

Page 117: Class logistics - courses.csail.mit.edu

Bottom-up detection: ROC curves
ICCV 2003 poster, by Torralba, Murphy, Freeman, and Rubin

