Information Theoretic Image Thresholding

Laura Frost

Supervisors: Dr Peter Tischer

Dr Sid Ray

Aims of Project

Investigate mixture modelling as an approximation to an image’s histogram

Investigate a relative / objective criterion for assessing thresholding results

Thresholding

Good for images with fairly distinct, homogeneous regions

Region of uncertainty ~ object boundaries

Two important properties for mixture modelling thresholding
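
A minimal sketch of the operation itself: the Python below applies a fixed threshold to a grey-scale image. The random test image and the value T = 128 are illustrative assumptions, not project data.

```python
# Minimal bi-level thresholding sketch; the test image and the
# threshold T=128 are illustrative assumptions, not project data.
import numpy as np

def threshold_image(image: np.ndarray, T: int) -> np.ndarray:
    """Label each pixel background (0) or object (255) by grey level."""
    return np.where(image <= T, 0, 255).astype(np.uint8)

# Usage: a random 8-bit "image" and an arbitrary threshold.
image = np.random.randint(0, 256, size=(64, 64), dtype=np.uint8)
binary = threshold_image(image, T=128)
```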

Mixture Modelling

Approximate complex distribution with component distributions

Can describe components easily

Classify data, in this case pixels
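
A minimal sketch of the idea: build a two-component Gaussian mixture over the grey levels and classify each level by its most probable component. The weights, means and standard deviations below are illustrative assumptions.

```python
# Two-component Gaussian mixture over grey levels; the weights,
# means and standard deviations are illustrative assumptions.
import numpy as np

def gaussian(x, mu, sigma):
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

x = np.arange(256)                     # grey levels 0..255
w, mu, sigma = [0.3, 0.7], [80.0, 190.0], [15.0, 12.0]

# Weighted component densities; the mixture q(x) is their sum.
components = np.stack([wk * gaussian(x, m, s) for wk, m, s in zip(w, mu, sigma)])
q = components.sum(axis=0)

# Classify each grey level (and so each pixel with that level) by
# the component with the largest weighted density.
labels = components.argmax(axis=0)
```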

An example…

Need to ask…

Of thresholding: how good is a threshold?

Of mixture modelling: when is a mixture model a good model?

And so keep in mind…

A good mixture model implies a good threshold

A good threshold does not necessarily imply a good mixture model

Methodology 1

Test / extend / possibly improve iterative mixture modelling

Examine the Kullback-Leibler measure as a possible relative / objective criterion

Methodology 2

Test mixture modelling of image histograms using Snob

Examine Minimum Message Length as a possible objective criterion

Iterative Mixture Modelling

Fit mixture model at each grey-level

Select grey-level that produces best model

Good for bi-level thresholding
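
A sketch of this scheme, under the assumptions that each side of a candidate threshold is fitted by moment-matching a Gaussian and that models are scored by the Kullback-Leibler measure; the project's actual fitting procedure may differ.

```python
# Sketch of the iterative scheme: fit a two-component model at
# each candidate grey-level and keep the best one. Moment-matched
# Gaussian fits and a Kullback-Leibler score are assumptions.
import numpy as np

def fit_gaussian(levels, counts):
    """Pixel count, mean and std of one histogram segment."""
    n = counts.sum()
    mu = (levels * counts).sum() / n
    var = ((levels - mu) ** 2 * counts).sum() / n
    return n, mu, np.sqrt(var) + 1e-9

def gaussian(x, mu, sigma):
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

def best_threshold(hist):
    levels = np.arange(len(hist), dtype=float)
    N = hist.sum()
    p = hist / N                                  # image distribution
    best_t, best_kl = None, np.inf
    for t in range(1, len(hist)):                 # candidate grey-levels
        if hist[:t].sum() == 0 or hist[t:].sum() == 0:
            continue
        n1, mu1, s1 = fit_gaussian(levels[:t], hist[:t])
        n2, mu2, s2 = fit_gaussian(levels[t:], hist[t:])
        q = (n1 / N) * gaussian(levels, mu1, s1) \
            + (n2 / N) * gaussian(levels, mu2, s2)
        q = np.maximum(q, 1e-12)
        q /= q.sum()                              # renormalise over grey levels
        mask = p > 0
        kl = (p[mask] * np.log(p[mask] / q[mask])).sum()   # I(p;q)
        if kl < best_kl:
            best_t, best_kl = t, kl
    return best_t, best_kl
```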

Implementation

Based on work completed by David Bertolo (Honours 2001, Monash)

Distributions: Poisson (1 parameter); Gaussian and Rayleigh (2 parameters each)

Implementation

Improve threshold selection: use the intersection of components (sketched below)

Account for overlap
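
A sketch of intersection-based selection: equating the two weighted Gaussian densities and taking logs yields a quadratic in the grey level, whose root between the component means is a natural threshold. The parameter values below are illustrative assumptions.

```python
# Sketch of selecting a threshold at the intersection of two
# weighted Gaussian components: w1*N(mu1,s1) == w2*N(mu2,s2),
# after taking logs, is a quadratic in x.
import numpy as np

def gaussian_intersections(w1, mu1, s1, w2, mu2, s2):
    """Grey levels where the weighted component densities are equal."""
    a = 1.0 / (2 * s2**2) - 1.0 / (2 * s1**2)
    b = mu1 / s1**2 - mu2 / s2**2
    c = (mu2**2 / (2 * s2**2) - mu1**2 / (2 * s1**2)
         + np.log((w1 * s2) / (w2 * s1)))
    if abs(a) < 1e-12:                  # equal variances: linear equation
        return [-c / b]
    return [float(np.real(r)) for r in np.roots([a, b, c]) if np.isreal(r)]

# Usage: the root lying between the two means is the threshold.
roots = gaussian_intersections(0.3, 80, 15, 0.7, 190, 12)
T = [r for r in roots if 80 < r < 190][0]      # ~139 for these values
```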

Testing

Synthetic and natural images

Synthetic images created with specific distributions: test the accuracy of model fitting and give a lower bound for the Kullback-Leibler assessment measure (see the sketch below)
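
A sketch of such a synthetic image, assuming two regions drawn from known Gaussians so the true component parameters are available for checking the fit; the region layout and parameters are illustrative.

```python
# Sketch of synthetic test-image generation: two regions whose
# grey levels come from known Gaussians, so the true component
# parameters are available for checking the fitted model.
import numpy as np

rng = np.random.default_rng(0)
h, w = 256, 256
image = np.empty((h, w))

# Left half: background N(80, 15); right half: object N(190, 12).
image[:, : w // 2] = rng.normal(80, 15, size=(h, w // 2))
image[:, w // 2 :] = rng.normal(190, 12, size=(h, w // 2))
image = np.clip(image, 0, 255).astype(np.uint8)

hist = np.bincount(image.ravel(), minlength=256)   # grey-level histogram
```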

Synthetic Images

Results – Iterative Mixture Modelling

Component parameters correct for synthetic images

Component parameters for natural images (esp. outliers at boundaries)

Subjective assessment of segmented image

Results – Iterative Mixture Modelling

Examined five information measures:

Entropy of image, H(p)

Entropy of mixture model, H(q)

Kullback-Leibler (KL) measure, I(p;q)

KL relative to image entropy, I(p;q) / H(p)

KL relative to model entropy, I(p;q) / H(q)
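
A sketch of how these five measures can be computed from an image histogram p and a mixture-model distribution q over grey levels; base-2 logarithms (bits) are an assumption.

```python
# Sketch of the five measures, computed from a normalised image
# histogram p and a mixture-model distribution q over grey levels.
import numpy as np

def information_measures(p, q, eps=1e-12):
    p = p / p.sum()
    q = np.maximum(q, eps)
    q = q / q.sum()
    mask = p > 0
    Hp = -(p[mask] * np.log2(p[mask])).sum()          # entropy of image, H(p)
    Hq = -(q * np.log2(q)).sum()                      # entropy of model, H(q)
    I = (p[mask] * np.log2(p[mask] / q[mask])).sum()  # KL measure, I(p;q)
    return Hp, Hq, I, I / Hp, I / Hq
```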

Results – 2, 3, 4 components

Good fit for synthetic images

Gaussian () Poisson () Rayleigh opposite () Rayleigh right ()

Results – 2, 3, 4 components

Dealt with outliers at boundaries

Gaussian () Poisson () Rayleigh opposite () Rayleigh right ()

Results – 2, 3, 4 components

Overall, segmented images good quality

Gaussian (*) Poisson () Rayleigh opposite (*) Rayleigh right (*)

* Except for images with outliers

Results – 2, 3, 4 components

Time unreasonable for 4 components (complexity increases exponentially)

Poisson takes 4 times longer than other distributions

Matches example…

3 Gaussian mixture model

3 Poisson mixture model

3 Rayleigh mixture model

Results – 2, 3, 4 components

Gaussians: H(p) < H(q) for all images

Poissons: H(p) > H(q) for all ‘successfully’ thresholded images

Rayleighs: H(p) ~ H(q)

Results – 2, 3, 4 components

I(p;q) decreased as the number of components increased, as expected

I(p;q) / H(p)

I(p;q) / H(q)

A relative criterion

Is there value in comparing models of different complexities?

From these results, probably not

But comparing models of similar complexity looks reasonable

Mixture modelling with Snob

Problem: overfitting the data on both natural and synthetic images

E.g., a 512 x 512 image has 262,144 pixels to classify

Cheaper for Snob to make more classes

Sampling

Randomly sampling data at different rates

Snob finding very good classes!

Sampling image alumina.gif at 100 pixels (from a total of 65,536)…
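
A sketch of this sampling step, assuming a 256 x 256 stand-in image (65,536 pixels) in place of alumina.gif, which is not reproduced here.

```python
# Sketch of random pixel sampling at a fixed rate: 100 values
# drawn without replacement from a 256 x 256 (65,536-pixel) image.
import numpy as np

rng = np.random.default_rng(0)
image = rng.integers(0, 256, size=(256, 256))   # stand-in for alumina.gif

sample = rng.choice(image.ravel(), size=100, replace=False)
```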

Alumina example…

Over many runs found two classes: 0.20 * N(91.80, 23.20) + 0.80 * N(206.80, 11.40)

Compare to the iterative method: 0.19 * N(94.30, 26.56) + 0.81 * N(206.21, 11.80)

Alumina example…

Message length ~ 4.94 bpp

Not all images so successful at just 100 pixels

All seem to be ok at about 500 pixels

Snob and Thresholding

Sampling at such small rates, Snob handles missing data very well!

Since sampling at such small rates is necessary, Poissons were not compared as hoped

More work needs to be done, but looks promising

An Objective Criterion

Takes complexity of mixture model into account when calculating message length

Message Length is a very good candidate for use as an objective criterion for thresholding
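
A sketch of a two-part message length in the MML spirit: the first part states the model's parameters, the second encodes the pixels under the mixture q. The fixed per-parameter cost here is an illustrative assumption, not Snob's actual coding scheme.

```python
# Sketch of a two-part message length, in bits per pixel. The
# per-parameter cost is an illustrative assumption, not Snob's
# actual coding scheme.
import numpy as np

def message_length_bpp(hist, q, n_params, bits_per_param=16.0):
    """Approximate total message length in bits per pixel."""
    n = hist.sum()
    q = np.maximum(q / q.sum(), 1e-12)
    data_bits = -(hist * np.log2(q)).sum()     # second part: data given model
    model_bits = n_params * bits_per_param     # first part: stating the model
    return (model_bits + data_bits) / n

# Usage: for a two-Gaussian mixture, n_params = 5 (two means, two
# std devs, one free weight), an illustrative count.
```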

Aims of Project

Investigate mixture modelling as an approximation to an image’s histogram

Investigate a relative / objective criterion for assessing thresholding results

Conclusion

Iterative method: consistent results; optimal number of components found by trial and error; high computational complexity

Snob: problem with overfitting given the large number of data points; sampling at very small rates works well

Evaluation Criteria

Kullback-Leibler measure: relative to entropy; relative to model complexity

Minimum Message Length: promising as an objective criterion

Future Work

Better way to initially assign pixels to classes; modify Snob to do this

More testing with Snob

Addition of more distributions to Iterative method

