PROJECT TITLE
Lung Tumor Detection
GROUP MEMBERS
Muhammad Jabeer Khan (Sp10-Bce-011)
Mian Wisal Ahmad (Sp10-Bee-068)
Zeeshan Nazir (Sp10-Bce-031)
PROJECT SUPERVISOR & CO-SUPERVISOR
Engr. Atiqa Kayan (Supervisor)
Engr. Umairullah Tariq (Co-Supervisor)
PRESENTATION LAYOUT
Database (IMBA home public access library & INOR)
Image Acquisition
Pre-Processing
Gray Level Slicing
Connected Components and Labelling
Morphological Operations (Erosion and Dilation)
Features
Support Vector Machine
FLOW CHART
Image Acquisition → Pre-Processing → Segmentation → Post-Processing → Feature Extraction
DATABASE
Collection of lung images
Conversion from DICOM to JPG format (see the sketch below)
Training
Reference: https://eddie.via.cornell.edu/cgi-bin/datac/signon.cgi
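A minimal MATLAB sketch of the conversion step, assuming a DICOM slice downloaded from the database; the file names are placeholders:

slice = dicomread('ct_slice.dcm');   % read the CT pixel data (placeholder file name)
slice = mat2gray(slice);             % rescale intensities to [0,1] for writing
imwrite(slice, 'ct_slice.jpg');      % save as JPG for the later stages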
IMAGE ACQUISITION
CT image from the database
Input into MATLAB
PRE-PROCESSING
Grayscale image (elimination of hue and saturation)
Histogram Equalization
Overview: used, for instance, to enhance bone structures in X-ray or CT images and to correct under-exposed photographs.
Application: contrast adjustment using the image histogram (see the sketch below).
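A minimal sketch of these two steps in MATLAB, assuming a JPG slice named 'slice.jpg' from the acquisition stage:

img = imread('slice.jpg');           % CT slice loaded into MATLAB
if size(img, 3) == 3
    img = rgb2gray(img);             % eliminate hue and saturation
end
eq = histeq(img);                    % contrast adjustment using the image histogram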
GRAY SCALE IMAGE
HISTOGRAM EQUALIZED IMAGE
GRAY LEVEL SLICING
Highlighting a specific range of gray levels
Enhancing flaws in X-ray and CT scans
Bit-plane slicing: plane-by-plane information acquisition
Threshold value of the lung tumor
Reference: Digital Image Processing by S. Jayaraman, S. Esakkirajan, T. Veerakumar
GRAY LEVEL SLICING (CONT’D)
There are two main approaches, as in the sketch below:
1. Highlight a range of intensities while diminishing all others to a constant low level.
2. Highlight a range of intensities but preserve all others.
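A minimal sketch of both approaches in MATLAB, reusing the equalized image eq from the pre-processing sketch; the range [lo, hi] is a placeholder standing in for the tumor's threshold value:

lo = 120; hi = 200;                  % placeholder gray-level range of interest
mask = (eq >= lo) & (eq <= hi);      % pixels inside the range
sliced1 = uint8(mask) * 255;         % approach 1: diminish all others to a constant low level
sliced2 = eq;
sliced2(mask) = 255;                 % approach 2: highlight the range, preserve all others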
GRAY LEVEL SLICED IMAGE
CONNECTED COMPONENTS AND LABELLING
Finding the total number of connected regions in an image
Assigning a label to each connected region
ALGORITHM (FIRST PASS: ASSIGNING LABELS)
1. Scan the image pixel by pixel.
2. If the pixel is not background, check its neighbours.
3. If no neighbour is labelled, assign a new label to the pixel.
4. If a neighbour is already labelled, assign the neighbour's parent label as the pixel's main label.
ALGORITHM (SECOND PASS: AGGREGATION)
1. Scan each pixel.
2. If the pixel is labelled, get the label's parent (its root).
3. If the parent is already in the pattern list, add the pixel to the existing list; otherwise, add it to a new list.
STEP BY STEP WALKTHROUGH
In the beginning we have this image; we start with currentLabelCount = 1.
We find our first non-background pixel
and get its non-background neighbours.
None is labelled, so we set the current pixel to currentLabelCount and increment it.
On to the next pixel: this one has a neighbour which is already labelled,
so we assign that neighbour's parent label to the pixel.
We continue on; none of this pixel's neighbours is labelled.
We increment currentLabelCount and assign it to the pixel; again, its parent is set to itself.
It gets interesting when neighbours have different labels:
1) We choose the main label (the smallest label in the list, i.e. 1).
2) We set it to be the parent of the other labels.
A few more rounds and we end up with this. Notice the blue number in the upper right corner: that is the parent label, the de facto one upon which we aggregate later.
That's it. Now all we have to do is pass over the image again, pixel by pixel, getting the root of each labelled pixel and storing it in our pattern list. A sketch of the whole procedure follows.
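A minimal MATLAB sketch of the whole two-pass procedure (4-connectivity), assuming a binary input image bw; in practice the built-in bwlabel performs the same labelling:

function labels = two_pass_label(bw)
% Two-pass connected component labelling with parent pointers (4-connectivity)
[h, w] = size(bw);
labels = zeros(h, w);
parent = [];                                   % parent(k) = parent label of label k
next = 1;                                      % currentLabelCount
for r = 1:h                                    % first pass: assign provisional labels
    for c = 1:w
        if ~bw(r, c), continue; end            % skip background pixels
        nbrs = [];                             % labelled north and west neighbours
        if r > 1 && labels(r-1, c) > 0, nbrs(end+1) = labels(r-1, c); end
        if c > 1 && labels(r, c-1) > 0, nbrs(end+1) = labels(r, c-1); end
        if isempty(nbrs)
            labels(r, c) = next;               % no labelled neighbour: new label,
            parent(next) = next;               % which is its own parent
            next = next + 1;
        else
            m = min(nbrs);                     % main label = smallest neighbour label
            labels(r, c) = m;
            for n = nbrs                       % make the main label's root the parent of the others
                parent(find_root(parent, n)) = find_root(parent, m);
            end
        end
    end
end
for r = 1:h                                    % second pass: aggregate by root label
    for c = 1:w
        if labels(r, c) > 0
            labels(r, c) = find_root(parent, labels(r, c));
        end
    end
end
end

function root = find_root(parent, x)
% Follow parent pointers up to the root label
while parent(x) ~= x
    x = parent(x);
end
root = x;
end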
MORPHOLOGICAL OPERATIONS
Erosion
Dilation
Combined, they give opening (acts on objects) and closing (acts on the background)
STRUCTURING ELEMENT
A small set used to probe the image under study
For each SE, define an origin
Shape and size must be adapted to the geometric properties of the objects
EROSION
The contraction of an image (binary or grayscale), a.k.a. region shrinking
A structuring element is applied to the image data to produce a new image
Only locations where the SE pattern fits the image are kept
IMAGE OF EROSION
HOW DOES IT WORK?
A pixel is kept on (1) only when the structuring element, centred on it, matches the image pixels beneath it.
Both ON (1) and OFF (0) pixels must match.
[Figure: the erosion result and its difference from the original; the object erodes from the right]
EROSION EXAMPLE
MATHEMATICAL DEFINITION OF EROSION
1. Erosion is the morphological dual of dilation.
2. It combines two sets using vector subtraction of set elements.
3. Let A ⊖ B denote the erosion of A by B:
A ⊖ B = { x ∈ Z² | x + b ∈ A for every b ∈ B }
DILATION
Fills in holes.
Smoothes object boundaries.
Adds an extra outer ring of pixels onto the object boundary, i.e., the object becomes slightly larger.
IMAGE OF DILATION
[Figure: set X dilated by structuring element B, with the difference and the dilation result marked]
EXAMPLE OF DILATION
Dilation: the set of points x = (x1, x2) such that, if we centre B on them, the translated B intersects X.
MATHEMATICAL DEFINITION OF DILATION
X ⊕ B = { x ∈ Z² | Bx ∩ X ≠ ∅ }
IMAGE OF MORPHOLOGICAL OPERATIONS
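A minimal MATLAB sketch of these operations, assuming a binary mask (for instance, the sliced image from the gray-level slicing stage); the disk size is a placeholder:

se = strel('disk', 3);               % disk-shaped structuring element
er = imerode(mask, se);              % erosion: region shrinking
di = imdilate(mask, se);             % dilation: fills holes, grows the object
op = imopen(mask, se);               % opening = erosion followed by dilation
cl = imclose(mask, se);              % closing = dilation followed by erosion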
TRAINING
One-by-one extraction of each labelled region.
Identifying the tumor region.
Supervised learning through a support vector machine.
FEATURES FOR EXTRACTION
Features computed for each labelled region (sample values in parentheses), extracted as sketched below:
Area (305)
Eccentricity (0.5828)
Perimeter (84.7696)
Standard deviation (0.0275)
Mean (7.4599e-4)
Extent (0.6689)
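A sketch of the extraction with regionprops, assuming the label matrix labels from the labelling stage and the pre-processed slice eq; Area, Eccentricity, Perimeter and Extent are built-in region properties, while the mean and standard deviation are computed from each region's gray values:

stats = regionprops(labels, 'Area', 'Eccentricity', 'Perimeter', 'Extent', 'PixelIdxList');
gray = im2double(eq);                          % pre-processed slice as doubles
features = zeros(numel(stats), 6);
for k = 1:numel(stats)
    px = gray(stats(k).PixelIdxList);          % gray values inside region k
    features(k, :) = [stats(k).Area, stats(k).Eccentricity, stats(k).Perimeter, ...
                      std(px), mean(px), stats(k).Extent];
end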
SUPPORT VECTOR MACHINE
SVMs maximize the margin around the separating hyperplane.
A.k.a. large margin classifiers
The decision function is fully specified by a subset of training samples, the support vectors.
Solving SVMs is a quadratic programming problem
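A minimal training sketch, assuming the features matrix from the previous slide and a hand-labelled 0/1 vector isTumor marking the tumor regions; newer MATLAB releases provide fitcsvm in the Statistics Toolbox (the older svmtrain worked similarly):

model = fitcsvm(features, isTumor, 'KernelFunction', 'linear');   % train a large-margin classifier
predicted = predict(model, features);                             % classify each labelled region
accuracy = mean(predicted == isTumor)                             % rough training accuracy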
MAXIMUM MARGIN: FORMALIZATION
w: decision hyperplane normal vector
xi: data point i
yi: class of data point i (+1 or −1)
Classifier: f(xi) = sign(wTxi + b)
Functional margin of xi: yi(wTxi + b)
The functional margin of a dataset is twice the minimum functional margin over all points; the factor of 2 comes from measuring the whole width of the margin.
GEOMETRIC MARGIN
The distance from an example to the separator is r = y(wTx + b)/||w||.
Examples closest to the hyperplane are support vectors.
The margin ρ of the separator is the width of separation between the support vectors of the two classes.
Finding r: the line from x′ to x is perpendicular to the decision boundary, so it is parallel to w. The unit vector is w/||w||, so the line is rw/||w||, and x′ = x − y r w/||w||. Since x′ lies on the boundary, it satisfies wTx′ + b = 0, i.e. wT(x − y r w/||w||) + b = 0. Because ||w|| = sqrt(wTw), this reduces to wTx − y r||w|| + b = 0, and solving for r gives r = y(wTx + b)/||w||.
LINEAR SVM MATHEMATICALLY: THE LINEARLY SEPARABLE CASE
Assume that all data are at least distance 1 from the hyperplane; then the following two constraints follow for a training set {(xi, yi)}:
wTxi + b ≥ 1 if yi = 1
wTxi + b ≤ −1 if yi = −1
For support vectors, the inequality becomes an equality. Then, since each example's distance from the hyperplane is r = y(wTx + b)/||w||, the margin is ρ = 2/||w||.
LINEAR SUPPORT VECTOR MACHINE (SVM)
Hyperplane: wTx + b = 0
Extra scale constraint: min i=1,…,n |wTxi + b| = 1
This implies wT(xa − xb) = 2 for support vectors xa, xb on opposite margins (wTxa + b = 1 and wTxb + b = −1), so the margin is ρ = ||xa − xb||₂ = 2/||w||₂.
LINEAR SVMS MATHEMATICALLY (CONT.)
Then we can formulate the quadratic optimization problem:
Find w and b such that ρ = 2/||w|| is maximized, and for all {(xi, yi)}: wTxi + b ≥ 1 if yi = 1; wTxi + b ≤ −1 if yi = −1.
A better formulation (min ||w|| = max 1/||w||):
Find w and b such that Φ(w) = ½wTw is minimized, and for all {(xi, yi)}: yi(wTxi + b) ≥ 1.
THE OPTIMIZATION PROBLEM SOLUTION
The solution has the form:
w = Σ αi yi xi
b = yk − wTxk for any xk such that αk ≠ 0
Each non-zero αi indicates that the corresponding xi is a support vector.
The classifying function then has the form:
f(x) = Σ αi yi xiTx + b
It relies on an inner product between the test point x and the support vectors xi.
NON-LINEAR SVMS
Datasets that are linearly separable (with some noise) work out great.
A hard dataset cannot be classified by a linear separator.
The idea is to map the data to a higher-dimensional space.
[Figure: 1-D data on the x axis becomes separable after mapping to (x, x²)]
NON-LINEAR SVMS: FEATURE SPACES
General idea: the original feature space can always be mapped to some higher-dimensional feature space where the training set is separable:
Φ: x → φ(x)
THE “KERNEL TRICK”
The linear classifier relies on an inner product between vectors: K(xi, xj) = xiTxj
If every datapoint is mapped into a high-dimensional space via some transformation Φ: x → φ(x), the inner product becomes: K(xi, xj) = φ(xi)Tφ(xj)
A kernel function is a function that corresponds to an inner product in some expanded feature space.
Example: for 2-dimensional vectors x = [x1 x2], let K(xi, xj) = (1 + xiTxj)².
We need to show that K(xi, xj) = φ(xi)Tφ(xj):
K(xi, xj) = (1 + xiTxj)²
= 1 + xi1²xj1² + 2xi1xj1xi2xj2 + xi2²xj2² + 2xi1xj1 + 2xi2xj2
= [1  xi1²  √2xi1xi2  xi2²  √2xi1  √2xi2]T [1  xj1²  √2xj1xj2  xj2²  √2xj1  √2xj2]
= φ(xi)Tφ(xj), where φ(x) = [1  x1²  √2x1x2  x2²  √2x1  √2x2]
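A quick numeric check of this identity in MATLAB (a sketch with arbitrary test vectors):

xi = [0.3; -1.2];                              % arbitrary 2-D vectors
xj = [2.0;  0.5];
phi = @(x) [1; x(1)^2; sqrt(2)*x(1)*x(2); x(2)^2; sqrt(2)*x(1); sqrt(2)*x(2)];
K_direct = (1 + xi.' * xj)^2;                  % the kernel evaluated directly
K_mapped = phi(xi).' * phi(xj);                % inner product in the expanded space
disp([K_direct, K_mapped])                     % the two values agree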
KERNELS
Why use kernels? They make a non-separable problem separable and map data into a better representational space.
Common kernels:
Linear: K(x, z) = xTz
Polynomial: K(x, z) = (1 + xTz)^d, which gives feature conjunctions
Radial basis function (an infinite-dimensional space)
TIMELINE
1st Presentation
• Study of project
2nd Presentation
• Image Acquisition and Pre-Processing
3rd Presentation
• Gray Level Slicing and Connected Components Labeling
4th Presentation
• Feature Extraction and SVM
5th Presentation
• Presenting the project to the external examiner