Source: shodhganga.inflibnet.ac.in/bitstream/10603/41999/8/08_chapter3.pdf


CHAPTER 3

SYSTEMATIC ASSESSMENT ON EXISTING WORKS

An efficient medical image retrieval system involves several stages, including preprocessing, feature extraction, classification, retrieval and indexing. For retrieval of images, Euclidean distances are calculated between the query image and the database images. In all the existing methods, various visual features have been considered indirectly to retrieve images from databases. This chapter presents the implementation and result analysis of some of the existing methods presented in the literature review, in order to assess the exact retrieval performance of these works and to derive the best of them for further investigation.

Experimental setup: An Intel® Core 2 Duo CPU workstation with 2 GB RAM was used for conducting the experiments. The MATLAB 7.2.0 Image Processing Toolbox was used for developing the user interface components as the front end, the MATLAB 7.2.0 workspace was used as the feature database for back-end storage, and other MATLAB 7.2.0 utilities were used for the image processing work. The MathType tool was used for writing the mathematical equations in the document. Initially, a MATLAB 7.2.0 workspace database with several real-time medical images was used for testing these CBMIR systems.

Similarity Comparison: In these works, image retrieval is performed by similarity comparison. For similarity comparison, the Euclidean distance d is computed using the following equation:


d = \sqrt{\sum_{i=1}^{N} \left( F_q[i] - F_{db}[i] \right)^2}        (3.1)

where F_q[i] is the i-th feature of the query image and F_{db}[i] is the corresponding feature in the feature vector database. Here N refers to the number of features in a feature vector (S. Nandagopalan et al 2008).
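As a minimal illustration of Eq. 3.1 and the ranking step (a Python sketch assuming plain list-based feature vectors; the thesis implementation itself is in MATLAB):

```python
import math

def euclidean_distance(f_query, f_db):
    """Euclidean distance between two feature vectors (Eq. 3.1)."""
    assert len(f_query) == len(f_db)
    return math.sqrt(sum((q - d) ** 2 for q, d in zip(f_query, f_db)))

# Rank database images by closeness to the query (smaller distance = more similar).
query = [0.2, 0.5, 0.1]
database = {"img1": [0.2, 0.4, 0.1], "img2": [0.9, 0.1, 0.3]}
ranked = sorted(database, key=lambda name: euclidean_distance(query, database[name]))
print(ranked)  # most similar image first
```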

Retrieval Efficiency: Precision and recall are the basic measures used in evaluating the search strategies of various methods. Precision (accuracy of the retrieval) is the fraction of retrieved instances that are relevant, while recall (sensitivity of the retrieval) is the fraction of relevant instances that are retrieved.
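For instance, with 10 retrieved images of which 5 are relevant, out of 8 relevant images in the database (an illustrative Python sketch, not part of the thesis):

```python
def precision_recall(retrieved, relevant):
    """Precision = |retrieved & relevant| / |retrieved|;
    recall = |retrieved & relevant| / |relevant|."""
    retrieved, relevant = set(retrieved), set(relevant)
    hits = len(retrieved & relevant)
    return hits / len(retrieved), hits / len(relevant)

p, r = precision_recall(retrieved=range(10), relevant=[0, 1, 2, 3, 4, 12, 13, 14])
print(p, r)  # 0.5 0.625
```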

Reference database: Precision and recall were calculated from the reference database of 560 medical images, which includes real-time medical images collected from a local hospital and medical images collected from the CasImage and IRMA database repositories.

Analysis and Implementation: By analyzing the similarity in execution and presentation concepts of the existing methods, it is concluded that all these works can be grouped into three categories, viz. single, dual and tri feature extraction. For an effective comparison of all the above three categories, they are implemented in three parts under the same experimental setup using the same reference database, as follows:

1. CBMIR system using single visual feature content

2. CBMIR system using dual visual feature content

3. CBMIR system using tri visual feature content


The diagrammatic representation of these works is depicted in Figure 3.1.

Figure 3.1 Framework for Systematic Assessment on Existing Works

3.1 CBMIR SYSTEM USING SINGLE VISUAL FEATURE CONTENT

In this part, the CBMIR system has been implemented using a single visual feature (either shape or texture) along with the k-means clustering algorithm. Section 3.1.1 discusses the implementation using the shape based feature and Section 3.1.2 discusses it using the texture based feature.

3.1.1 Shape based CBMIR system

In this work, the basic shape feature is extracted using the Canny edge detection algorithm. For classification, the k-means algorithm is used. The framework model of the shape based CBMIR system is shown in

Figure 3.2. Initially, medical images are taken as input to the system, and preprocessing of the images is carried out in order to improve their suitability for further processing. In this system, two kinds of processes are carried out, namely offline and online. In the offline process, the image reference database is constructed from one dominant feature of the image, namely shape, extracted using the Canny Edge Detection (CED) algorithm. In the online process, a Graphical User Interface (GUI) is developed through which the user can interact with the system to retrieve the desired images from the image database. For the retrieval process, similarity comparison is carried out between the online user query image and the offline image reference database. After comparison, the resulting images are indexed and retrieved based on their rank.

[Figure 3.1 contents: CBMIR works are grouped into Single Feature Extraction: Shape (CED), Texture (GLCM); Dual Feature Extraction: Shape & Texture (EHD & EBT), Shape & Texture (GFD & GF); Tri Feature Extraction: Shape, Texture & Grayscale Resolution (IM, GLCM & Grayscale Resolution), Shape, Texture & Intensity (Gaussian Filter, LBP & DWT Based Intensity Extraction).]

Shape

Shape is an important and powerful feature used for image classification, indexing and retrieval. Shape information is extracted using a histogram of edge detection. In this work, the edge information in the image is obtained using Canny edge detection (Canny, J 1986). Other techniques for shape feature extraction are elementary descriptors, Fourier descriptors, template matching, quantized descriptors, etc. Canny edge detection outperforms many of the newer algorithms that have been developed (Ehsan Nadernejad et al 2008).

Page 5: CHAPTER 3 SYSTEMATIC ASSESSMENT ON EXISTING WORKSshodhganga.inflibnet.ac.in/bitstream/10603/41999/8/08_chapter3.pdf · creates a gray level comatrix by calculating how often a pixel

93

Figure 3.2 Framework model of the Shape based CBMIR system

[Figure 3.2 pipeline. Offline: Input Medical Images → Image Enhancement → Shape Extraction (CED) → Building Feature Vector → Reference Database. Online: Query Image → Extract Shape → Similarity Comparison → Indexing and Retrieval → Output Ranked Images.]

Algorithm: Detecting edges using the Canny Edge Detection algorithm.

Step 1: Smoothing: Smooth the image with a two-dimensional Gaussian. In most cases the computation of a two-dimensional Gaussian is costly, so it is approximated by two one-dimensional Gaussians.

Step 2: Finding Gradients: Take the gradient of the image, which shows the changes in intensity that indicate the presence of edges. It gives two results: the gradient in the x direction and the gradient in the y direction.

Step 3: Non-maximal suppression: Edges occur at points where the gradient is at a maximum, so the magnitude and direction of the gradient are computed at each pixel.


Step 4: Edge Threshold: The thresholding method used by Canny is hysteresis, which uses a high threshold and a low threshold.

Step 5: Thinning: Interpolation is used to find the pixels where the norm of the gradient is a local maximum.
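The first three steps can be sketched as follows (an illustrative Python/NumPy sketch under simplified assumptions, with a crude central-difference gradient standing in for derivative-of-Gaussian filters; this is not the thesis's MATLAB implementation):

```python
import numpy as np

def gaussian_kernel_1d(sigma, radius=None):
    """Separable 1-D Gaussian (Step 1 approximates the 2-D Gaussian this way)."""
    radius = radius or int(3 * sigma)
    x = np.arange(-radius, radius + 1)
    k = np.exp(-x**2 / (2 * sigma**2))
    return k / k.sum()

def smooth(image, sigma=1.0):
    """Step 1: convolve rows, then columns, with the 1-D Gaussian."""
    k = gaussian_kernel_1d(sigma)
    rows = np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 1, image)
    return np.apply_along_axis(lambda c: np.convolve(c, k, mode="same"), 0, rows)

def gradients(image):
    """Step 2: central-difference gradients in the x and y directions."""
    gy, gx = np.gradient(image.astype(float))
    return gx, gy

def gradient_magnitude_direction(gx, gy):
    """Step 3 (first half): magnitude and direction at each pixel."""
    return np.hypot(gx, gy), np.arctan2(gy, gx)

img = np.zeros((8, 8))
img[:, 4:] = 1.0                      # vertical step edge
mag, _ = gradient_magnitude_direction(*gradients(smooth(img)))
print(mag.argmax(axis=1))             # strongest response near the step, columns 3-4
```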

Figures 3.3(a) and 3.3(b) depict a brain image before and after feature extraction using the Canny Edge Detection method.


(a) Brain Image (b) Edge detected Brain Image

Figure 3.3 Sample Canny Edge Detected Brain Image

Image Classification

Image classification is one of the important steps in the image retrieval process because it saves time while searching for images in a huge database. Classification is the identification of different regions of the image, by which the retrieval efficiency of the system is improved. A commonly used classification algorithm is the k-means algorithm. In this work, the k-means algorithm is used for identifying the different regions of the images in the database. The performance of the retrieval process is improved by comparing the classified query image with the classified images in the database.


K-means is a clustering algorithm that has been widely used in various key areas like microarray datasets (Bernard Chen et al 2005), high dimensional data sets (Madjid Khalilian et al 2010) and especially in image retrieval systems (Hong Liu and Xiaohong Yu 2009). The k-means algorithm takes the input parameter k and partitions a set of n objects into k clusters so that the resulting intra-cluster similarity is high but the inter-cluster similarity is low. Given a set of observations (x_1, x_2, ..., x_n), where each observation is a d-dimensional real vector, k-means clustering aims to partition the n observations into k sets (k < n), S = {S_1, S_2, ..., S_k}, so as to minimize the within-cluster sum of squares:

\arg\min_{S} \sum_{i=1}^{k} \sum_{x \in S_i} \left\| x - \mu_i \right\|^2        (3.2)

where \mu_i is the mean of the points in S_i.

Algorithm: K-means clustering algorithm

The k-means algorithm for partitioning is based on the mean value of the objects in the cluster.

Input: The number of clusters k and a database containing n objects.

Output: A set of k clusters that minimizes the squared-error criterion.

Method:

Step 1: Enter the number of clusters k.

Step 2: Randomly guess k cluster center locations.

Step 3: Each data point finds the center it is closest to.

Step 4: Each center finds the centroid of the points it owns.

Step 5: Each center now moves to its new centroid.

Step 6: Repeat until terminated.
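The steps above can be sketched as follows (a minimal Python version on toy 2-D points; it illustrates the loop of steps 2 to 6 and is not the thesis's MATLAB code):

```python
import random

def kmeans(points, k, iters=20, seed=0):
    """Plain k-means on tuples of coordinates (illustrative sketch)."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)                      # Step 2: random centers
    clusters = [[] for _ in range(k)]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:                                 # Step 3: nearest center
            i = min(range(k), key=lambda c: sum((a - b) ** 2
                                                for a, b in zip(p, centers[c])))
            clusters[i].append(p)
        for i, members in enumerate(clusters):           # Steps 4-5: recompute centers
            if members:
                centers[i] = tuple(sum(xs) / len(members)
                                   for xs in zip(*members))
    return centers, clusters

pts = [(0.0, 0.1), (0.2, 0.0), (9.8, 10.0), (10.1, 9.9)]
centers, clusters = kmeans(pts, k=2)
print(sorted(len(c) for c in clusters))  # two clusters of two points each
```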

Result Assessment

The performance was tested using the reference database and the results were analyzed and tabulated in Table 3.1.

Table 3.1 Shape based CBMIR system Performance Evaluation

Query Image | Precision (%) | Recall (%) | Precision/Recall | AVG = (P+R)/2 | Time Complexity (s)
Image1      |         50.00 |      37.50 |             1.33 |         43.75 |                 182
Image2      |         53.33 |      34.00 |             1.57 |         43.67 |                 179
Image3      |         56.95 |      66.66 |             0.85 |         61.81 |                 233
Image4      |         58.33 |      36.84 |             1.58 |         47.59 |                 183
Image5      |         68.26 |      51.42 |             1.33 |         59.84 |                 248

Comment on Result: The shape based CBMIR system is able to achieve precision (accuracy) in the range of 50-70% and recall (sensitivity) in the range of 30-70% for 5 different queries. The time complexity is in the range of 170-250 seconds.


Figure 3.4 (a, b, c, d, e and f) shows some of the sample images of the work at various steps of the Canny Edge Detection Algorithm.

CANNY EDGE DETECTION

Step 1: SMOOTHING

X Derivative:

Figure 3.4 (a) Sample Image Smoothing for X Derivative

Y Derivative:

Figure 3.4 (b) Sample Image Smoothing for Y Derivative


Step 2: FINDING GRADIENT

Figure 3.4 (c) Sample Image for Finding Gradient

Step 3: NON MAXIMUM SUPPRESSION

Figure 3.4 (d) Sample Image for Non Maximum Suppression

Step 4: THRESHOLDING AND HYSTERESIS

Figure 3.4 (e) Sample Image for Thresholding and Hysteresis


Step 5: EDGE DETECTION

Figure 3.4 (f) Sample Edge Detected Image using Canny Edge Detection Algorithm

3.1.2 Texture based CBMIR system

In this work, one of the visual features, viz. texture, has been considered. The main feature of this work is the utilization of the GLCM for extracting the texture pattern of the image and of the k-means clustering algorithm for image classification, in order to improve the retrieval efficiency.

The framework model of the texture based CBMIR system is shown in Figure 3.5, in which medical images (such as X-ray, MRI scan and CT scan) are given as input to the system. The given input images are segmented using the method described in (Ozden and Polat 2005). In this work, only the texture regions of the image are considered for feature extraction. For each image in the image database, a feature vector value is computed and stored in the feature database. When a query image is submitted by the user, the same texture feature extraction and feature vector construction process is applied to the query image to obtain its feature vector value. For similarity comparison between the query image and the database images, the Euclidean distance method is used. The database images with the closest Euclidean distance values are ranked and retrieved.

Texture

Texture is a natural property of surfaces and provides visual patterns of the image. It consists of repeated pixel information that carries vital information regarding the structural arrangement of the surface (for example clouds, leaves, bricks). It also describes the relationship between the surface and the external environment.

Figure 3.5 Framework Model of the texture based CBMIR system

[Figure 3.5 pipeline: Database Images and Query Image → Texture Feature Extraction → Feature Database and Query Features → Similarity Comparison → Retrieved Images and Ranking.]

In this work, the extraction of the texture feature is performed by computing the Gray Level Co-occurrence Matrix (GLCM). The graycomatrix function is used to create a GLCM: it calculates how often a pixel with the intensity (gray level) value i occurs in a specific spatial relationship to a pixel with the value j. The spatial relationship is defined as the pixel of interest and the pixel to its immediate right (horizontally adjacent) (Haralick et al 1973). The outcome of the GLCM for each element (i, j) is computed by counting how often a pixel with value i occurs in the particular spatial relationship to a pixel with value j in the input image (Partio et al 2002, Park et al 2004). GLCM features are extracted using one distance d = {1} and four directions.

After computation of the gray level co-occurrence matrix, a number of statistical texture measures based on the GLCM are derived, as suggested by Haralick. For generating the texture features, a second-order method is used in which the measures are derived from the co-occurrence probabilities. These probabilities represent the conditional joint probabilities of all pairwise combinations of gray levels in the spatial window of interest, given two parameters: inter-pixel distance (\delta) and orientation (\theta), as in Eq. 3.3:

Pr(x) = \{ C_{ij} \mid (\delta, \theta) \}        (3.3)

where C_{ij} is the co-occurrence probability between gray levels i and j, defined as in Eq. 3.4:

C_{ij} = \frac{P_{ij}}{\sum_{i,j=1}^{G} P_{ij}}        (3.4)

where P_{ij} represents the number of occurrences of gray levels i and j within the window, and G is the quantized number of gray levels.

The sum in the denominator thus represents the total number of gray level pairs (i, j) within the window.

Graycomatrix computes the GLCM from a scaled version of the image. By default, if the input is a binary image, graycomatrix scales it to two gray levels; if it is an intensity image, graycomatrix scales it to eight gray levels. A texture is distinguished by 14 statistical measurement values suggested by (Haralick et al 1973). The following formulas, shown in Eq. 3.5-3.8, are used to calculate four of these features (Yin et al 2008).

\text{Energy} = \sum_{i,j} P(i, j)^2        (3.5)

\text{Entropy} = -\sum_{i,j} P(i, j) \log P(i, j)        (3.6)

\text{Correlation} = \sum_{i,j} \frac{(i - \mu_i)(j - \mu_j)\, P(i, j)}{\sigma_i \sigma_j}        (3.7)

\text{Homogeneity} = \sum_{i,j} \frac{P(i, j)}{1 + |i - j|}        (3.8)
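Equations 3.5 to 3.8 can be sketched numerically as follows (an illustrative Python/NumPy version; the thesis itself uses MATLAB's graycomatrix, and the 4x4 test image here is an assumed example):

```python
import numpy as np

def glcm_horizontal(img, levels):
    """Co-occurrence counts for 'pixel and its immediate right neighbour' (d = 1)."""
    glcm = np.zeros((levels, levels))
    for i, j in zip(img[:, :-1].ravel(), img[:, 1:].ravel()):
        glcm[i, j] += 1
    return glcm / glcm.sum()            # normalise counts to probabilities P(i, j)

def haralick_features(P):
    i, j = np.indices(P.shape)
    energy = (P ** 2).sum()                                  # Eq. 3.5
    entropy = -(P[P > 0] * np.log(P[P > 0])).sum()           # Eq. 3.6
    mu_i, mu_j = (i * P).sum(), (j * P).sum()
    si = np.sqrt(((i - mu_i) ** 2 * P).sum())
    sj = np.sqrt(((j - mu_j) ** 2 * P).sum())
    corr = ((i - mu_i) * (j - mu_j) * P / (si * sj)).sum()   # Eq. 3.7
    homog = (P / (1 + abs(i - j))).sum()                     # Eq. 3.8
    return energy, entropy, corr, homog

img = np.array([[0, 0, 1, 1],
                [0, 0, 1, 1],
                [0, 2, 2, 2],
                [2, 2, 3, 3]])
P = glcm_horizontal(img, levels=4)
print(haralick_features(P))
```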

Algorithm: Calculating the GLCM measures for each pixel

1. Read the input image.

2. Convert the data type to double and zero-pad the image.

3. Extract a 3×3 window from the input image and compute the co-occurrence texture measure.

4. Estimate the texture parameters for the obtained texture image.

5. Repeat steps 3 and 4, moving the window until the end of the image.

6. Display the various texture parameters after normalizing them.

Classification: Classification is a technique to detect the dissimilar texture regions of the image based on its features. It can be used to cluster the feature sets of the image that characterize different regions. A frequently used clustering algorithm is the k-means algorithm. In this work, the k-means clustering algorithm is used for classifying the texture regions of the image, so that the different regions of the texture image are identified in order to increase the performance of the retrieval by comparing the classified texture image with the user query image.

K-means clustering: K-means clustering is a simple algorithm for clustering the texture regions of an image. For K clusters {C_1, C_2, ..., C_K}, each with n_k patterns, it aims to find the cluster centers m_k that minimize the cost function E shown in Eqs. 3.9 and 3.10:

m_k = \frac{1}{n_k} \sum_{x \in C_k} x        (3.9)

E = \sum_{k=1}^{K} \sum_{x \in C_k} \left\| x - m_k \right\|^2        (3.10)

The initial cluster midpoints are selected randomly and the algorithm is applied repeatedly until a steady state is reached.

Algorithm: K-means clustering

1. Initialize cluster centers randomly in the texture image.

2. For all the pixels in the image, do the following:

   a) Compute the Euclidean distance of the feature vector from each cluster center.

   b) Assign the pixel to the cluster whose center yields the minimum distance from the feature vector.

3. Update the cluster centers by computing the mean of the feature vectors of the pixels belonging to each cluster.

4. Between two consecutive updates, if the changes in the cluster centers are less than a specified value, then stop; else go to step 2.

Result Assessment

The performance is tested using the reference database and the results are analyzed and tabulated in Table 3.2.

Table 3.2 Texture based CBMIR system Performance Evaluation

Query Image | Precision (%) | Recall (%) | Precision/Recall | AVG = (P+R)/2 | Time Complexity (s)
Image1      |         53.33 |      51.42 |             1.04 |         52.38 |                 224
Image2      |         55.71 |      45.00 |             1.24 |         50.36 |                 187
Image3      |         56.00 |      60.00 |             0.93 |         58.00 |                 180
Image4      |         57.00 |      56.66 |             1.01 |         56.83 |                 210
Image5      |         65.71 |      48.94 |             1.34 |         57.33 |                 182

Comment on Result

The texture based CBMIR system is able to achieve precision (accuracy) in the range of 50-70% and recall (sensitivity) in the range of 40-60% for 5 different queries. The time complexity is in the range of 180-225 seconds. Compared to the shape based CBMIR system, the texture based CBMIR system seems to give better recall values and lower time complexity for the same five query images.

The following Figure 3.6 shows some of the sample images used in this work for texture feature extraction.

Figure 3.6 Sample images of the work for texture feature extraction

3.2 CBMIR SYSTEM USING DUAL VISUAL FEATURE CONTENT

In this part, the CBMIR system has been implemented using dual visual features, i.e. the combined form of both shape and texture features. Sections 3.2.1 and 3.2.2 discuss the implementations using combined shape and texture based features.

Page 18: CHAPTER 3 SYSTEMATIC ASSESSMENT ON EXISTING WORKSshodhganga.inflibnet.ac.in/bitstream/10603/41999/8/08_chapter3.pdf · creates a gray level comatrix by calculating how often a pixel

106

[Figure 3.7 pipeline. Offline: Image Database / Select Image → Shape Feature Extraction (EHD) and Texture Feature Extraction (EBT) → Building Single Feature Vector → Feature Database. Online: Query Image → Extract Texture & Shape Features → Similarity Comparison → Indexing & Retrieval → Output Ranked Image.]

3.2.1 CBMIR using Edge Histogram Descriptor (EHD) and Edge Based Texture (EBT)

The primary step of this work is to extract shape using the edge histogram descriptor and texture using edge based texture in an image. In the next step, both image features are combined and used to build a single image feature vector. This process is performed for each of the images stored in the database to form a set of feature values. The image selected by the user (i.e. the query image) is processed in the same way, and the above features are extracted to form a combined feature value. Further, each feature vector value in the feature database is compared with the feature value of the query image. The most similar images are then ranked and displayed based on their similarity factor. Figure 3.7 shows the framework model of the CBMIR system using EHD and EBT.

Figure 3.7 Framework model of CBMIR system using EHD and EBT


Texture feature extraction

The texture is extracted using an edge based texture feature extraction technique, obtained by defining a 3×3 matrix mask. This approach provides the necessary edge information and also captures the regions that do not have a clearly defined edge.

Edge based texture feature extraction

Texture features provide a key feature for retrieval systems. The edge based texture feature extraction method gives the edge information along with the regions. A set of six 3×3 matrices are used as edge filters. Each of these six filters forms a mask that is moved from pixel to pixel over the image in a process called convolution. The two-dimensional convolution of an image a and a mask b, over output indices n_1 and n_2, is given as

c(n_1, n_2) = \sum_{k_1} \sum_{k_2} a(k_1, k_2)\, b(n_1 - k_1, n_2 - k_2)        (3.11)

The resulting image of the two-dimensional convolution (Eq. 3.11) with each of the filter masks given in Figure 3.8 is then checked against a specified threshold value: pixel values below the threshold are removed. This gives a clearer texture pattern of the image. The same process is carried out with the other masks and the resulting images are consolidated. The different edge filter masks and a sample output of the edge based texture feature extraction are shown in Figures 3.8 and 3.9 respectively.

Figure 3.8 Different edge filter masks
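The convolve-and-threshold step can be sketched as follows (illustrative Python/NumPy; the mask below is a hypothetical stand-in, since the thesis specifies its six masks only in Figure 3.8):

```python
import numpy as np

def convolve2d(a, b):
    """Direct 2-D convolution of image a with a 3x3 mask b (Eq. 3.11, valid region)."""
    h, w = a.shape
    out = np.zeros((h - 2, w - 2))
    bf = np.flipud(np.fliplr(b))          # convolution flips the mask
    for i in range(h - 2):
        for j in range(w - 2):
            out[i, j] = (a[i:i + 3, j:j + 3] * bf).sum()
    return out

# Hypothetical example mask (a horizontal edge filter); the thesis uses six such masks.
mask = np.array([[-1, -1, -1],
                 [ 0,  0,  0],
                 [ 1,  1,  1]])

img = np.zeros((6, 6))
img[3:, :] = 1.0                                            # horizontal step edge
response = convolve2d(img, mask)
texture = np.where(np.abs(response) < 1.5, 0.0, response)   # suppress weak responses
print(texture)
```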


(a) Input image (b) Texture extracted image

Figure 3.9 Edge based texture Feature Extraction

Shape feature extraction

The shape feature is extracted using the edge histogram descriptor, which helps in capturing the spatial distribution of edges in the image. The edge histogram descriptor represents the local edge distribution by subdividing the image space into 4 × 4 sub-images and representing each as a histogram.

Edge histogram descriptor

An edge histogram descriptor is used to extract the shape of an image. The edge histogram descriptor captures the spatial distribution of five types of edges, namely four directional edges and one non-directional edge. In order to improve the matching performance, the edge distribution information of the whole image space and the vertical and horizontal semi-global edge distributions are used. This helps to retrieve images with similar semantic meaning. The edge histogram descriptor represents the local edge distribution by subdividing the image space into 4 × 4 sub-images and representing each as a histogram. The edge histogram types are shown in Figure 3.10 (Nandagopalan et al 2008 & Rajshree et al 2010).


(a) Vertical (b) Horizontal (c) 45 degree (d) 135 degree (e) Non-directional

Figure 3.10 Edge histograms types

Algorithm: The edge histogram descriptor is calculated as follows.

1. Compute the total number of bins. Here a total of 5 × 16 = 80 histogram bins are required.

2. Find all edges in the image using the filter coefficients, moving them pixel by pixel over the image. The filter coefficients for edge detection are shown in Figure 3.11.

Figure 3.11 Filter coefficients for edge detection

3. Place each edge in the bin corresponding to its orientation.

4. Finally, normalize the histogram by dividing the value in each bin by the total number of edges.
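The 80-bin construction can be sketched as follows (illustrative Python; a simple gradient-direction rule stands in for the Figure 3.11 filter coefficients, which are not reproduced here, and the non-directional type is never produced by this crude rule):

```python
import numpy as np

EDGE_TYPES = 5          # vertical, horizontal, 45 degree, 135 degree, non-directional
GRID = 4                # 4 x 4 sub-images -> 5 * 16 = 80 bins

def edge_type(gx, gy, thresh=0.1):
    """Crude stand-in for the five edge filters: quantize the gradient direction.
    Type 4 (non-directional) is not produced by this simplified rule."""
    if np.hypot(gx, gy) < thresh:
        return None                       # no edge at this pixel
    angle = np.degrees(np.arctan2(gy, gx)) % 180
    if angle < 22.5 or angle >= 157.5:
        return 0                          # vertical edge (horizontal gradient)
    if 67.5 <= angle < 112.5:
        return 1                          # horizontal edge
    if 22.5 <= angle < 67.5:
        return 2                          # 45-degree edge
    return 3                              # 135-degree edge

def edge_histogram(img):
    gy, gx = np.gradient(img.astype(float))
    h, w = img.shape
    hist = np.zeros((GRID, GRID, EDGE_TYPES))
    for r in range(h):
        for c in range(w):
            t = edge_type(gx[r, c], gy[r, c])
            if t is not None:
                hist[r * GRID // h, c * GRID // w, t] += 1
    total = hist.sum()
    return (hist / total).ravel() if total else hist.ravel()

img = np.zeros((16, 16))
img[:, 8:] = 1.0                      # one vertical edge through the image
desc = edge_histogram(img)
print(desc.shape, desc.sum())
```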

Figure 3.12 shows a sample shape feature extraction and edge histogram for the input image.


(a) Input Image (b) Shape extracted image

(c) Edge Histogram

Figure 3.12 Shape Feature Extraction

In Figure 3.12 (c), edge histograms are detected until the intensity value is null, i.e. when there is no intensity value, the edges of that particular descriptor have been completely extracted.

Combined feature vector construction

After extracting the shape and texture features, they are converted into a common feature vector. This feature vector is constructed by combining the texture feature values and the histogram values. The texture feature values are averaged and consolidated into a single matrix value. The normalized histogram obtained from the edge histogram descriptor forms the other matrix value. Thus the texture feature values and the histogram values form a pair in the feature vector, i.e. Feature vector = [Texture_extracted_image_value, Edge_histogram_value]. The resulting feature vector is used for similarity comparison, which provides a better image retrieval process.
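A minimal sketch of this pairing (illustrative Python; the function and field names here are hypothetical, not the thesis's):

```python
import numpy as np

def build_feature_vector(texture_img, edge_hist):
    """Pair the averaged texture response with the normalized edge histogram."""
    texture_value = float(np.mean(texture_img))        # consolidated single value
    hist = np.asarray(edge_hist, dtype=float)
    hist = hist / hist.sum() if hist.sum() else hist   # normalized histogram
    return np.concatenate(([texture_value], hist))

fv = build_feature_vector(texture_img=np.ones((4, 4)) * 0.5,
                          edge_hist=[2, 1, 1, 0, 0])
print(fv)  # texture mean followed by the normalized histogram
```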

Result Assessment

The performance is tested using the reference database and the results are analyzed and tabulated in Table 3.3.

Table 3.3 CBMIR using Edge Histogram Descriptor and Edge Based Texture system Performance Evaluation

Query Image | Precision (%) | Recall (%) | Precision/Recall | AVG = (P+R)/2 | Time Complexity (s)
Image1      |         87.50 |      84.00 |             1.04 |         85.75 |                 355
Image2      |         89.00 |      83.33 |             1.07 |         86.17 |                 303
Image3      |         93.33 |      88.57 |             1.05 |         90.95 |                 335
Image4      |         93.44 |      89.47 |             1.04 |         91.46 |                 342
Image5      |         91.73 |      90.00 |             1.02 |         90.87 |                 300

Comment on Result

The system is able to achieve precision (accuracy) in the range of 85-95% and recall (sensitivity) in the range of 80-90% for the same set of queries. The time complexity is in the range of 300-360 seconds. Compared to the single feature based CBMIR systems, this system seems to give about 50% better precision and recall values, but the time complexity also increases by a similar amount.

Figure 3.13 shows sample output user interface screens of this work.

(a) Loading Screen (b) Selecting the Image from the Database (c) Input query image (d) Output similar images

Figure 3.13 Sample user interface screen images of CBMIR using EHD and EBT


3.2.2 CBMIR using Generic Fourier Descriptor (GFD) and Gabor Filters (GF)

This method consists of three stages. First, the input medical image is processed for shape feature extraction using the Generic Fourier Descriptor (GFD) method. In the next stage, the texture features are extracted using the Gabor Filters (GF) method. The final stage integrates the above two features to obtain the feature vector values. These feature vector values are then used to perform similarity comparisons between the query image and the database images.

In order to perform this process, the shape of the query image is extracted using the effective shape descriptor called the Generic Fourier Descriptor. GFD generally enhances the performance of the system through major advantages such as retrieval accuracy, low computational time and robust retrieval performance. The next stage is texture feature extraction, which is carried out using the efficient Gabor filter method. This algorithm outperforms most other methods in extracting texture features and also provides a better way to retrieve the images. The retrieval process is carried out by extracting both the shape and texture features of the query image; the result is used to construct the feature vector value, which is then compared with the reference database to retrieve similar images using the Euclidean distance method. Figure 3.14 shows the framework model of CBMIR using GFD and GF.


[Block diagram: images selected from the image database undergo shape feature extraction (GFD) and texture feature extraction (GF); the two features are built into a single feature vector stored in the feature database; the texture and shape features of the query image are extracted and used for similarity comparison, followed by indexing and retrieval to produce the output ranked images]

Figure 3.14 The framework model of CBMIR using GFD and GF

Texture Feature Extraction

Texture Feature description is one of the key features of an image

content description for image retrieval. Textures are modeled as a pattern

dominated by a narrow band of spatial frequencies and orientations. The

texture feature of an image is extracted to limit the region to be processed

using feature extraction. The texture feature extraction is carried out using a

texture analysis technique called Gabor Filters (GF). The Gabor filters are a

group of wavelets, with each wavelet capturing energy at a specific frequency

and a specific direction. Gabor filter is very useful for texture analysis

because of its tunable property of frequency and orientation. Gabor filters

have been used in many applications such as texture segmentation, target


detection, fractal dimension management, document analysis, edge detection, retina identification, image coding and image representation (Levesque Vincent 2000).

Texture feature extraction using Gabor Filter

Gabor filter takes the form of a Gaussian modulated complex

sinusoid in the spatial domain. A bank of Gabor filters are used to extract

local image features. Typically, an input image is convolved with a 2-D

Gabor function to obtain a Gabor feature image. Gabor filters have the ability

to model the frequency and orientation sensitivity characteristic of the human

visual system. Gabor filters have desirable properties for picture analysis and

feature extraction. They are selective in space, spatial frequency and

orientation, achieving the theoretical limit for conjoint resolution in the spatial

and spatial frequency domain. (Adams Wai-Kin Kong et al 2003, Veni et al

2010).

One Dimensional Gabor Filter

The one-dimensional Gabor Filter consists of three parts and they

are the cosine, Gaussian and constant parts. The cosine part depends on distance and frequency (Grigorescu Simona et al 2002), the Gaussian part depends on distance and sigma, and the constant part normalizes the Gaussian so that its integral equals 1.0. The magnitude of the 1-D Gabor filter output is used as a feature to detect boundaries in texture-like images.

The major advantage of the 1-D Gabor filter is that feature extraction and edge extraction are applied along orthogonal directions. The 1-D Gabor filter

has the following form,

f(x, ω, σ) = (1/(√(2π) σ)) exp(−x^2/(2σ^2)) cos(ωx)  (3.12)

where x is the spatial coordinate, ω is the frequency of the sinusoid and σ is the standard deviation of the Gaussian envelope.
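The three-part structure of Equation (3.12) can be sketched numerically as follows. The original work used MATLAB 7.2.0, so this Python/NumPy version, including its function name and parameter values, is only an illustrative approximation:

```python
import numpy as np

def gabor_1d(x, omega, sigma):
    """1-D Gabor of Eq. (3.12): constant part * Gaussian part * cosine part."""
    const = 1.0 / (np.sqrt(2.0 * np.pi) * sigma)   # constant part (normalisation)
    gauss = np.exp(-x**2 / (2.0 * sigma**2))       # Gaussian part (distance and sigma)
    cosine = np.cos(omega * x)                     # cosine part (distance and frequency)
    return const * gauss * cosine

x = np.linspace(-4, 4, 81)
g = gabor_1d(x, omega=np.pi, sigma=1.0)
print(np.argmax(g))  # -> 40 (the filter peaks at x = 0, the centre sample)
```

Because both the envelope and the cosine are maximal at the origin, the peak of the response sits at the centre of the support.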

Two Dimensional Gabor Filter

The Gabor filter is a multi-scale, multi-resolution filter that has selectivity for orientation and spectral bandwidth. The Gabor function is a band-pass filter that can be tuned to a narrow set of frequencies anywhere in the frequency

domain (Anil Jain et al 2001). The Gabor function is a complex sinusoid modulated by a rotated Gaussian. It provides accurate time-frequency localization governed by the uncertainty principle: it attains the lower bound of joint time-spectrum resolution. A circular 2-D Gabor filter in the spatial domain has the following general form

G(x, y, σ, u, θ) = (1/(2πσ^2)) exp(−(x^2 + y^2)/(2σ^2)) exp(2πi(ux cos θ + uy sin θ))  (3.13)

where i = √(−1), u is the frequency of the sinusoidal wave, θ is the orientation of the function and σ is the standard deviation of the Gaussian envelope. Figure 3.15 shows the sample texture extracted output image.

(a) Input Image (b) Output Image

Figure 3.15 Gabor Filter based texture feature extraction
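The bank-of-filters idea described above can be sketched as follows. This NumPy version is only an illustrative approximation of the MATLAB implementation: the kernel follows Equation (3.13), and taking the mean and standard deviation of each response magnitude is a common convention assumed here, not a detail stated in the text:

```python
import numpy as np

def gabor_kernel(size, u, theta, sigma):
    """Circular 2-D Gabor kernel of Eq. (3.13): a Gaussian envelope times a
    rotated complex sinusoid of frequency u and orientation theta."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    gauss = np.exp(-(x**2 + y**2) / (2.0 * sigma**2)) / (2.0 * np.pi * sigma**2)
    carrier = np.exp(2j * np.pi * u * (x * np.cos(theta) + y * np.sin(theta)))
    return gauss * carrier

def gabor_features(image, freqs=(0.1, 0.2), thetas=(0.0, np.pi/4, np.pi/2, 3*np.pi/4)):
    """Filter the image with a small Gabor bank; keep mean and standard
    deviation of each response magnitude as texture features."""
    feats = []
    for u in freqs:
        for theta in thetas:
            k = gabor_kernel(15, u, theta, sigma=3.0)
            # circular convolution via the FFT, a common shortcut for filtering
            resp = np.abs(np.fft.ifft2(np.fft.fft2(image) * np.fft.fft2(k, s=image.shape)))
            feats += [resp.mean(), resp.std()]
    return np.array(feats)

img = np.random.default_rng(0).random((32, 32))
print(gabor_features(img).shape)  # -> (16,)
```

With 2 frequencies and 4 orientations, each wavelet captures energy at one (frequency, direction) pair, giving a 16-element texture feature vector.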


Shape Feature Extraction

The Shape is considered as one of the key features of image content

description and also an important low level image feature to retrieve relevant

images from the database. The shape feature is extracted using an effective

shape descriptor called Generic Fourier Descriptor (GFD). This shape

descriptor is obtained by applying 2-D Fourier Transform on a polar shape

image (Dengsheng Zhang et al 2002). The GFD method outperforms common

contour-based and region-based shape descriptors (Arun et al 2009). The

shape descriptor is distinguished from other descriptors by means of two

characteristics and they are stability (stable performance in different

applications) and clarity (clear physical meaning). Retrieval accuracy,

compact features, general application, low computation complexity, robust

retrieval performance and hierarchical coarse to fine representation are some of the advantages highlighted in the GFD shape descriptor.

Shape feature extraction using Generic Fourier Descriptor

The Feature extraction process is mainly performed to extract the

shape of image. In this process, initially the One Dimensional Fourier

Descriptor (1-D FD) has been applied to the query image to obtain the

knowledge of the shape boundary information. However, the One Dimensional Fourier Descriptor cannot capture the shape interior content, which is important for shape discrimination. To overcome this drawback of the 1-D FD, the Two Dimensional Fourier Descriptor (2-D FD) was introduced. In the 2-D FD, the low-frequency terms capture global shape features, while the higher-frequency terms capture finer details of the shape (Yang Mingqiang et al 2008, Ekombo P. Lionel Evina et al 2009).

One Dimensional Fourier Descriptor

The One Dimensional Fourier Descriptor has been widely used in

various shape representation applications and the concept behind the 1-D FD


is to obtain the shape boundary of an image without considering the interior

content (Goyal Anjali et al 2010). The shape signature is the one-dimensional

function which is derived from shape boundary coordinates. It usually

captures the perceptual feature of the shape. A typical shape signature function is the centroid distance, which is given by the distance of boundary

points from the centroid (xc, yc) of the shape,

r(t) = [(x(t) − x_c)^2 + (y(t) − y_c)^2]^(1/2)  (3.14)

where x_c = (1/N) Σ x(t) and y_c = (1/N) Σ y(t), the sums running over t = 0, 1, ..., N−1.

Generally, the 1-D FD is obtained through the Fourier transform (FT) of a shape signature function derived from the shape boundary coordinates {(x(t), y(t)), t = 0, 1, ..., N−1}.
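The centroid-distance signature of Equation (3.14) can be sketched as follows; this NumPy version is illustrative only, with a unit circle as the test shape because its signature should be constant:

```python
import numpy as np

def centroid_distance(xs, ys):
    """Centroid-distance signature r(t) of Eq. (3.14) for boundary
    points (x(t), y(t)): distance of each point to the shape centroid."""
    xc, yc = xs.mean(), ys.mean()          # centroid (x_c, y_c)
    return np.sqrt((xs - xc)**2 + (ys - yc)**2)

# Boundary of a unit circle sampled at N points: every r(t) equals the radius.
t = np.linspace(0, 2*np.pi, 64, endpoint=False)
r = centroid_distance(np.cos(t), np.sin(t))
print(np.allclose(r, 1.0))  # -> True
```

Because the signature depends only on distances to the centroid, it is invariant to translation, which is one reason it captures the perceptual feature of the shape.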

Two Dimensional Fourier Descriptor

The region-based Fourier descriptor is commonly referred to as

generic Fourier descriptor which is used for several applications (Thai V.

Hoang et al 2010). The Generic Fourier descriptor is derived by applying a

modified polar Fourier transform (MPFT) on shape image. In order to apply

MPFT, the polar shape image is treated as a normal rectangular image. The

steps are

1. The normalized image is rotated counter clockwise.

2. The pixel values along positive x-direction starting from the

image center are copied and pasted into a new matrix as row

elements.


3. The steps 1 and 2 are repeated until the image is rotated by

360°.

The Fourier transform is acquired by applying a discrete 2D Fourier

transform on the shape image.

PF(ρ, φ) = Σ_r Σ_i f(r, θ_i) exp[j2π((r/R)ρ + (2πi/T)φ)]  (3.15)

where 0 ≤ r < R and θ_i = i(2π/T) with 0 ≤ i < T (R and T are the radial and angular resolutions). The dimension of the descriptor is determined by the number of radial frequencies selected and the number of angular frequencies selected.

Figure 3.16 shows the sample shape extracted output image.

(a) Input Image (b) Shape extracted image

Figure 3.16 Generic Fourier Descriptor based Shape feature extraction
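The rotate-and-copy construction and the 2-D transform of Equation (3.15) can be sketched as follows. This NumPy version resamples the polar grid directly rather than physically rotating the image, normalises the spectrum by its DC term (a common convention), and all names and resolutions are illustrative assumptions:

```python
import numpy as np

def polar_raster(image, R=16, T=32):
    """Resample a square image onto a (T x R) polar grid about its centre:
    each row is one radial line, mimicking the rotate-and-copy steps."""
    cy, cx = (np.array(image.shape) - 1) / 2.0
    rows = []
    for ti in range(T):
        theta = 2.0 * np.pi * ti / T
        rr = np.arange(R)
        ys = np.clip(np.round(cy + rr * np.sin(theta)).astype(int), 0, image.shape[0] - 1)
        xs = np.clip(np.round(cx + rr * np.cos(theta)).astype(int), 0, image.shape[1] - 1)
        rows.append(image[ys, xs])
    return np.array(rows)

def gfd(image, R=16, T=32):
    """Generic Fourier descriptor sketch: treat the polar raster as a normal
    rectangular image, apply a 2-D FFT, normalise magnitudes by the DC term."""
    pf = np.abs(np.fft.fft2(polar_raster(image, R, T)))
    return (pf / pf[0, 0]).ravel()

img = np.zeros((33, 33))
img[10:23, 10:23] = 1.0          # a centred square shape
d = gfd(img)
print(d.shape, d[0])             # -> (512,) 1.0
```

Normalising by the DC coefficient makes the leading descriptor entry exactly 1, so the remaining entries describe shape rather than overall area.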

Combined feature vector construction

After extracting the shape and texture features, they are converted

into a common feature vector. This feature vector is constructed by combining

the texture feature values and the shape descriptor values. The Gabor filter based texture feature values are averaged and consolidated into a single

matrix value. The normalized values obtained from the Generic Fourier descriptor form the other matrix value. Thus the texture feature values and the shape feature values form a pair in the feature vector, i.e. Feature vector = [Texture_extracted_image_value, Generic_Fourier_Descriptor_value]. The resulting

feature is considered for similarity comparison, which provides a better image

retrieval process.
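The pairing and comparison described above can be sketched as follows; this is a simplified illustration (the concatenation and the Euclidean similarity comparison are as described, but the function names and toy values are assumptions):

```python
import numpy as np

def build_feature_vector(texture_feats, shape_feats):
    """Pair the averaged Gabor texture value with the normalised GFD values
    into one feature vector (simplified concatenation)."""
    return np.concatenate([[np.mean(texture_feats)], np.asarray(shape_feats)])

def euclidean(a, b):
    """Similarity comparison: Euclidean distance between two feature vectors."""
    return float(np.sqrt(np.sum((np.asarray(a) - np.asarray(b))**2)))

query = build_feature_vector([0.2, 0.4], [1.0, 0.5])
db_entry = build_feature_vector([0.2, 0.4], [1.0, 0.5])
print(euclidean(query, db_entry))  # -> 0.0 (identical features, best match)
```

Database images would be ranked by ascending distance, so a distance of zero corresponds to a perfect match.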

Result Assessment

The performance is tested using the reference database and the

results are analyzed and tabulated as shown in Table 3.4.

Table 3.4 CBMIR using Generic Fourier Descriptor (GFD) and Gabor

Filters (GF) system Performance Evaluation

Query Image   Precision (%)   Recall (%)   Precision/Recall   AVG = (P+R)/2   Time Complexity (s)
Image1        85.00           86.84        0.98               85.92           295
Image2        90.66           88.00        1.03               89.33           330
Image3        92.85           86.66        1.07               89.76           345
Image4        93.96           90.00        1.04               91.98           360
Image5        90.29           89.00        1.01               89.65           298
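The derived columns of Table 3.4 follow directly from the precision and recall values; a quick illustrative check of the first two rows:

```python
def derived_columns(p, r):
    """Precision/Recall ratio and AVG = (P + R) / 2, both rounded to two
    decimals as in Table 3.4."""
    return round(p / r, 2), round((p + r) / 2, 2)

# First row of Table 3.4: P = 85.00, R = 86.84
print(derived_columns(85.00, 86.84))  # -> (0.98, 85.92)
```

The same computation reproduces every row of the table, e.g. the second row gives (1.03, 89.33).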

Comment on Result

The system is able to achieve precision (accuracy) in the range of

85-95% and recall (sensitivity) in the range of 85-90% for the same set of

queries. The time complexity is in the range of 290-360 seconds. Compared to the single feature based CBMIR system, this system provides about 50% better precision and recall values, but the time complexity also increases by a comparable amount.

Figure 3.17 shows sample user interface screen images of CBMIR

using GFD and GF.

(a) Input Screen (b) Retrieval Screen

Figure 3.17 Sample user interface screen Images of CBMIR using GFD

and GF

3.3 CBMIR SYSTEM USING TRI VISUAL FEATURE

CONTENT

In this work, the CBMIR has been implemented using tri visual

features such as combined form of shape, texture and intensity of the image.

3.3.1 CBMIR using Gaussian Filter, Local Binary Pattern and

Discrete Wavelet Transform for medical images

In this work, the features extracted are: shape using a Gaussian Filter, texture using the Local Binary Pattern (LBP), and intensity using the Discrete Wavelet Transform (DWT), applied in the spatial domain.


Images with high feature similarities to the query image may be

different from the query in terms of semantics. This work proposes a step

named image cropping that is performed only on the affected part or the

specified part of the image to tackle the semantic gap problem. Since feature

extraction is done only on the cropped part, it makes the system time efficient.

More accurate results are produced by extracting the features such as shape,

intensity and texture in the spatial domain. The algorithm for this system is as follows.

Algorithm

The major tasks involved in this retrieval system are listed below.

Step 1 : The Query image is obtained from the user.

Step 2 : Part of the image to be cropped is specified by the user.

Step 3 : The part specified is cropped.

Step 4 : Extracting the features from the cropped segment.

Step 5 : Determining image similarity and retrieval.
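Steps 2 and 3 above can be sketched as follows; this NumPy version is an illustrative stand-in for the MATLAB cropping step, and the rectangle parameters are assumptions standing in for the user's selection:

```python
import numpy as np

def crop(image, top, left, height, width):
    """Step 3: cut out the user-specified rectangle so that feature
    extraction (Step 4) runs only on the affected region."""
    return image[top:top + height, left:left + width]

img = np.arange(100).reshape(10, 10)           # toy 10x10 "medical image"
roi = crop(img, top=2, left=3, height=4, width=5)
print(roi.shape)  # -> (4, 5)
```

Because the later feature extraction sees only this region of interest, the amount of data processed, and hence the running time, drops with the crop size.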

The framework model of CBMIR using Gaussian Filter, Local

Binary Pattern and Discrete Wavelet Transform for medical images is shown

in Figure 3.18.


[Block diagram: the input image read from the image database and the query image are each cropped; texture (LBP), shape (Gaussian Filter) and intensity (DWT) features are extracted and built into a single feature vector stored in the feature database; similarity comparison, indexing and retrieval produce the output ranked images]

Figure 3.18 The framework model of CBMIR using Gaussian Filter,

Local Binary Pattern and Discrete Wavelet Transform for

medical images

Obtaining the Input Image and Cropping

The Query image is obtained from the user. An interface is

provided to the user that allows selecting the query image from a folder and

linking it to the application. The part of query image that can be used for

feature extraction is specified by the user. This part is then cropped and given

as input to the next module (feature extraction module). Figure 3.19 (a, b, c)

illustrates this process.


(a) Selection of query image from the database (b) Image with Pixel Information

(c) Cropped Image

Figure 3.19 Process of cropped image

Texture Feature Extraction

Texture is the manner in which the constituent parts are united; it is the structure or repeated pattern on an image. Texture in digital images can

be determined if the neighboring pixels satisfy a specified criterion of

similarity. Local Binary Pattern (LBP) is used in texture extraction of the

image. The LBP operator works with eight neighbors of the pixel using the


center pixel value as threshold and the LBP code for a neighborhood is

produced by multiplying the threshold values with weights given to the

corresponding pixels and summing up the result (Devrim Unay and Ahmet Ekin 2008).

Texture Feature Extraction Using Local Binary Pattern (LBP)

In this work, Local Binary Pattern algorithm is used for texture

extraction. The LBP operator describes the texture of the image by thresholding each neighborhood with the gray value of its center pixel and representing the result in binary code format. This pixel-to-pixel comparison over the image produces the texture description, and the result is summarised as a texture histogram.

Local Binary Pattern Algorithm for Texture Extraction

STEP 1 : For each pixel in the cell, compare it with each of its 8 neighboring pixels.

STEP 2 : Follow the pixels along a circle, i.e. clockwise or anti-clockwise.

STEP 3 : Where the neighbor's value is greater than or equal to the center pixel's value write 1, otherwise write 0. This gives an 8-bit binary number.

STEP 4 : Compute the histogram, over the cell, of the frequency of each number occurring.

STEP 5 : Optionally normalize the histogram.

STEP 6 : Concatenate the normalized histograms of all cells.

STEP 7 : This gives the feature vector for the window, which can be used for classification.
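The steps above can be sketched as follows for a single cell; this NumPy version is illustrative only (the MATLAB implementation and its cell layout are not shown in the text), with the eight neighbours taken in clockwise order:

```python
import numpy as np

def lbp_codes(image):
    """Basic 3x3 LBP (Steps 1-3): threshold the 8 neighbours against the
    centre pixel, weight the bits by powers of two and sum."""
    h, w = image.shape
    center = image[1:h - 1, 1:w - 1]
    # neighbour offsets in clockwise order, each carrying one bit weight
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1)]
    codes = np.zeros((h - 2, w - 2), dtype=np.uint8)
    for bit, (dy, dx) in enumerate(offsets):
        neighbour = image[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
        codes += (neighbour >= center).astype(np.uint8) * np.uint8(2 ** bit)
    return codes

def lbp_histogram(image, bins=256):
    """Steps 4-5: normalised histogram of the LBP codes of one cell."""
    hist, _ = np.histogram(lbp_codes(image), bins=bins, range=(0, bins))
    return hist / hist.sum()

img = np.random.default_rng(1).integers(0, 256, (16, 16)).astype(np.uint8)
h = lbp_histogram(img)
print(h.shape, round(float(h.sum()), 6))  # -> (256,) 1.0
```

Step 6 would concatenate such histograms over all cells of the window to form the final texture feature vector.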


Figure 3.20 shows a texture feature extracted image.

Figure 3.20 LBP based Texture Feature Extracted Image

Shape Feature Extraction

The shape of a set of points is all the geometrical information that is invariant to size changes. The shape does not depend on the size of the object or on changes in orientation/direction. However, a mirror image could be called a different shape, and shapes may change if the object is scaled non-uniformly.

Shape Feature Extraction Using Gaussian Filter

In this work, Gaussian filter based shape feature extraction is

implemented. The Gaussian filter is an effective way of removing noise; since the weights give higher significance to pixels near the centre, it reduces edge blurring. In the 2-D Gaussian function below, x is the distance from the origin along the horizontal axis, y is the distance from the origin along the vertical axis, and σ is the standard deviation of the Gaussian distribution.

G(x, y) = (1/(2πσ^2)) exp(−(x^2 + y^2)/(2σ^2))  (3.16)


The generated mask is convolved with the original image:

a = h * mask  (3.17)

where h is the original image. Figure 3.21 shows a shape feature

extracted image.

Figure 3.21 Gaussian Filter based Shape Feature Extracted Image
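The mask generation of Equation (3.16) can be sketched as follows; this NumPy version is illustrative, and normalising the weights to sum to 1 is an assumed convention (common for smoothing masks, though not stated in the text):

```python
import numpy as np

def gaussian_mask(size, sigma):
    """Evaluate Eq. (3.16) on a (size x size) grid centred at the origin and
    normalise so the weights sum to 1."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    g = np.exp(-(x**2 + y**2) / (2.0 * sigma**2)) / (2.0 * np.pi * sigma**2)
    return g / g.sum()

mask = gaussian_mask(5, sigma=1.0)
# The largest weight sits at the centre, which is why edges blur less
# than with a uniform averaging mask.
print(round(float(mask.sum()), 6), bool(mask[2, 2] == mask.max()))  # -> 1.0 True
```

The filtered image of Equation (3.17) is then the 2-D convolution of this mask with the original image h.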

Intensity Feature Extraction

The intensity is the amount of light the pixel reproduces (how

bright it is). Gray scale images, also known as black and white images, are composed exclusively of shades of gray, varying from black at the weakest intensity to white at the strongest. The binary representation assumes that 0 is black and the maximum value (255 at 8 bpp, 65,535 at 16 bpp, etc.) is white.

Intensity Feature Extraction Using Discrete Wavelet Transform

In this work, the Discrete Wavelet Transform is used to extract the

intensity feature in spatial domain. A discrete wavelet transform (DWT) is

any wavelet transform for which the wavelets are discretely sampled. A key


advantage is that it captures both frequency and location information (location

in time) (Hasan Demirel and Gholamreza Anbarjafari 2011).

Discrete Wavelet Transform Algorithm

STEP 1 : Separate the image pixel positions into odd and even columns.

STEP 2 : Add the odd and even column values to construct a low pass filter.

STEP 3 : Subtract the odd and even column values to construct a high pass filter.

STEP 4 : Separate the low pass filter into odd and even rows.

STEP 5 : Similarly the high pass filter is separated into odd and even rows.

STEP 6 : Get the four results such as LL, LH, HL, and HH.

STEP 7 : Use the obtained results for further feature extraction.

Figure 3.22 shows intensity feature extracted image.

Figure 3.22 DWT based Intensity feature extracted image
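The seven steps above amount to one level of a Haar-style decomposition; a minimal sketch, assuming the averaged (rather than raw) sums and differences, with all names illustrative:

```python
import numpy as np

def dwt_level(image):
    """One Haar-style DWT level following the steps above: column averages
    and differences (Steps 1-3), then the same split along rows (Steps 4-5),
    giving the four subbands LL, LH, HL, HH (Step 6)."""
    even, odd = image[:, ::2], image[:, 1::2]
    low = (even + odd) / 2.0           # Step 2: low-pass from column sums
    high = (even - odd) / 2.0          # Step 3: high-pass from column differences
    def split_rows(band):              # Steps 4-5: repeat along rows
        return (band[::2] + band[1::2]) / 2.0, (band[::2] - band[1::2]) / 2.0
    (ll, lh), (hl, hh) = split_rows(low), split_rows(high)
    return ll, lh, hl, hh

img = np.ones((8, 8))                  # a constant image has no detail
ll, lh, hl, hh = dwt_level(img)
print(ll.shape, float(ll[0, 0]), float(hh.max()))  # -> (4, 4) 1.0 0.0
```

For a constant image all detail subbands (LH, HL, HH) vanish and LL reproduces the intensity, which illustrates why the LL band is the natural input for further intensity feature extraction (Step 7).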


Combined feature vector construction

After extracting all the three features such as shape, texture, and

intensity, these features are integrated in order to build a single feature vector

using a fusion method. The resulting feature vector is used for the similarity comparison, which provides a better image retrieval process.

Result Assessment

The performance is tested using reference database and the results

are analyzed and tabulated as shown in Table 3.5.

Table 3.5 CBMIR using Gaussian Filter, Local Binary Pattern and

Discrete Wavelet Transform for medical images system

Performance Evaluation

Query Image   Precision (%)   Recall (%)   Precision/Recall   AVG = (P+R)/2   Time Complexity (s)
Image1        90.93           85.57        1.06               88.25           405
Image2        91.58           85.00        1.08               88.29           464
Image3        92.00           90.10        1.02               91.05           397
Image4        93.73           86.66        1.08               90.20           490
Image5        94.65           87.00        1.09               90.83           465

Comment on Result

This tri feature extraction system is able to achieve precision

(accuracy) in the range of 90-95% and recall (sensitivity) in the range of

85-90% for the same set of queries. The time complexity is in the range of

400-490 seconds. Compared to the dual feature based CBMIR system, this


system seems to give about 5% better precision and recall values but time

complexity is also increased by 40%.

Figure 3.23 shows one sample snapshot of the images retrieved in

the work.

Figure 3.23 Sample snapshot of retrieved images of CBMIR using

Gaussian Filter, Local Binary Pattern and Discrete Wavelet

Transform for medical images system

3.4 ANALYSIS AND COMPARISON OF SINGLE, DUAL AND

TRI FEATURE EXTRACTION METHODS.

Table 3.6 shows the overall result assessment comparison of single,

dual and tri feature extraction methods implemented for the same reference

database under similar experimental setup.


Table 3.6 Overall result assessment comparison

Query    METHOD-I: SINGLE FEATURE        METHOD-II: DUAL FEATURE                  METHOD-III: TRI FEATURE
Image    Shape (CED)     Texture (GLCM)  Shape & Texture   Shape & Texture        Shape, Texture & Intensity
                                         (EHD + EBT)       (GFD + GF)             (Gaussian Filter + LBP + DWT)
         P     R     T.C  P     R     T.C  P     R     T.C   P     R     T.C       P     R     T.C
Image1   50.00 37.50 182  53.33 51.42 224  87.50 84.00 355   85.00 86.84 295       90.93 85.57 405
Image2   53.33 34.00 179  55.71 45.00 187  89.00 83.33 303   90.66 88.00 330       91.58 85.00 464
Image3   56.95 66.66 233  56.00 60.00 180  93.33 88.57 335   92.85 86.66 345       92.00 90.10 397
Image4   58.33 36.84 183  57.00 56.66 210  93.44 89.47 342   93.96 90.00 360       93.73 86.66 490
Image5   68.26 51.42 248  65.71 48.94 182  91.73 90.00 300   90.29 89.00 298       94.65 87.00 465
Average  57.38 45.28 205  57.55 52.40 197  91.00 87.07 327   90.55 88.10 326       92.57 86.86 444

* P = Precision; R = Recall; T.C = Time Complexity in seconds.


It is inferred from the Table 3.6, that the tri feature extraction

method shows the best result in terms of both precision (accuracy) and recall

(sensitivity) parameters when compared to other methods such as single

feature extraction method and dual feature extraction method. One constraint

noted is the increased time complexity.

Conclusion

This chapter presents implementation and result analysis of the best

methods selected out of many methods that have been discussed in the

literature review. To compare the throughput of the selected schemes, similar

experimental setup has been ensured by implementing all the schemes on the

same machine with identical test conditions. Precision and recall were

calculated using the same reference database of 560 medical images, which includes real-time medical images collected from a local hospital and medical images collected from the CasImage and IRMA database repositories.

Considering the similarity in execution and presentation concepts used, they

are grouped into three categories viz. single, dual and tri feature extraction.

They are compared in order to assess the exact retrieval performance of these

works and derive the best out of them for further investigation. It is concluded

that the tri feature extraction method shows the best result in terms of both

precision and recall parameters, when compared to other methods such as

single feature extraction method and dual feature extraction method.

