
A Study on Discrete Wavelet Transform based Texture Feature Extraction for

Image Mining

Dr. T. Karthikeyan 1, P. Manikandaprabhu 2
1 Associate Professor, 2 Research Scholar,
Department of Computer Science, PSG College of Arts & Science, Coimbatore.
2 [email protected]

Abstract

This paper proposes discrete wavelet based texture features for image mining. The proposed methodology uses the Discrete Wavelet Transform (DWT) to reduce the size of the test images. The Grey Level Co-occurrence Matrix (GLCM) is then computed on the low-frequency (approximation) component of the level-2 decomposition of every test image to extract its texture features. Related images are retrieved using different distance-measure classifiers. The experimental results show that the proposed method achieves comparable retrieval performance using the correlation property of the GLCM texture feature.

Keywords - Texture, Discrete Wavelet Transform, Gray Level Co-occurrence Matrix, Distance Measures.

1. Introduction

With the widespread use of digital and multimedia technology, storing, finding and retrieving images from huge databases has become difficult. To enable efficient searching and retrieval of pictures from digital collections, new software and techniques have emerged. The need to find a desired image in a large collection is shared by many professional groups, including media persons, design engineers, art historians and scholars. Content Based Image Retrieval (CBIR) is contrasted with text- or annotation-based approaches for retrieving similar images from a database [24, 25]. CBIR does not need manual annotation for each image and is not limited by the availability of lexicons; instead, this framework uses the low-level features that are inherent in the images: color, shape and texture. In CBIR, some form of similarity between images is computed using image features extracted from them. Thus, users can search for images similar to a query image quickly and effectively.

Fig. 1 shows the architecture of a typical CBIR system. For each image in the image database, image features are extracted and the obtained feature vector is stored in the feature database. Once a query image comes in, its feature vector is compared with those in the feature database one by one, and the images with the smallest feature distances are retrieved.
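The retrieval loop just described (compare the query's feature vector against every stored vector, return the images with the smallest feature distance) can be sketched as follows; the toy feature values and the choice of Euclidean distance here are illustrative only:

```python
import numpy as np

def retrieve(query_feat, feature_db, k=3):
    """Return indices of the k database images closest to the query.

    feature_db: (n_images, n_features) array of stored feature vectors.
    Images are ranked by increasing Euclidean distance to the query.
    """
    dists = np.sqrt(((feature_db - query_feat) ** 2).sum(axis=1))
    return np.argsort(dists)[:k]

# Toy feature database: 4 images, 3-dimensional feature vectors.
db = np.array([[0.0, 0.0, 0.0],
               [1.0, 1.0, 1.0],
               [0.1, 0.0, 0.1],
               [5.0, 5.0, 5.0]])
query = np.array([0.0, 0.1, 0.0])
print(retrieve(query, db, k=2))  # indices of the two nearest images
```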

Fig.1: Image Retrieval Process

CBIR may be divided into the following stages:

• Preprocessing: The image is first processed in order to extract the features that describe its contents. The processing involves normalization, filtering, segmentation and object identification. The output of this stage is a set of significant regions and objects.

• Feature extraction: Features such as color, shape and texture are used to describe the content of the image. Image features can be classified into primitive (low-level) and domain-specific features.

2. Feature Extraction

For the given image database [1], features are extracted first from the individual images. These may be visual features like color, shape, texture, spatial features, or


P. Manikandaprabhu et al., Int. J. Computer Technology & Applications, Vol 5(5), 1805-1811, IJCTA | Sept-Oct 2014, ISSN: 2229-6093


some compressed domain features. The extracted features delineate the feature vectors, which are then stored to form the image feature database. For a given query image, we similarly extract its features and form a feature vector. This feature vector is matched against the vectors already stored in the image feature database.

Sometimes dimensionality reduction techniques are employed to reduce the computation. The distance between the feature vector of the query image and those of the images in the database is then calculated. The distance of a query image from itself is zero if it is in the database. The distances are then sorted in increasing order and retrieval is performed with the help of an indexing scheme.

A feature is defined as a function of one or more measurements, each of which specifies some quantifiable property of an object, and it quantifies some significant characteristics of the object. We classify the various features currently employed as follows:

• General features: application-independent features such as shape, color and texture. Depending on the abstraction level, they can be further divided into:

- Pixel-level features: features computed at each pixel, e.g. location, colour.

- Local features: features computed over subdivisions of the image resulting from image segmentation or edge detection.

- Global features: features computed over the whole image or a regular sub-area of an image.

• Domain-specific features: application-dependent features such as human faces, fingerprints, and conceptual features. These features are typically a synthesis of low-level features for a specific domain.

On the other hand, all features can be broadly classified into low-level and high-level features. Low-level features can be extracted directly from the original images, whereas high-level feature extraction must be based on low-level features [2].

The vital problems of a content-based image retrieval system are: i. image database selection, ii. similarity measurement, iii. performance evaluation of the retrieval process, and iv. low-level image feature extraction.

3. Wavelet Transform

The wavelet transform has good localization properties in the time and frequency domains and fits naturally with the idea of transform compression. The discrete wavelet transform (DWT) refers to wavelet transforms in which the wavelets are discretely sampled. It is a transform that localizes a function both in space and in scale and has some desirable properties compared with the Fourier transform. The transform is based on a wavelet matrix, which can be computed more quickly than the analogous Fourier matrix. Most notably, the DWT is used for signal coding, where the properties of the transform are exploited to represent a discrete signal in a more redundant form, often as a preconditioning for data compression. The discrete wavelet transform has a vast number of applications in science, computer science, mathematics and engineering.

Wavelets are functions that satisfy certain mathematical requirements and are used in representing data or other functions. The basic idea of the wavelet transform is to represent any arbitrary signal X as a superposition of a set of such wavelets or basis functions. These basis functions are obtained from a single prototype wavelet, called the mother wavelet, by dilation (scaling) and translation (shifting). The discrete wavelet transform for two-dimensional signals can be defined as follows:

$$w(a_1, a_2, b_1, b_2) = \frac{1}{\sqrt{a}}\,\psi\!\left(\frac{X - b_1}{a_1}, \frac{Y - b_2}{a_2}\right), \qquad a = a_1 a_2 \tag{1}$$

In equation (1), w(a1, a2, b1, b2) are called the wavelet coefficients of the signal X; a1, a2 are the dilation parameters, b1, b2 the translation parameters, and ψ, the transforming function, is known as the mother wavelet. Low frequencies are examined with low temporal resolution, while high frequencies are examined with higher temporal resolution. A wavelet transform combines both low-pass and high-pass filtering in the spectral decomposition of signals. In the case of the discrete wavelet transform, the image is decomposed into a discrete set of wavelet coefficients using an orthogonal set of basis functions. These sets are divided into four parts: the approximation, horizontal details, vertical details and diagonal details. The discrete wavelet transform [3] provides a substantial improvement in picture quality at higher compression ratios.
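The decomposition into one approximation and three detail subbands, iterated on the approximation for a second level, can be illustrated with a minimal Haar DWT in NumPy. The paper does not state which wavelet family it uses, so the unnormalized averaging/differencing Haar filter below is an assumption for illustration only:

```python
import numpy as np

def haar_dwt2(img):
    """One level of a 2-D Haar DWT: returns (LL, LH, HL, HH) subbands.

    LL = approximation, LH/HL/HH = horizontal/vertical/diagonal details,
    each half the size of the input in both dimensions.
    """
    img = img.astype(float)
    # Columns: low-pass = pairwise average, high-pass = pairwise difference.
    lo = (img[:, 0::2] + img[:, 1::2]) / 2.0
    hi = (img[:, 0::2] - img[:, 1::2]) / 2.0
    # Rows: same filter pair applied to each intermediate band.
    LL = (lo[0::2, :] + lo[1::2, :]) / 2.0
    LH = (lo[0::2, :] - lo[1::2, :]) / 2.0
    HL = (hi[0::2, :] + hi[1::2, :]) / 2.0
    HH = (hi[0::2, :] - hi[1::2, :]) / 2.0
    return LL, LH, HL, HH

img = np.arange(64, dtype=float).reshape(8, 8)
LL1, _, _, _ = haar_dwt2(img)   # level 1: 4x4 approximation
LL2, _, _, _ = haar_dwt2(LL1)   # level 2: 2x2 approximation
print(LL2.shape)  # (2, 2)
```

With this averaging convention, each level-2 approximation coefficient is the mean of a 4x4 block of the original image, which is the size-reduced image on which the GLCM is later computed.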

Embedded Zerotree Wavelet (EZW) coding is a simple, effective, progressive image coding algorithm and can be used for both lossless and lossy compression systems. This algorithm works well with the proposed coding scheme because the zerotree structure is effective in describing the significance map of the transform coefficients: it exploits the inherent self-similarity of the subband image over the range of scales, and the positioning of the majority of zero-valued coefficients in the higher-frequency subbands. The EZW algorithm applies Successive Approximation Quantization in order to provide a multi-precision representation of the transformed coefficients and to facilitate embedded coding. The algorithm codes


the transformed coefficients in decreasing order in several scans. Each scan of the algorithm consists of two passes: significance map encoding (the dominant pass) and the refinement (subordinate) pass.

The dominant pass scans the subband structure in zigzag order, right to left and then top to bottom within each scale, before proceeding to the next higher scale of the subband structure, as presented in Fig. 2. For each pass, a threshold T is chosen against which all the coefficients are measured and encoded as one of the following four symbols:

Significant positive – the coefficient value is greater than the threshold T.

Significant negative – the magnitude of the (negative) coefficient is greater than the threshold T.

Zerotree root – the coefficient and all of its descendants are insignificant with respect to the threshold T.

Isolated zero – the coefficient is insignificant but some of its descendants are significant.

$$T_0 = 2^{\lfloor \log_2 C_{\max} \rfloor} \tag{2}$$

where, in equation (2), $C_{\max}$ is the maximum coefficient magnitude in the subband structure. The successive approximation quantization uses a monotonically decreasing set of thresholds and encodes the transformed coefficients as one of the above four labels with respect to any given threshold. For each successive significance pass, the threshold is halved, $T_K = T_{K-1}/2$, and only those coefficients not yet found to be significant in a previous pass are scanned for encoding; the process is repeated until the threshold reaches zero, which yields the complete encoded bit stream.
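The four-symbol classification of the dominant pass can be sketched as follows; the exact significance test (`>=` versus `>`) and the toy values are assumptions for illustration:

```python
def classify(coeff, descendants, T):
    """Dominant-pass symbol for one coefficient (a sketch of the EZW rules).

    POS/NEG : magnitude significant against T, sign chooses the symbol
    ZTR     : coefficient and all descendants insignificant (zerotree root)
    IZ      : coefficient insignificant, but some descendant is significant
    """
    if abs(coeff) >= T:
        return 'POS' if coeff > 0 else 'NEG'
    if all(abs(d) < T for d in descendants):
        return 'ZTR'
    return 'IZ'

T0 = 32  # e.g. 2**floor(log2(Cmax)) for Cmax = 57, per equation (2)
print(classify(40, [3, 5], T0))   # POS
print(classify(-45, [], T0))      # NEG
print(classify(6, [2, 1], T0))    # ZTR
print(classify(6, [2, 50], T0))   # IZ
T1 = T0 // 2  # successive approximation: halve the threshold each pass
```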

Fig.2: EZW subband structure scanning order

In the embedded zerotree wavelet coding strategy developed by Shapiro, a wavelet/subband decomposition of the image is performed. The wavelet coefficients/pixels are then grouped into spatial orientation trees. The magnitude of each wavelet coefficient/pixel in a tree, starting with the root of the tree, is then compared to a particular threshold T. If the magnitudes of all the wavelet coefficients/pixels in the tree are smaller than T, the entire tree structure (that is, the root and all its descendant nodes) is coded by one symbol, the zerotree symbol ZTR. If, however, significant wavelet coefficients/pixels exist, then the tree root is coded as significant or insignificant according to whether its magnitude is larger or smaller than T, respectively. The descendant nodes are then each examined in turn to determine whether each is the root of a possible sub-zerotree structure. This process is carried out so that all the nodes in all the trees are examined for possible sub-zerotree structures.

The significant wavelet coefficients/pixels in a tree are coded by one of two symbols, POS or NEG, depending on whether their actual values are positive or negative, respectively. The process of classifying the pixels as ZTR, IZ, POS, or NEG is referred to as the dominant pass in [4]. This is then followed by the subordinate pass, in which the significant wavelet coefficients/pixels in the image are refined by determining whether their magnitudes lie within the interval (T, 3T/2) or (3T/2, 2T). Those wavelet coefficients/pixels whose magnitudes lie in the interval (T, 3T/2) are represented by a 0 (LOW), whereas those with magnitudes lying in the interval (3T/2, 2T) are represented by a 1 (HIGH). After the completion of both the dominant and subordinate passes, the threshold value T is reduced by a factor of 2 and the entire process is repeated. This coding strategy, consisting of the dominant and subordinate passes followed by the reduction of the threshold value, is iterated until a target bit rate is achieved.

The root node of each tree is located at the highest

level of the decomposition pyramid, and all its

descendants are located in different spatial frequency

bands at the same pyramid level, or clustered in groups

of 2 × 2 at lower levels of the decomposition pyramid.

An EZW decoder reconstructs the image by

progressively updating the values of each wavelet

coefficient/pixel in a tree as it receives the data. The

decoder's decisions are always synchronized to those of

the encoder.

4. Texture Features

Among the different visual characteristics, such as color and shape, used for the analysis of various types of images, texture is reported to be an outstanding and very important low-level feature [5, 6]. Even though no standard definition exists for texture, Sklansky [7] defined texture as a collection of local properties within an image region with a constant, slowly varying, or approximately periodic pattern. Texture gives information on the structural arrangement of surfaces and


objects in the image. Texture is not defined for a single pixel; it depends on the distribution of intensity over the image. Texture possesses regularity and scalability properties; it is represented by principal directions, contrast and sharpness. It is measured using distinct properties such as periodicity, coarseness, directionality and pattern complexity for efficient image retrieval, particularly with respect to orientation and scale [8].

Tuceryan and Jain [9] divided the different methods for texture feature extraction into four main categories, namely structural, statistical, model-based and transform-domain. Basically, texture representation methods can be classified into two categories: structural and statistical. Statistical methods, including Fourier power spectra, co-occurrence matrices, shift-invariant principal component analysis (SPCA), Tamura features, Wold decomposition [10], Markov random fields [11], fractal models [12] and multi-resolution filtering techniques such as Gabor [13] and wavelet transforms [14], characterize texture by the statistical distribution of the image intensity. D. A. Clausi et al. [15] designed a fused texture feature from Gabor filters and co-occurrence probabilities for texture segmentation and demonstrated that it performs well for noisy images despite the high-dimensional feature vector. The DWT based color co-occurrence feature for texture classification is explained in [16].

Haralick et al. [17] proposed grey level co-occurrence matrices (GLCM) as a method for representing the texture features of images. They also suggested 14 descriptors, including contrast, correlation, entropy and others; each descriptor captures one texture property. Therefore, many works, for example [18], are devoted to selecting the statistical descriptors derived from the co-occurrence matrices that describe texture best.

In [19], the color space is first transformed from the RGB model to the HSI model and a color histogram is extracted to form the color feature vector; then the texture feature is extracted using the gray-level co-occurrence matrix. The texture of an image is a representation of the spatial relationship of gray levels in the image. The co-occurrence matrix is built up based on the orientation and distance between image pixels. The co-occurrence matrix C(i, j) counts the co-occurrences of pixels with gray values i and j at a given distance d. The distance d is defined in polar coordinates (d, θ), with discrete length and orientation. In practice, θ takes the values 0°, 45°, 90°, 135°, 180°, 225°, 270° and 315°. The co-occurrence matrix C(i, j) can now be defined as follows:

$$C(i,j) = \operatorname{card}\left\{ \big((x_1,y_1),(x_2,y_2)\big) \in XY \times XY \;\middle|\; f(x_1,y_1)=i,\ f(x_2,y_2)=j,\ (x_2,y_2)=(x_1,y_1)+(d\cos\theta,\ d\sin\theta) \right\}, \quad 0 \le i, j < G \tag{3}$$

where card{·} denotes the number of elements in the set. Let G be the number of gray values in the image; then the dimension of the co-occurrence matrix C(i, j) is G × G. So the computational complexity of the co-occurrence matrix depends quadratically on the number of gray levels used for quantization.
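A direct, unoptimized implementation of the counting rule in equation (3) might look like the sketch below; the gray-level count and the toy image are illustrative:

```python
import numpy as np

def glcm(img, d=1, theta=0.0, levels=4):
    """Co-occurrence matrix C(i, j) for the offset (d*cos(theta), d*sin(theta)).

    img: 2-D array of integer gray levels in [0, levels).
    Counts ordered pixel pairs (p1, p2) with p2 = p1 + offset.
    """
    dx = int(round(d * np.cos(theta)))  # column offset
    dy = int(round(d * np.sin(theta)))  # row offset
    C = np.zeros((levels, levels), dtype=int)
    rows, cols = img.shape
    for y in range(rows):
        for x in range(cols):
            y2, x2 = y + dy, x + dx
            if 0 <= y2 < rows and 0 <= x2 < cols:
                C[img[y, x], img[y2, x2]] += 1
    return C

img = np.array([[0, 0, 1],
                [1, 2, 2],
                [3, 3, 3]])
print(glcm(img, d=1, theta=0.0))  # horizontal neighbor pairs
```

For a 3×3 image and a horizontal offset of 1, each row contributes two ordered pairs, so the matrix entries sum to 6.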

A. Wavelet-Based Texture Representation

A specific feature of wavelet-based texture representation is the representation and analysis of signals at different scales, i.e., under different resolutions. The image is described by a hierarchical structure, each level of which represents the original signal with a certain degree of detail.

Tamura et al. [20] presented an approach to describing texture on the basis of human visual perception. They suggested coarseness, contrast, directionality, line-likeness, regularity and roughness as the six texture properties that were recognized as visually significant in the course of psychological experiments. Howarth and Ruger [18, 21] noticed that the parameters describing the first three properties, coarseness, contrast and directionality, are rather effective for classifying and searching images by texture. The set of all such points for one image is referred to as the Tamura image.

Texture analysis by means of Gabor filters is a special case of the wavelet approach and is the most frequently used method in image retrieval by texture. In most CBIR systems based on Gabor wavelets [22, 23], the mean and standard deviation of the distribution of the wavelet transform coefficients are used to construct the feature vector.
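A minimal sketch of this mean/standard-deviation feature construction, applied here to generic subband arrays rather than actual Gabor filter responses (the use of absolute coefficient values is an assumption):

```python
import numpy as np

def texture_feature_vector(subbands):
    """Build a feature vector from wavelet subbands: for each subband,
    take the mean and standard deviation of the absolute coefficients."""
    feats = []
    for band in subbands:
        a = np.abs(band)
        feats.extend([a.mean(), a.std()])
    return np.array(feats)

# Two toy 2x2 "subbands" in place of real wavelet/Gabor responses.
bands = [np.array([[1.0, -1.0], [2.0, -2.0]]),
         np.array([[0.0, 4.0], [0.0, 4.0]])]
print(texture_feature_vector(bands))  # [1.5 0.5 2.  2. ]
```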

B. Correlation property

The correlation property shows the linear dependency of gray level values in the co-occurrence matrix. It expresses how a reference pixel is related to its neighbour: 0 means uncorrelated, 1 means perfectly correlated.


$$\mathrm{Correlation} = \frac{\sum_i \sum_j (ij)\,C(i,j) - \mu_i \mu_j}{\sigma_i \sigma_j} \tag{4}$$

where

$$\mu_i = \sum_i \sum_j i\,C(i,j), \qquad \mu_j = \sum_i \sum_j j\,C(i,j),$$

$$\sigma_i^2 = \sum_i \sum_j (i-\mu_i)^2\,C(i,j), \qquad \sigma_j^2 = \sum_i \sum_j (j-\mu_j)^2\,C(i,j).$$
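The correlation property in equation (4) can be computed from a co-occurrence matrix as follows; normalizing C to a joint probability distribution first is an assumption that matches the usual Haralick formulation:

```python
import numpy as np

def glcm_correlation(C):
    """Correlation property of a co-occurrence matrix, following eq. (4).

    C is first normalized to a joint probability distribution p(i, j);
    the result approaches 1 for perfect linear gray-level dependency.
    """
    p = C / C.sum()
    i = np.arange(p.shape[0])[:, None]   # row-index grid
    j = np.arange(p.shape[1])[None, :]   # column-index grid
    mu_i = (i * p).sum()
    mu_j = (j * p).sum()
    sigma_i = np.sqrt(((i - mu_i) ** 2 * p).sum())
    sigma_j = np.sqrt(((j - mu_j) ** 2 * p).sum())
    return ((i * j * p).sum() - mu_i * mu_j) / (sigma_i * sigma_j)

# A diagonal matrix (i always equals j) is perfectly correlated.
C = np.diag([4, 3, 2, 1])
print(round(glcm_correlation(C), 6))  # 1.0
```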

5. Distance Measures

Distance metrics are computed between the query image and each image in the database. This procedure is repeated until all the images in the database have been compared with the query image. The distance between two images is used to measure the similarity between the query image and the images in the database. Distance measures such as the city block and standard Euclidean distance are used to compare feature vectors. In this paper, the Euclidean distance, standard Euclidean distance and city block distance are used to compare the similarity between images.

A. City-Block distance (L1)

It computes the distance one would travel along axis-aligned paths to get from one data point to the other: the sum of the absolute differences of their corresponding components,

$$d = \sum_{i=1}^{n} |x_i - y_i| \tag{5}$$

B. Euclidean distance (L2)

The Euclidean distance is most often used to compare profiles across variables and is the most commonly used metric distance measure. It is the square root of the sum of the squared differences between corresponding elements of the two vectors,

$$d = \sqrt{\sum_{i=1}^{n} (x_i - y_i)^2} \tag{6}$$

C. Standard Euclidean Distance (Std L2)

The standardized Euclidean distance is the Euclidean distance computed on standardized data, where

standardized value = (original value - mean) / standard deviation.

With s_i denoting the standard deviation of the i-th component over the database,

$$d = \sqrt{\sum_{i=1}^{n} \left(\frac{x_i - y_i}{s_i}\right)^2} \tag{7}$$
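The three distance measures can be sketched directly from equations (5)-(7); the vectors and per-component standard deviations below are toy values:

```python
import numpy as np

def city_block(x, y):
    """L1 distance, eq. (5): sum of absolute component differences."""
    return np.abs(x - y).sum()

def euclidean(x, y):
    """L2 distance, eq. (6): root of summed squared differences."""
    return np.sqrt(((x - y) ** 2).sum())

def std_euclidean(x, y, s):
    """Std L2 distance, eq. (7): each component scaled by its std s[i]."""
    return np.sqrt((((x - y) / s) ** 2).sum())

x = np.array([1.0, 2.0, 3.0])
y = np.array([4.0, 6.0, 3.0])
s = np.array([1.0, 2.0, 1.0])  # per-component standard deviations
print(city_block(x, y))        # 7.0
print(euclidean(x, y))         # 5.0
print(std_euclidean(x, y, s))  # sqrt(13) ~ 3.6056
```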

6. Performance Measures

Assessment of retrieval performance is a critical problem in content-based image retrieval (CBIR). Many different methods for measuring the performance of a system have been created and used by researchers. The most common evaluation measures used in CBIR are precision and recall, defined as

Precision = (number of relevant images retrieved) / (total number of images retrieved)

Recall = (number of relevant images retrieved) / (total number of relevant images in the database)
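Precision as defined above can be computed in a few lines; the class labels and counts are illustrative only:

```python
def precision(retrieved_labels, query_label):
    """Precision (%): fraction of retrieved images relevant to the query class."""
    relevant = sum(1 for lab in retrieved_labels if lab == query_label)
    return 100.0 * relevant / len(retrieved_labels)

# 50 images retrieved for a "bus" query, 47 of them actually buses -> 94%
retrieved = ['bus'] * 47 + ['car'] * 3
print(precision(retrieved, 'bus'))  # 94.0
```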

7. Experimental Results

Corel image database of 1000 images have been

used. Each image is of size 256x384. There are 10

classes in this database like Africans, Buildings, Buses,

Dinosaurs, Elephants, Flowers, Mountains and Peoples

in database. Each class contains 100 images. The

retrieval efficiency and effectiveness of the proposed

texture feature and Distance measures are experimented

with the popular image database Corel image database and the experimental results are presented in this

section. This experiment gives the comparison of

performance measures of CBIR for the metric City

Block distance (L1), Euclidean distance (L2) and

Standard Euclidean Distance (Std L2). Here we

compared the GLCM Correlation property of precision

of Class names Buses, Dinosaurs, Elephants,

Mountains and peoples.

A. Graph Results

The graph in Fig. 3 shows the performance analysis of retrieval accuracy for the various classes.

Fig. 3: Precision of Correlation in Each Class


The graph in Fig. 4 shows the average retrieval accuracy of the various distance measures.

Fig. 4: Average of Precision Efficiency

TABLE I
Detailed Precision of Correlation by Class Name

Class Name   L1     L2     Std L2
Buses        94     96     92
Dinosaur     96     96     94
Elephant     88     90     84
Mountain     86     86     80
People       70     76     70
Average      86.8   88.8   84

8. Conclusion

In this paper, discrete wavelet based texture features, combined with different distance measures, have been evaluated on Corel data sets. The efficiency and performance of the proposed system are measured using the average precision for three different distance measures. The performance comparison of the correlation property with the different distance classifiers shows that the Euclidean distance gives better performance than the city block and standard Euclidean distances.

REFERENCES [1] S. Patil and S. Talbar, “Content Based Image Retrieval Using Various Distance Metrics”, Data Engineering and Management, Lecture Notes in Computer Science, Berlin Heidelberg: Springer, pp 154-161, 2012, vol. 6411.

[2] E. Saber, A.M. Tekalp, ”Integration of color, edge and texture features for automatic region-based image annotation

and retrieval”, Journal of Electronic Imaging 7(3), pp. 684–700, 1998.

[3] R. Krishnamoorthy, K. Rajavijayalakshmi and R. Punidha, "Low Complexity Hybrid Lossy To Lossless Image Coder With Combined Orthogonal Polynomials Transform And Integer Wavelet Transform”, ICTACT Journal On Image And Video Processing, Vol. 2, No. 04, pp.410-416, May 2012.

[4] Julien Reichel, Gloria Menegaz, Marcus J. Nadenau, and Murat Kunt, “Integer Wavelet Transform for Embedded Lossy to Lossless Image Compression”, IEEE Trans. Image Processing, Vol. 10, No. 3, pp. 383-392, 2001.

[5] K. Jalaja, C. Bhagvati, B. L. Deekshatulu and Arun K. Pujari, “Texture Element Feature Characterizations for CBIR”, in IEEE Proc. IGARSS '05, Vol. 2, pp. 733 - 736, 2005.

[6] T. Sikora, “The MPEG-7 visual standard for content description – an overview”, IEEE Trans. Circuits Systems and Video Technology, Vol. 11, no. 6, pp.696 – 702, 2001.

[7] J. Sklansky, “Image segmentation and feature extraction”, IEEE Trans. Systems, Man and Cybernetic, Vol.8, no. 4, pp. 237-247, 1978.

[8] H. Tamura, S. Mori and T. Yamawaki, “Texture features corresponding to visual perception”, IEEE Trans. Systems, Man and Cybernetics, Vol. 6, no. 4, pp. 460 - 473, 1976.

[9] M. Tuceryan and A. K. Jain, “Texture analysis”, In the Handbook of Pattern Recognition and Computer Vision, 207-248, 1998.

[10] F. Liu and R. Picard, “Periodicity, directionality and randomness: Wold features for image modeling and retrieval”, IEEE Trans. Pattern Analysis and Machine Intelligence, Vol. 18, no. 7, pp. 722 - 733, 1996.

[11] G. Cross and A. Jain, “Markov random field texture

models”, IEEE Trans. Pattern Analysis and Machine Intelligence, Vol. 5, no.1, pp. 25 - 39, 1983.

[12] L. M. Kaplan et al, “Fast texture database retrieval using extended fractal features”, in Storage and Retrieval for Image and Video Databases VI (Sethi, I K and Jain, R C, eds), Proc SPIE 3312, 1998, pp. 162-173.

[13] T. Chang and C.C.J. Kuo, “Texture analysis and classification with tree structured wavelet transform”, IEEE

Trans. Image Processing, Vol. 2, no. 4, pp. 173 - 188, 1992.

[14] D.S. Zhang., A. Wong., M. Indrawan, and G. Lu, “Content-based image retrieval using gabor texture features”, In Proc. of IEEE PCM’00, pp 392–395, 2000.

[15] D. A. Clausi and H. Deng, “Design Based Texture Feature Fusion Using Gabor Filters and Co- Occurrence Probabilities”, IEEE Trans. Image Processing, Vol.14, No. 7, 2005.


[16] S. Arivazhagan, L. Ganesan and V. Angayakanni, “Color Texture Classification using Wavelet transform”, in Proc. of

ICCIMA’05, pp. 315-320, 2005.

[17] R.M. Haralick, K. Shanmugam, and I. Dinstein, “Textural Features for Image Classification”, IEEE Trans. Systems, Man Cybernetics, vol. 3, no. 6, pp. 610–621, 1973.

[18] P. Howarth and S.Ruger, “Evaluation of Texture Features for Content-based Image Retrieval”, in Proc. of CIVR'04, 2004, pp. 326–334.

[19] Jiayin Kang and Wenjuan Zhang, “A Framework for

Image Retrieval with Hybrid Features”, in Proc. of CCDC, 2012, pp. 1326 – 1330.

[20] Tamura, H., Mori, S., and Yamawaki, T., “Textural Features Corresponding to Visual Perception”, IEEE Trans. Systems, Man Cybernetics, vol. 8, pp. 460–472, 1978.

[21] Howarth, P. and Ruger, S., “Robust Texture Features for Still Image Retrieval”, IEEE Proc. Vision, Image Signal Processing, vol. 152, no. 6, pp. 868–874, 2005.

[22] N. Sebe, and M.S. Lew., “Wavelet Based Texture Classification”, in IEEE Proc. of Int. Conf. on Pattern Recognition’, vol. 3, pp. 959–962, 2000.

[23] B.S. Manjunath, et. al, "Color and texture descriptors", IEEE Trans. Circuits and Systems for Video Technology, Vol.11(6), pp. 703-715, 2001.

[24] Michael S. Lew, Nicu Sebe, Chabane Djeraba and Ramesh Jain, “Content-based Multimedia Information Retrieval: State of the Art and Challenges”, ACM Trans. Multimedia Computing, Communications and Applications, Vol. 2, No. 1, pp. 1–19, Feb. 2006.

[25] T. Karthikeyan, P. Manikandaprabhu, S. Nithya, “A Survey on Text and Content Based Image Retrieval System for Image Mining”, International Journal of Engineering Research & Technology, Vol. 3 Issue 3, pp. 509-512, Mar. 2014.
