
J Math Imaging Vis (2013) 47:70–78, DOI 10.1007/s10851-012-0408-1

A Simplified Gravitational Model for Texture Analysis

Jarbas Joaci de Mesquita Sá Jr. ·André Ricardo Backes · Paulo César Cortez

Published online: 4 December 2012. © Springer Science+Business Media New York 2012

Abstract Textures are among the most important features in the field of image analysis. This paper presents an innovative methodology to extract information from them, converting an image into a simplified dynamical system in a gravitational collapse process whose collapsing states are described using the lacunarity method. The paper compares the proposed approach to other classical methods using Brodatz textures and a second texture database as benchmarks. The best classification results using the standard parameters of the method were success rates (percentage of samples correctly classified) of 97.00 % and 54.10 % for the two databases, respectively. These results show that the presented approach is an efficient tool for texture analysis.

Keywords Texture analysis · Simplified gravitational system · Complexity · Lacunarity

1 Introduction

Texture analysis plays a very important role in computer vision. Textures can be understood as complex visual patterns

J.J. de Mesquita Sá Jr. · P.C. Cortez
Departamento de Engenharia de Teleinformática—DETI, Centro de Tecnologia—UFC, Campus do Pici, S/N, Bloco 725, Caixa Postal 6007, CEP 60.455-970 Fortaleza, Brazil

J.J. de Mesquita Sá Jr.
e-mail: [email protected]

P.C. Cortez
e-mail: [email protected]

A.R. Backes (corresponding author)
Faculdade de Computação, Universidade Federal de Uberlândia, Av. João Naves de Ávila, 2121, 38408-100 Uberlândia, MG, Brazil
e-mail: [email protected]

composed of entities, or sub-patterns, with characteristic brightness, colour, orientation and size [1]. Thus, textures supply very useful information for the automatic recognition and interpretation of an image by a computer [2]. Because textures provide a rich source of image information, they are an essential attribute in many application areas, such as object recognition, remote sensing, content-based image retrieval and so on.

Despite the lack of a formal definition of texture, over the years many methods of texture analysis have been developed, each of them obtaining information in a different manner. Most of these methods can be grouped into four main categories [3]: statistical, geometrical, model-based and signal processing methods. The statistical approach includes methods based on first-order statistics and co-occurrence matrices [1]. As an example of a signal processing method, we can mention Gabor filters [4]. Still in this category, there are many studies on texture analysis in the spectral domain, especially after the introduction of the wavelet transform (e.g., [5, 6]).

In recent years, other approaches have been developed to explore texture information. One of these aims to represent and characterize the relations among pixels using Complex Networks Theory [7]. Another important and state-of-the-art approach is the Tourist Walk [8], where each pixel is interpreted as a tourist wishing to visit N cities according to the following deterministic rule: go to the nearest (or farthest) city that has not been visited in the last μ time steps (the tourist memory). Recently, fractal analysis has also been proposed as a replacement for the histogram to study the distribution of grey levels along a texture pattern [9].

This work aims to explore images in a new manner so that valuable information can be extracted from them. For this purpose we use an innovative approach previously described in [10]. This approach transforms an image into a dynamical system in a gravitational collapse process. It enables images to evolve to different states, each of which offers new relationships among the pixels and, therefore, a new source of information to be extracted. To accomplish this, we employ the lacunarity method to quantify each state in order to obtain a feature vector.

The proposed method presents advantages with respect to those existing in the literature, whose emphasis is on the information obtained from nearby pixels. Nevertheless, the relations between distant pairs of pixels are a rich source of information. The gravitational system helps to explore these relations by creating new collapsing states in which distant pixels continually approach each other. In this way, methods with a local focus (for instance, the lacunarity method uses small windows to extract feature vectors) improve their capacity to extract informative signatures.

In addition to the contributions presented in [10], this paper presents the following improvements: a more detailed explanation of the simplified gravitational system; a comparison of the lacunarity method with and without the gravitational approach; a comparison of the proposed approach with and without the tangential speed; the establishment of the computational complexity of the method; and the use of a new image dataset to perform a second experiment. This dataset is challenging because its images were taken from different viewpoints and present large scale changes, perspective distortions, and non-rigid transformations.

The remainder of the paper is organized as follows. Section 2 shows the rules established to simulate a simplified gravitational collapse in an image. Section 3 describes the process of composing image signatures by applying the lacunarity method to states of the collapse process. In Sect. 4, we perform a set of experiments using two texture datasets: the first uses 40 classes of Brodatz textures (10 images per class) and the second uses 25 classes of textures with scale changes and perspective distortions, obtained from different viewpoints (40 images per class). Section 4.1 demonstrates the superior performance of the proposed approach when compared to the results of other important methods. Finally, we present some concluding remarks in Sect. 5.

2 Texture Analysis and Simplified Gravitational System

We must consider two forces when an object orbits another one. The first is the gravitational force, as stated by Isaac Newton in the Principia. This force is defined as follows: the force exerted by one object on another is directly proportional to the product of their masses and inversely proportional to the square of the distance between them [11].

Fig. 1 Example of gravitational force between two massive particles

Fig. 2 (a) A particle moving from p1 to p2 and its velocity changing from vi to vj; (b) Triangle that determines the direction of the change in velocity Δv

Figure 1 illustrates this force, and its formulation is given by the following equation:

$$f_a = G \cdot \frac{m_1 \cdot m_2}{\|r\|^2} \cdot \frac{r}{\|r\|}, \qquad (1)$$

where G is the gravitational constant, m1 and m2 are the masses of the two particles, r is the vector connecting the positions of the particles, and fa is the gravitational force between the two particles. ‖·‖ denotes the magnitude (norm) of a vector.

The second force is the centripetal force. This force is directly proportional to the square of the tangential speed and points to the centre of the circular trajectory described by the object. Its formulation is given by the following equation:

$$f_c = m \cdot a_c = \frac{m v^2}{\|r\|}, \qquad (2)$$

where fc is the centripetal force, m is the mass, ac is the centripetal acceleration, v is the tangential speed and ‖r‖ is the radius of the circular trajectory.
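To make Eqs. (1) and (2) concrete, the short sketch below evaluates both forces for a single pixel treated as a particle. The function names, the NumPy dependency and the example values are ours, not part of the original paper.

```python
import numpy as np

def gravitational_force(G, M, m, pos, centre):
    """Vector force exerted on a particle of mass m at `pos` by a mass M at `centre`, Eq. (1)."""
    r = np.asarray(centre, dtype=float) - np.asarray(pos, dtype=float)
    dist = np.linalg.norm(r)
    if dist == 0.0:
        return np.zeros_like(r)
    return G * M * m / dist**2 * (r / dist)

def centripetal_force(m, v, radius):
    """Magnitude of the centripetal force for tangential speed v and trajectory radius, Eq. (2)."""
    return m * v**2 / radius

# Example: a pixel of intensity 128 at (0, 0) attracted by a central mass M = 500 at (100, 100)
f_a = gravitational_force(G=1.0, M=500.0, m=128.0, pos=(0, 0), centre=(100, 100))
f_c = centripetal_force(m=128.0, v=1.0, radius=float(np.hypot(100, 100)))
```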

We can derive the centripetal acceleration ac from Fig. 2(a). This figure shows a particle at two different moments: (i) the particle at point p1 and time ti with velocity vi, and (ii) the same particle at point p2 and time tj with velocity vj. If we assume that ‖vi‖ = ‖vj‖ = ‖v‖, i.e., the velocities differ only in direction, the magnitude of the acceleration can be defined as follows:

$$\|a_c\| = \frac{\|v_j - v_i\|}{t_j - t_i} = \frac{\|\Delta v\|}{\Delta t}. \qquad (3)$$

As the two triangles in Fig. 2 are similar (both are isosceles and have the same angle Δθ), we can establish the following relationship:

$$\frac{\|\Delta v\|}{\|v\|} = \frac{\Delta r_d}{r_d}. \qquad (4)$$


If the equation above is solved for ‖Δv‖ and the expression substituted into ‖ac‖ = ‖Δv‖/Δt (Eq. (3)), we obtain

$$\|a_c\| = \frac{\|v\| \, \Delta r_d}{r_d \, \Delta t}. \qquad (5)$$

Then, we consider that points p1 and p2 are extremely close. This causes Δv to point toward the centre of the circular path. Since the acceleration is in the direction of Δv, it also points toward the centre of the trajectory. Furthermore, as points p1 and p2 approach each other, Δt approaches 0 and the ratio Δrd/Δt approaches the speed (the magnitude of the velocity v). Finally, when Δt → 0, the magnitude of the centripetal acceleration is

$$a_c = \frac{v^2}{\|r\|}. \qquad (6)$$

In order to apply these concepts to a texture image, some considerations are necessary. A grey-scale texture is commonly described in the literature as a two-dimensional structure of pixels. So, let I(x, y) = 0, ..., 255 (x = 1, ..., Nx and y = 1, ..., Ny) be a pixel in an image I, where x and y are the Cartesian coordinates of the pixel. Each pixel I(x, y) is associated with an integer value that represents its grey-scale value.

We consider each pixel I(x, y) as a particle in the gravitational system. Each pixel has a mass m, which is the intensity associated with that pixel. In a real gravitational system, all particles interact with each other. This approach has a high computational cost, so we consider only the interaction between each pixel and an object of mass M. We place this object at the centre of the texture image, i.e., at coordinates (⌈Nx/2⌉, ⌈Ny/2⌉); its mass has no relation to the grey-level intensities of the image.

We established for each pixel a gravitational force according to Eq. (1), where mass m1 is replaced by M and m2 by I(x, y) (i.e., the grey level of the pixel), and a centripetal force according to Eq. (2), where the pixel intensity replaces m and determines the tangential speed. To determine this speed, we have to take into account that a very low speed causes a very fast collapse and, therefore, information loss, while high speeds imply no collapse.

Then, to find a range of tangential speeds for which all pixels collapse slowly, we set fa = fc. This yields the highest tangential speed, as described by the following equation:

$$v_{max} = \sqrt{\frac{G M}{r_{max}}}, \qquad (7)$$

where vmax is the highest pixel speed and rmax is the greatest distance between a pixel and the image centre (mass M).

Fig. 3 Example of the simplified gravitational model where a pixel p collapses. The new position of the pixel is defined by the distances D1 and D2

With this speed we ensure that even the pixel farthest from the image centre will collapse, that is, all pixels will gradually approach the image centre.

To extract information regarding both distance and grey-level intensity, each pixel has its speed determined according to the following equation:

$$v_{pix} = \left(1 + \frac{I(x, y)}{255}\right) \frac{v_{max}}{2}, \qquad (8)$$

where vpix is the tangential speed of the pixel and I(x, y) is its grey level. In this way, each pixel has a particular trajectory defined by its distance and its intensity, giving each image its own signature.
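A minimal sketch of Eqs. (7) and (8) follows, assuming a 200 × 200 image and the parameters G = 1 and M = 500 used later in the experiments; the helper names are ours.

```python
import numpy as np

def max_tangential_speed(G, M, r_max):
    """Highest tangential speed that still lets the farthest pixel collapse, Eq. (7)."""
    return np.sqrt(G * M / r_max)

def pixel_speed(intensity, v_max):
    """Tangential speed assigned to a pixel from its grey level, Eq. (8)."""
    return (1.0 + intensity / 255.0) * v_max / 2.0

r_max = float(np.hypot(100, 100))        # farthest pixel from the centre of a 200 x 200 image
v_max = max_tangential_speed(1.0, 500.0, r_max)
v_dark, v_bright = pixel_speed(0, v_max), pixel_speed(255, v_max)   # v_max/2 and v_max
```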

By using these rules, we give two types of movement to each pixel. The first movement is circular, anticlockwise and of constant speed. It is defined by D1 = vpix · t, where D1 is the distance covered by the pixel in a time t. We compute the new pixel position by rotating the pixel around the image centre by an angle corresponding to the fraction D1/(2πr) of a full turn, where r is the distance between the pixel and the centre of the image. The second movement is rectilinear, accelerated and directed to the centre of the image. It is defined by D2 = (1/2) · apix · t², where apix is the acceleration of the pixel toward the image centre, given by

$$a_{pix} = \begin{cases} \|f_a - f_c\| / I(x, y), & \text{if } I(x, y) \neq 0 \\ 0, & \text{if } I(x, y) = 0 \end{cases} \qquad (9)$$

and D2 indicates the distance covered by the pixel in a time t. To compute this new position, we decrement (or increment) the x and y coordinates using the proportion D2/r. Figure 3 shows the movement of a pixel for a given time t.

When we apply this gravitational model to texture images, two or more pixels may eventually try to occupy the same position during the collapse process. When this happens, the shared position receives the average of the pixels' grey levels. This adaptation aims to reduce the complexity of the method while preserving image information.
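The sketch below implements one collapsing stage as we read Sect. 2: every pixel rotates anticlockwise along an arc D1 = vpix · t and falls a distance D2 = apix · t²/2 towards the centre, and pixels landing on the same position are averaged. It is a plain NumPy illustration of our interpretation, not the authors' reference code; in particular, leaving unoccupied positions as gaps of value 0 is an assumption.

```python
import numpy as np

def collapse_step(img, t, G=1.0, M=500.0):
    """One stage of the simplified gravitational collapse (a sketch of Sect. 2)."""
    img = np.asarray(img, dtype=float)
    ny, nx = img.shape
    centre = np.array([nx // 2, ny // 2], dtype=float)            # position of the central mass M
    corners = np.array([[0, 0], [nx - 1, 0], [0, ny - 1], [nx - 1, ny - 1]], dtype=float)
    r_max = np.linalg.norm(corners - centre, axis=1).max()        # farthest pixel from the centre
    v_max = np.sqrt(G * M / r_max)                                # Eq. (7)

    acc = np.zeros_like(img)                                      # accumulated intensities
    cnt = np.zeros_like(img)                                      # pixels landed on each position
    for y in range(ny):
        for x in range(nx):
            m = img[y, x]
            p = np.array([x, y], dtype=float) - centre
            r = np.linalg.norm(p)
            if r == 0.0:                                          # the central pixel does not move
                acc[y, x] += m
                cnt[y, x] += 1
                continue
            v_pix = (1.0 + m / 255.0) * v_max / 2.0               # Eq. (8)
            # movement 1: anticlockwise rotation along an arc D1 = v_pix * t,
            # i.e. an angle of D1 / r radians (the fraction D1 / (2*pi*r) of a full turn)
            ang = v_pix * t / r
            rot = np.array([[np.cos(ang), -np.sin(ang)],
                            [np.sin(ang),  np.cos(ang)]])
            p = rot @ p
            # movement 2: accelerated fall towards the centre, Eq. (9)
            a_pix = abs(G * M / r**2 - v_pix**2 / r) if m != 0 else 0.0
            d2 = 0.5 * a_pix * t**2
            p *= max(1.0 - d2 / r, 0.0)                           # shrink the radius by D2
            xn, yn = np.rint(p + centre).astype(int)
            if 0 <= xn < nx and 0 <= yn < ny:
                acc[yn, xn] += m                                  # shared positions averaged below
                cnt[yn, xn] += 1
    return np.where(cnt > 0, acc / np.maximum(cnt, 1), 0.0)
```

Applying collapse_step(img, t) for a few increasing values of t produces the collapsing states that are measured in the next section.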


3 Signature for Collapsing Texture Patterns

This section presents an approach to extract a feasible texture signature. To achieve this, we use the proposed collapsing model and a traditional texture descriptor, the lacunarity. Mandelbrot [12] introduced the concept of lacunarity in order to characterize different texture patterns that present the same fractal dimension. This method was initially proposed for binary texture patterns and describes the texture according to the distribution of gaps dispersed over it. The literature considers lacunarity a multi-scale measure of texture heterogeneity, since it depends on the gap size being measured [13].

Due to its simplicity, the gliding-box algorithm is often used to compute the lacunarity [13, 14]. Basically, the method consists of gliding a box of size r over the texture pattern and counting the number of gaps that exist in the binary pattern. Later, different extensions were proposed to deal with greyscale images [15, 16]. Instead of simply counting the number of gaps, these approaches compute the minimum u and maximum v pixel values inside the box. According to these values, a column of more than one box may be needed to cover the image intensity range. The relative height of this column is defined as S = ⌈v/r⌉ − ⌈u/r⌉. By considering each possible box position in the image, we compute the probability distribution Q(S, r) of the relative height for a box size r. Then, the lacunarity is obtained as

$$\Lambda(r) = \frac{\sum S^2 \cdot Q(S, r)}{\left[\sum S \cdot Q(S, r)\right]^2}. \qquad (10)$$
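A sketch of this greyscale gliding-box lacunarity is given below; the ceiling-based relative height follows the definition above, while the naive double loop (rather than a vectorized scan) is just for clarity.

```python
import numpy as np

def lacunarity(img, r):
    """Gliding-box lacunarity of a greyscale image for box size r, Eq. (10)."""
    img = np.asarray(img, dtype=float)
    ny, nx = img.shape
    heights = []
    for y in range(ny - r + 1):                       # glide an r x r box over every position
        for x in range(nx - r + 1):
            box = img[y:y + r, x:x + r]
            u, v = box.min(), box.max()
            heights.append(np.ceil(v / r) - np.ceil(u / r))   # relative column height S
    s = np.asarray(heights)
    values, counts = np.unique(s, return_counts=True)
    q = counts / counts.sum()                          # probability distribution Q(S, r)
    first = np.sum(values * q)                         # first moment of S
    second = np.sum(values ** 2 * q)                   # second moment of S
    return second / first**2 if first > 0 else 0.0     # Lambda(r); flat degenerate case returns 0
```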

During the gravitational collapse, the texture pattern changes, and so does its roughness; consequently, its lacunarity differs for each collapsing time step t, i.e., for each texture image that represents a single stage of the collapsing process. Thus, the collapsing process enables us to characterize a texture pattern through the variations of its lacunarity value. In this way, we propose a feature vector that represents the texture pattern in different collapsing time steps t by a set of lacunarity values computed for a given box size r:

$$\psi^{t_1, t_2, \ldots, t_M}(r) = \left[\Lambda_{t_1}(r), \Lambda_{t_2}(r), \ldots, \Lambda_{t_M}(r)\right]. \qquad (11)$$

We must emphasize that the lacunarity is a multi-scale measure, i.e., it depends on the box size r used in its calculation [13]. Thus, it is convenient to consider a second feature vector that exploits this characteristic. Therefore, we propose a second feature vector that analyzes the collapsing texture using different lacunarity values. This is accomplished by concatenating the signatures calculated using ψ^{t1,t2,...,tM}(r) for different r values:

$$\varphi(r_{max}) = \left[\psi^{t_1, t_2, \ldots, t_M}(2), \ldots, \psi^{t_1, t_2, \ldots, t_M}(r_{max})\right], \qquad (12)$$

where rmax is the maximum box size allowed.
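Putting the pieces together, the signature φ(rmax) of Eq. (12) can be sketched as below, reusing the collapse_step() and lacunarity() sketches given earlier. We read each time step t as a single application of the motion equations to the original image (an iterative reading is also possible), and the defaults are the parameters reported later as best for the Brodatz database.

```python
import numpy as np

def texture_signature(img, time_steps=(1, 6, 12, 18), r_max=11, G=1.0, M=500.0):
    """Feature vector phi(r_max) of Eq. (12): lacunarity of every collapsing state
    (one per time step) for every box size r = 2 .. r_max."""
    states = [collapse_step(img, t, G, M) for t in time_steps]    # collapsing states of the texture
    features = []
    for r in range(2, r_max + 1):
        psi = [lacunarity(state, r) for state in states]          # psi^{t1,...,tM}(r) of Eq. (11)
        features.extend(psi)
    return np.asarray(features)
```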

4 Experiment

To evaluate the proposed approach, we computed the proposed signature for different configurations and used it in a texture analysis context. We used two image databases in this evaluation. The first is the Brodatz texture album [17], which consists of a set of 400 texture images (40 classes, with 10 samples each) extracted from the book of Brodatz. Each sample is 200 × 200 pixels with 256 grey levels. Figure 4 shows one example of each texture class.

The second database, presented in [18], is a harder benchmark, consisting of a collection of 1,000 greyscale texture images (25 classes, with 40 samples each) taken from different viewpoints and presenting large scale changes, perspective distortions, and non-rigid transformations. Because the original images are 640 × 480 pixels, we cropped a 200 × 200 window from the upper-left corner of each image in order to keep consistency with the Brodatz database and reduce the processing time. Figure 5 shows one example of each texture class.

To evaluate the proposed method, we used Linear Discriminant Analysis (LDA) in a leave-one-out cross-validation scheme [19].
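For reference, a leave-one-out LDA evaluation of such signatures can be run with scikit-learn as sketched below; the array shapes and labels are placeholders, not the actual data of the experiment.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import LeaveOneOut, cross_val_score

# X holds one signature phi(r_max) per texture sample, y the class labels
# (placeholder data; in the paper X would hold 400 Brodatz or 1,000 second-database samples).
rng = np.random.default_rng(0)
X = rng.normal(size=(40, 10))
y = np.repeat(np.arange(4), 10)

scores = cross_val_score(LinearDiscriminantAnalysis(), X, y, cv=LeaveOneOut())
success_rate = 100.0 * scores.mean()        # percentage of samples correctly classified
```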

To provide a more robust evaluation of the proposed method, we also included a comparison with traditional texture analysis methods. The following methods were considered:

Fourier Descriptors [20]: A Fourier transform is applied to the image. Next, a shifting operation is applied to the image spectrum. A total of 99 descriptors is obtained from the shifted spectrum. Each descriptor is the sum of the absolute values of the coefficients placed at the same radial distance from the centre. The parameters of this method were obtained empirically.
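A minimal sketch of such radial Fourier descriptors follows; the integer binning of the radii and the dropping of the DC term are our assumptions.

```python
import numpy as np

def fourier_descriptors(img, n_desc=99):
    """Sum of the shifted magnitude spectrum over rings of equal integer radius."""
    spec = np.abs(np.fft.fftshift(np.fft.fft2(img)))
    ny, nx = spec.shape
    yy, xx = np.indices((ny, nx))
    radius = np.hypot(yy - ny // 2, xx - nx // 2).astype(int)
    ring_sums = np.bincount(radius.ravel(), weights=spec.ravel())
    return ring_sums[1:n_desc + 1]          # keep the first n_desc rings, skipping the DC term
```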

Fig. 4 Example of the 40 Brodatz texture classes used in the experiment. Each image is 200 × 200 pixels with 256 grey levels

Fig. 5 Example of the 25 texture classes from the second database used in the experiment. Each image is 200 × 200 pixels with 256 grey levels

Co-occurrence Matrices: They represent joint probability distributions between pairs of pixels placed at a given distance d and direction θ, computed for an input image. Distances of 1 and 2 pixels with angles of −45°, 0°, 45° and 90°, in a non-symmetric version, were used. For each resulting co-occurrence matrix, we compute energy and entropy descriptors to compose the feature vector, totalling 16 attributes. We chose these parameters based on Haralick's classical paper [1].
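A possible realization of these 16 co-occurrence attributes with scikit-image is sketched below (energy and entropy computed directly from each normalized matrix); treat it as an approximation of the setup above rather than the authors' exact implementation.

```python
import numpy as np
from skimage.feature import graycomatrix

def cooccurrence_features(img):
    """Energy and entropy of non-symmetric co-occurrence matrices for distances 1 and 2
    and angles -45, 0, 45 and 90 degrees (16 attributes); `img` is an 8-bit greyscale image."""
    angles = [-np.pi / 4, 0.0, np.pi / 4, np.pi / 2]
    glcm = graycomatrix(img, distances=[1, 2], angles=angles,
                        levels=256, symmetric=False, normed=True)
    feats = []
    for d in range(glcm.shape[2]):
        for a in range(glcm.shape[3]):
            p = glcm[:, :, d, a]
            feats.append(np.sum(p ** 2))                               # energy
            feats.append(-np.sum(p[p > 0] * np.log2(p[p > 0])))        # entropy
    return np.asarray(feats)
```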

Gabor Filters [21]: Each Gabor filter consists of a two-dimensional Gaussian function modulated by a sinusoid oriented at a given frequency and direction. We compute energy descriptors from the convolution of each Gabor filter with the input image. We used the mathematical rules presented in [22] to compose a total of 16 filters (4 rotations and 4 scales), with frequencies ranging from 0.01 to 0.30, totalling a feature vector of 16 attributes. The parameters used were obtained empirically.
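The Gabor energies can be approximated with scikit-image as below; the simple 4 × 4 grid of frequencies and orientations is our stand-in for the filter bank construction of [22].

```python
import numpy as np
from skimage.filters import gabor

def gabor_energies(img, frequencies=(0.01, 0.1, 0.2, 0.3), n_orientations=4):
    """Energy of the responses of a 4-frequency x 4-orientation Gabor bank (16 attributes)."""
    feats = []
    for f in frequencies:
        for k in range(n_orientations):
            real, imag = gabor(img, frequency=f, theta=k * np.pi / n_orientations)
            feats.append(np.sum(real.astype(float) ** 2 + imag.astype(float) ** 2))
    return np.asarray(feats)
```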

Tourist Walk: this approach considers each pixel as a city and the distance between two neighbouring pixels (cities) as the modulus of the difference of their grey-level intensities. A tourist starts from a given city and moves according to the following rule: go to the nearest (or farthest) neighbouring city that has not been visited in the last μ steps (μ is the tourist memory of how many cities it has visited). These rules create complex paths, which give an image its signature. A feature vector of 48 attributes was used. More details on this method and on the parameter values used can be found in [8].
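The walking rule itself can be sketched in a few lines; the full signature of [8] additionally measures transient and attractor sizes over many walks, which is omitted here.

```python
import numpy as np

def tourist_walk(img, start, mu=1, max_steps=1000):
    """Path of a single deterministic tourist: from the current pixel, move to the
    8-connected neighbour with the smallest grey-level difference that was not
    visited in the last `mu` steps (a sketch of the rule, not the full descriptor)."""
    img = np.asarray(img, dtype=float)
    ny, nx = img.shape
    path = [tuple(start)]
    for _ in range(max_steps):
        y, x = path[-1]
        forbidden = set(path[-mu:])                     # memory window of the last mu cities
        best, best_d = None, np.inf
        for dy in (-1, 0, 1):
            for dx in (-1, 0, 1):
                yn, xn = y + dy, x + dx
                if (dy, dx) == (0, 0) or not (0 <= yn < ny and 0 <= xn < nx):
                    continue
                if (yn, xn) in forbidden:
                    continue
                d = abs(img[yn, xn] - img[y, x])
                if d < best_d:
                    best, best_d = (yn, xn), d
        if best is None:                                # no allowed neighbour: the walk stops
            break
        path.append(best)
    return path
```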

Fig. 6 Lacunarity estimated for a texture sample using time steps t = 1, . . . , 20 and box sizes r = {2, 3, 4, 5, 6}

Wavelet Descriptors: in the experiments, we used the multiresolution decomposition of the 2D wavelet transform. We computed three dyadic decompositions using Daubechies 4 wavelets for each input image. Then, we computed energy and entropy from the horizontal, diagonal and vertical wavelet details, totalling 18 features [23–25].
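These 18 wavelet features can be reproduced approximately with PyWavelets as below; the exact energy and entropy definitions used in [23–25] may differ, so the normalization here is our choice.

```python
import numpy as np
import pywt

def wavelet_features(img):
    """Energy and entropy of the detail sub-bands of a 3-level Daubechies-4
    decomposition (3 levels x 3 orientations x 2 measures = 18 features)."""
    coeffs = pywt.wavedec2(np.asarray(img, dtype=float), 'db4', level=3)
    feats = []
    for cH, cV, cD in coeffs[1:]:                      # skip the approximation sub-band
        for detail in (cH, cV, cD):
            energy = np.sum(detail ** 2)
            p = detail ** 2 / energy if energy > 0 else np.zeros_like(detail)
            entropy = -np.sum(p[p > 0] * np.log2(p[p > 0]))
            feats.extend([energy, entropy])
    return np.asarray(feats)
```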

4.1 Results

To apply the method to the image databases previously described, some fundamental parameters need to be set. These parameters are the gravitational constant G, the maximum box size rmax, the time steps t, and the mass M. We started by setting the gravitational constant to G = 1. We chose this value because we are not interested in limiting the step that a particle can take during the collapsing process.

The next step was to use different sets of t values, along with specific rmax and M values, to compose the feature vector ϕ. Then, we used this feature vector to characterize the Brodatz samples in the proposed experiment. Among the various configurations evaluated on the Brodatz database, we obtained the best result (97 % of the samples correctly classified in their respective classes) when we used t = {1, 6, 12, 18}, rmax = 11 and M = 500 [10]. Next, we explain why these parameter values provided the best success rate.

We start by analyzing the behaviour of the estimated lacunarity for different box sizes r as an image collapses (Fig. 6). We note that different r values lead to different estimates of the lacunarity. However, for a given box size r, the changes in the lacunarity value Λ(r) are subtle as the collapsing time t increases. As a consequence, a feature vector ψ built using sequential time steps would contain a large amount of redundant information, which could compromise the ability of the method to describe texture patterns. This situation can be avoided by using non-sequential values of t, which explains the best set of t found. Moreover, the way the estimated lacunarity changes as t increases is highly dependent on the texture pattern, which also indicates that the use of different t values may increase the performance of the method.

Still, it is important to remember that lacunarity is considered a multi-scale measure. This is evident in Fig. 6, where the lacunarity Λ(r) is different for each box size r considered. Nevertheless, the lacunarity values computed for two sequential box sizes tend to become very close as r increases.

For r ≥ 11, the lacunarity values are very similar. The inclusion of these similar lacunarity values in the proposed feature vector would increase the amount of redundant information. This could, again, compromise the ability of the method to describe texture patterns. To avoid this situation, we opted for rmax = 11. This is evident in Fig. 7, where we computed the lacunarity values for a single texture pattern using different box sizes r. Thus, we propose a feature vector that represents the texture pattern in different collapsing time steps t by a set of lacunarity values computed up to a given maximum box size rmax. Using this approach, we reduce the presence of redundant information.

Fig. 7 Lacunarity values computed for a single texture pattern using different box sizes r

Next, we analyzed which value would be the most appropriate for the mass M. This parameter can be understood as a massive black hole at the centre of the image. Thus, this mass should be capable of attracting even the farther and darker pixels of the image according to Eq. (1), where m1 is replaced by M and m2 by the pixel intensity. Moreover, this mass speeds up the collapsing process, as can be seen in Fig. 8, which shows the same texture pattern after 20 time steps of the collapsing process using different values of mass M. This is due to the fact that a larger mass increases the distance that a pixel can move in a single step. Figure 9 shows the distance that a pixel placed at the corner of a 200 × 200 image can move in a single step for different values of mass M. For this evaluation, we considered three different pixel intensities: m = 0 (light grey line), m = 128 (dashed black line) and m = 255 (black line). Note that the mass M does not have as much influence on darker pixels (very small mass) as it has on lighter pixels (large mass). However, the most important point is that a large mass increases the range of distances that a given pixel can move depending on its intensity (mass m). For M = 200, a pixel can move, depending on its grey level, a distance ranging from 0.01 to 2.56 pixels, while for M = 500 this distance ranges from 0.025 to 6.4 pixels, and for M = 1,000, from 0.05 to 12.8 pixels. Based on these facts, a suitable value of M should be close to the middle of the interval 0 < M ≤ 1,000 in order to give all pixels an adequate movement, neither fast nor slow. This is the reason why M = 500 yields the best success rate on the Brodatz database.

Fig. 8 Examples of collapsing texture images after 20 time steps of the collapsing process using different values of mass M

Fig. 9 Distance that a pixel placed at the corner of a 200 × 200 image can move in a single step for different values of mass M

Table 1 Comparison results for different texture methods on both databases

Method                    Success rate (%), Brodatz   Success rate (%), 2nd database
Fourier descriptors       87.75                       35.10
Co-occurrence matrices    82.50                       41.10
Gabor filters             95.25                       44.60
Tourist walk              95.50                       48.70
Wavelet descriptors       87.50                       38.80
Proposed approach         97.00                       54.10

Having established, supported by the analysis above, the best values of t, rmax and mass M, the procedure for classifying the second database was simply to use the same configuration as for the Brodatz database.

The results in Table 1 show that our proposed approach achieved the best results on both databases. It is important to stress that, even though all methods presented a smaller success rate on the second database than on the Brodatz textures, which is due to the nature of the images and how they were acquired, the difference between the success rate of our method and that of the second-best method was significant. This result suggests that our method is suitable for different types of greyscale textures, regardless of how the images were obtained. Also, it is worth emphasizing that a comprehensive discussion of the possible limitations of the proposed method requires more experiments with textures of very different natures in order to ensure its statistical reliability.

Fig. 10 Comparison between the lacunarity method with and without the gravitational approach

In order to show that the gravitational system really improves the accuracy of the lacunarity approach, we computed a feature vector with only the lacunarity method (without the gravitational model), using the same values of r (2, 3, . . . , rmax = 11) as in the previous experiments, for each sample in both databases. We present these results in Fig. 10, where it is possible to note that our proposed approach extracts additional and relevant information from a texture pattern, thus improving the classification results. It is important to stress that the gravitational approach was fundamental to the classification of both databases, because the lacunarity method alone yields a success rate similar to those of the worst methods in both experiments (see Table 1). Moreover, in this paper we used lacunarity, while we used the fractal dimension in a previous work [26]. In principle, this approach could be combined with any texture analysis technique.

As an additional experiment, we used the gravitational system without the tangential speed in order to quantify how much this parameter improves the classification of the samples. Figure 11 shows a texture after 20 time steps of the collapsing process using different values of mass M, with and without the tangential speed, and Table 2 shows the success rate of the modified system. The results demonstrate that the gravitational system remains competitive, still obtaining the best result on the second database and the third-best result in the classification of the Brodatz samples. As stated before, the tangential speed allows us to extract information regarding both distance and grey-level intensity. Therefore, it is essential for the maximum performance of the approach proposed in this paper.


Fig. 11 Examples of collapsing texture images after 20 time steps of the collapsing process using different values of mass M. First row: without tangential speed; second row: with tangential speed

Table 2 Comparison of the proposed approach with and without the tangential speed

Proposed approach          Success rate (%), Brodatz   Success rate (%), 2nd database
with tangential speed      97.00                       54.10
without tangential speed   94.00                       51.80

4.2 Computational Complexity

To generate one stage of the collapsing process of a texture, it is necessary to process each pixel of the image. Considering a square image of N × N pixels, we have a total of N² pixels to be processed. For each generated image, it is necessary to compute its lacunarity. For a given box size r, the computational cost of the lacunarity is (N − r + 1)² × r². Note that the greatest box size used in the experiments was r = 11, a very small value compared to the image size N = 200 used. Thus, we can consider the computational cost of the lacunarity to be O(N²). Finally, the computational cost of the proposed method can be estimated as O(T · N² + R · N²), where T is the number of stages of the collapsing process of a given original image and R is the total number of lacunarity values calculated for each collapsing stage.

The time complexity of our proposed approach is the same as that of co-occurrence methods (differing only in constant factors), and is better than the time complexity of Gabor filters and Fourier descriptors (O(N² log N), due to the Fourier transform). However, it is important to emphasize that it has a relevant constant factor.

5 Conclusion

This work presents a novel method to extract signatures from textures by transforming them into a simplified gravitational system whose collapsing states are explored by the lacunarity method. This novel approach showed results superior to those yielded by classical methods when tested on two well-established texture benchmarks. These results can be explained by the fact that the gravitational system creates new relations among pixels that were separated in the original image. These new relations allow new texture configurations and, therefore, extend the capacity of the lacunarity method to extract texture information. Also, the results showed that the method performs best when using spaced values of t, a suitable mass M, and a set of r values [2, 3, . . . , rmax]. Thus, the proposed method opens a promising research avenue in texture analysis, expanding the set of methods for identifying textures.

References

1. Haralick, R.M.: Statistical and structural approaches to texture. Proc. IEEE 67(5), 786–804 (1979)

2. Bala, J.W.: Combining structural and statistical features in a machine learning technique for texture classification. In: IEA/AIE, vol. 1, pp. 175–183 (1990)

3. Tuceryan, M., Jain, A.K.: Texture analysis. In: Chen, C.H., Pau, L.F., Wang, P.S.P. (eds.) Handbook of Pattern Recognition and Computer Vision, pp. 235–276. World Scientific, Singapore (1993)

4. Casanova, D., Sá Junior, J.J.M., Bruno, O.M.: Plant leaf identification using Gabor wavelets. Int. J. Imaging Syst. Technol. 19(1), 236–243 (2009)


5. Lu, C.S., Chung, P.C., Chen, C.F.: Unsupervised texture segmentation via wavelet transform. Pattern Recognit. 30(5), 729–742 (1997)

6. Arivazhagan, S., Ganesan, L.: Texture classification using wavelet transform. Pattern Recognit. Lett. 24(9–10), 1513–1521 (2003)

7. Costa, L.F., Rodrigues, F.A., Travieso, G., Villas Boas, P.R.: Characterization of complex networks: a survey of measurements. Adv. Phys. 56(1) (2005). arXiv:cond-mat/0505185

8. Backes, A.R., Gonçalves, W.N., Martinez, A.S., Bruno, O.M.: Texture analysis and classification using deterministic tourist walk. Pattern Recognit. 43, 685–694 (2010)

9. Varma, M., Garg, R.: Locally invariant fractal features for statistical texture classification. In: International Conference on Computer Vision—ICCV, pp. 1–8 (2007)

10. Sá Junior, J.J.M., Backes, A.R.: A simplified gravitational model for texture analysis. In: Proceedings of the Computer Analysis of Images and Patterns—14th International Conference, CAIP 2011, Part I, Seville, Spain, August 29–31, 2011. Lecture Notes in Computer Science, vol. 6854, pp. 26–33. Springer, Berlin (2011)

11. Newton, I.: Philosophiae Naturalis Principia Mathematica. University of California (1999). Original 1687, translation guided by I.B. Cohen

12. Mandelbrot, B.: The Fractal Geometry of Nature. Freeman, San Francisco (1982)

13. Allain, C., Cloitre, M.: Characterizing the lacunarity of random and deterministic fractal sets. Phys. Rev. A 44(6), 3552–3558 (1991)

14. Facon, J., Menoti, D., de Albuquerque Araújo, A.: Lacunarity as a texture measure for address block segmentation. In: Proceedings of the Progress in Pattern Recognition, Image Analysis and Applications, 10th Iberoamerican Congress on Pattern Recognition, CIARP, Havana, Cuba, November 15–18, 2005. Lecture Notes in Computer Science, vol. 3773, pp. 112–119. Springer, Berlin (2005)

15. Dong, P.: Test of a new lacunarity estimation method for image texture analysis. Int. J. Remote Sens. 21(17), 3369–3373 (2000)

16. Du, G., Yeo, T.S.: A novel lacunarity estimation method applied to SAR image segmentation. IEEE Trans. Geosci. Remote Sens. 40(12), 2687–2691 (2002)

17. Brodatz, P.: Textures: A Photographic Album for Artists and Designers. Dover, New York (1966)

18. Lazebnik, S., Schmid, C., Ponce, J.: A sparse texture representation using local affine regions. IEEE Trans. Pattern Anal. Mach. Intell. 27(8), 1265–1278 (2005)

19. Everitt, B.S., Dunn, G.: Applied Multivariate Analysis, 2nd edn. Arnold, Sevenoaks (2001)

20. Azencott, R., Wang, J.P., Younes, L.: Texture classification using windowed Fourier filters. IEEE Trans. Pattern Anal. Mach. Intell. 19(2), 148–153 (1997)

21. Idrissa, M., Acheroy, M.: Texture classification using Gabor filters. Pattern Recognit. Lett. 23(9), 1095–1102 (2002)

22. Manjunath, B.S., Ma, W.Y.: Texture features for browsing and retrieval of image data. IEEE Trans. Pattern Anal. Mach. Intell. 18(8), 837–842 (1996)

23. Daubechies, I.: Ten Lectures on Wavelets. SIAM, Philadelphia (1992)

24. Chang, T., Kuo, C.C.J.: Texture analysis and classification with tree-structured wavelet transform. IEEE Trans. Image Process. 2(4), 429–441 (1993)

25. Jin, X., Gupta, S., Mukherjee, K., Ray, A.: Wavelet-based feature extraction using probabilistic finite state automata for pattern classification. Pattern Recognit. 44(7), 1343–1356 (2011)

26. Sá Junior, J.J.M., Backes, A.R.: A simplified gravitational model to analyze texture roughness. Pattern Recognit. 45(2), 732–741 (2012)

Jarbas Joaci de Mesquita Sá Jr. is a Ph.D. student at the Department of Teleinformatics Engineering at the Federal University of Ceará, in Brazil, and is a professor at the Computer Engineering Department at the same institution, Campus Sobral. He received his M.Sc. (2008) in Computer Science at the University of São Paulo. His fields of interest include Computer Vision, Image Analysis and Pattern Recognition.

André Ricardo Backes is a professor at the College of Computing at the Federal University of Uberlândia in Brazil. He received his B.Sc. (2003), M.Sc. (2006), and Ph.D. (2010) in Computer Science at the University of São Paulo. His fields of interest include Computer Vision, Image Analysis and Pattern Recognition.

Paulo César Cortez is a professor at the Department of Teleinformatics Engineering at the Federal University of Ceará in Brazil. He received his B.Sc. (1982) in Electrical Engineering at the Federal University of Ceará, and his M.Sc. (1992) and Ph.D. (1996) in Electrical Engineering at the Federal University of Paraíba. His fields of interest include Image and Signal Analysis.

