
J Math Imaging Vis (2012) 42:150–162 · DOI 10.1007/s10851-011-0302-2

Lattice Algebra Approach to Color Image Segmentation

Gonzalo Urcid · Juan-Carlos Valdiviezo-N. · Gerhard X. Ritter

Published online: 9 June 2011. © Springer Science+Business Media, LLC 2011

Abstract This manuscript describes a new technique for segmenting color images in different color spaces based on geometrical properties of lattice auto-associative memories. Lattice associative memories are artificial neural networks able to store a finite set X of n-dimensional vectors and recall them when a noisy or incomplete input vector is presented. The canonical lattice auto-associative memories include the min memory WXX and the max memory MXX, both defined as square matrices of size n × n. The column vectors of WXX and MXX, scaled additively by the components of the minimum and maximum vector bounds of X, are used to determine a set of extreme points whose convex hull encloses X. Specifically, since color images form subsets of a finite geometrical space, the scaled column vectors of each memory will correspond to saturated color pixels. Thus, maximal tetrahedrons do exist that enclose proper subsets of pixels in X and such that other color pixels are considered as linear mixtures of extreme points determined from the scaled versions of WXX and MXX. We provide illustrative examples to demonstrate the effectiveness of our method, including comparisons with alternative segmentation methods from the literature as well as color separation results in four different color spaces.

Keywords Color image segmentation · Color spaces · Convex sets · Lattice auto-associative memories · Linear mixing model · Pixel based segmentation · Unsupervised clustering

G. Urcid (corresponding author) · J.-C. Valdiviezo-N., Optics Department, INAOE, Tonantzintla, Pue 72000, Mexico. e-mail: [email protected]

G.X. Ritter, CISE Department, University of Florida, Gainesville, FL 32611-6120, USA. e-mail: [email protected]

1 Introduction

In several image processing and analysis applications, image segmentation is a preliminary step in the description and representation of regions of interest [1–4]. Segmentation techniques, first developed for grayscale images [5–8], have been extended, enhanced, or changed to deal efficiently with color images coded in different color spaces, as explained next.

Color image segmentation has been approached from several perspectives that currently are categorized as pixel, area, edge, and physics based segmentation, for which early compendiums appeared in [9, 10]. State-of-the-art surveys are given in [11, 12]. For example, pixel based segmentation includes histogram techniques and cluster analysis in color spaces. Optimal thresholding [13] and the use of a perceptually uniform color space [14] are examples of histogram based techniques. Area based segmentation contemplates region growing as well as split-and-merge techniques, whereas edge based segmentation embodies local methods and extensions of the morphological watershed transformation. This transformation and the flat zone approach to color image segmentation were originally developed, respectively, in [15] and [16]. A seminal work employing Markov random fields for splitting and merging color regions was proposed in [17]. Other recent developments contemplate the fusion of various segmentation techniques, such as the application of morphological closing and adaptive dilation to color histogram thresholding [18] or the use of the watershed algorithm for color clustering with Markovian labeling [19]. Physics based segmentation relies on adequate reflection models of material objects such as inhomogeneous dielectrics, plastics, or metals [20, 21]. Nevertheless, its applicability has been limited to finding changes in materials whose reflection properties are well studied and modeled properly.


Recently, soft computing techniques [22] or fuzzy principal component analysis coupled with clustering based on recursive one-dimensional histogram analysis [23] suggest alternative ways to segment a color image. In order to quantify the results obtained from different segmentation schemes, the subject of color image segmentation evaluation has been briefly exposed in [24]. Basic treatment of image segmentation performed in both Hue-Saturation-Intensity (HSI) and RGB color spaces is given in [25, 26]; for a more complete and systematic exposition of color image segmentation methods see [27] or [28]. Also, from the standpoint of lattice algebra, [29, 30] are recent efforts related to the unification of lattice theory based image processing, computational intelligence, modeling, and knowledge representation.

In this paper we present a lattice algebra based technique for image segmentation applied to RGB (Red-Green-Blue) color images transformed to other representative systems, such as the HSI (Hue-Saturation-Intensity), the I1I2I3 (principal components approximation), and the L*a*b* (Luminance–redness/greenness–yellowness/blueness) color spaces. The proposed method relies on the min WXX and max MXX lattice auto-associative memories (LAAMs), where X is the set formed by 3D pixel vectors or colors. The scaled column vectors of any memory together with the minimum or maximum bounds of X may form the vertices of tetrahedra enclosing subsets of X, and will correspond to the most saturated color pixels in the image. Image partition into regions of similar colors is realized by linearly unmixing pixels belonging to tetrahedra determined by the columns of the scaled lattice auto-associative memories W and M, and then by thresholding and scaling pixel color fractions obtained numerically by applying a least squares method, such as the linear least squares (LLS) method, also known as generalized matrix inversion [31], or the non-negative least squares (NNLS) method [32]. In the final step segmentation results are displayed as grayscale images. The lattice algebra approach to color image segmentation can be categorized as a pixel based unsupervised technique. Preliminary research and computational experiments on the proposed method for segmenting color images were reported in [33, 34].

The paper is organized as follows: Sect. 2 presents background material on image segmentation and a general overview of minimax algebra and lattice auto-associative memories; Sect. 3 develops in some detail the segmentation technique based on the scaled column vectors of LAAMs and briefly describes the linear mixing model used to determine the color fractions composing any pixel vector in the input image. Illustrative examples using synthetic and real images are provided to establish how the proposed method works and how it compares in computational effort, for example, with the c-means and fuzzy c-means clustering techniques. In Sect. 4, we show other segmentation results for additional images represented in the color spaces listed above. Finally, Sect. 5 gives the conclusions and some pertinent comments concerning this research.

2 Mathematical Background

2.1 Image Segmentation

Although there are several approaches to segment a color image, as briefly described in the Introduction, a mathematical description of the segmentation process, common to all approaches, can be given using set theory [1, 3, 4, 25]. In this framework, to segment an image is to divide it into a finite set of disjoint regions whose pixels share well-defined attributes. We recall from basic set theory that a partition of a set is a family of pairwise disjoint subsets covering it. Mathematically, we have

Definition 1 Let X be a finite set with k elements. A partition of X is a family P = {Ri} of subsets of X, each with ki elements for i = 1, . . . , q, that satisfy the following conditions: 1) Ri ∩ Rj = ∅ for i ≠ j (pairwise disjoint subsets) and 2) ⋃_{i=1}^{q} Ri = X, where ∑_{i=1}^{q} ki = k (whole set covering).

Note that the only attribute shared between any two elements of X with respect to a given partition P is their membership to a single subset Ri of X. Unfortunately, the simple attribute of sharing the same membership is not enough to distinguish or separate objects of interest in a given image. Therefore, Definition 1 must be enriched by imposing other conditions required for image segmentation. Additional attributes shared between pixels (elements of X) can be, for example, spatial contiguity, similar intensity or color, and type of connectedness. All or some of these quantifiable attributes can be gathered into a single uniformity criterion specified by a logical predicate. A mathematical statement of our intuitive notion of segmentation follows next.

Definition 2 Let X be a finite set with k elements. A segmentation of X is a pair ({Ri}, p) composed of a family {Ri} of subsets of X, each with ki elements for i = 1, . . . , q, and a logical predicate p specifying a uniformity criterion between elements of X, that satisfy the following conditions: 1) the family {Ri} is a partition P of X, 2) for any i, Ri is a connected subset of X, 3) ∀i, p(Ri) = true (elements in a single subset share the same attributes), and 4) for i ≠ j, p(Ri ∪ Rj) = false (elements in a pairwise union of subsets do not share the same attributes).


With respect to condition 2) in Definition 2, we recall that a connected subset Ri is a set where every pair of elements xs, xt ∈ Ri is connected in the sense that a sequence of elements, denoted by (xs, . . . , xr, xr+1, . . . , xt), exists such that {xr, xr+1} belong to the same spatial neighborhood and all points belong to Ri. A weaker but still useful version of condition 4) in Definition 2 requires that Ri and Rj be neighbor sets. Loosely speaking, a subset Ri ⊂ X is commonly referred to as an image region. Whether regions can be disconnected (the 2nd condition of Definition 2 is not imposed), multi-connected (with holes), should have smooth boundaries, and so forth depends on the application's domain, segmentation technique, and goals. Perceptually, the segmentation process must convey the necessary information to visually recognize or identify the prominent features contained in the image, such as color hue, brightness, or texture. Hence, adequate segmentation is essential for further description and representation of regions of interest suitable for image analysis or image understanding. We turn now to the description of some basic concepts of minimax algebra as well as some background material about lattice auto-associative memories needed for Sects. 3 and 4.

2.2 Lattice Associative Memories

The basic numerical operations of computing the maximum or minimum of two numbers, usually denoted as functions max(x, y) and min(x, y), will be written as binary operators using the "join" and "meet" symbols employed in lattice theory, i.e., x ∨ y = max(x, y) and x ∧ y = min(x, y). We use lattice matrix operations [35, 36] that are defined elementwise using the underlying structure of R−∞ or R∞ as semirings. For example, the elementwise maximum of two matrices X, Y of the same size m × n is defined as (X ∨ Y)ij = xij ∨ yij for i = 1, . . . , m and j = 1, . . . , n. Inequalities between matrices are also verified elementwise; for example, X ≤ Y if and only if xij ≤ yij. Also, the conjugate matrix X* is defined as −X^t, where X^t denotes usual matrix transposition. The max-of-sums X ∨ Y and the min-of-sums X ∧ Y of appropriately sized matrices (X of size m × p and Y of size p × n) are defined, for i = 1, . . . , m and j = 1, . . . , n, respectively, as (X ∨ Y)ij = ∨_{k=1}^{p} (xik + ykj) and (X ∧ Y)ij = ∧_{k=1}^{p} (xik + ykj). For p = 1 these lattice matrix operations reduce to the outer sum of two vectors x = (x1, . . . , xn)^t ∈ R^n and y = (y1, . . . , ym)^t ∈ R^m, given by the m × n matrix (i = 1, . . . , m and j = 1, . . . , n)

y × x^t = (yi + xj) =
( y1 + x1   y1 + x2   ···   y1 + xn )
( y2 + x1   y2 + x2   ···   y2 + xn )
(    ···       ···     ···     ···   )
( ym + x1   ym + x2   ···   ym + xn ).   (1)

Henceforth, let (x^1, y^1), . . . , (x^k, y^k) be k vector pairs with x^ξ = (x_1^ξ, . . . , x_n^ξ)^t ∈ R^n and y^ξ = (y_1^ξ, . . . , y_m^ξ)^t ∈ R^m for ξ = 1, . . . , k. For a given set of vector associations {(x^ξ, y^ξ) : ξ = 1, . . . , k} we define a pair of associated matrices (X, Y), where X = (x^1, . . . , x^k) and Y = (y^1, . . . , y^k). Thus, X is of dimension n × k with i, j-th entry x_i^j, and Y is of dimension m × k with i, j-th entry y_i^j. To store k vector pairs (x^1, y^1), . . . , (x^k, y^k) in an m × n lattice associative memory (LAM), also known as morphological associative memory (MAM), a similar approach for vector encoding is used as in a linear or correlation memory, but instead of the linear outer product, the lattice outer sum in (1) is applied. The canonical LAMs are defined as follows.

Definition 3 The min-memory WXY and the max-memory MXY, both of size m × n, that store a set of associations (X, Y) are given, respectively, by the expressions

WXY = ∧_{ξ=1}^{k} [y^ξ × (−x^ξ)^t];   wij = ∧_{ξ=1}^{k} (y_i^ξ − x_j^ξ),   (2)

MXY = ∨_{ξ=1}^{k} [y^ξ × (−x^ξ)^t];   mij = ∨_{ξ=1}^{k} (y_i^ξ − x_j^ξ).   (3)

We speak of a lattice hetero-associative memory (LHAM) if X ≠ Y and of a lattice auto-associative memory (LAAM) if X = Y.

The expressions to the left of (2) and (3) are in matrix form and the right expressions are the ij-th entries that give the network weights of the corresponding associative memory. Note that, according to (1), for each ξ, y^ξ × (−x^ξ)^t is a matrix E^ξ of size m × n that memorizes the association pair (x^ξ, y^ξ); hence WXY = ∧_{ξ=1}^{k} E^ξ and MXY = ∨_{ξ=1}^{k} E^ξ, which suggests the given names. In this paper we will use LAAMs only, i.e., WXX and MXX of size n × n, and if no confusion arises of what the set X stands for, we denote these memories by W and M, respectively. In particular, the main diagonals of both matrices, i.e., wii and mii, consist entirely of zeros. Since Y = X, X ∨ X* = ((X*)*) ∨ X* = (X ∧ X*)*, and, therefore, M = W*. Hence, the min-memory and the max-memory are dual to each other in the sense of matrix conjugation; consequently, mij = −wji.
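As a concrete illustration of Definition 3 in the auto-associative case, the short NumPy sketch below (our own illustration, not code from the paper; the helper name lattice_memories is hypothetical) builds WXX and MXX entrywise from (2)–(3) and checks the duality M = W* discussed above.

import numpy as np

def lattice_memories(X):
    """Compute the lattice auto-associative memories W_XX and M_XX.

    X : (n, k) array whose columns x^1, ..., x^k are the stored patterns.
    Returns the n x n matrices W (min memory) and M (max memory) with
    entries w_ij = min_xi (x_i^xi - x_j^xi) and m_ij = max_xi (x_i^xi - x_j^xi).
    """
    # D[xi, i, j] = x_i^xi - x_j^xi, i.e. the outer sum x^xi x (-x^xi)^t for every pattern
    D = X.T[:, :, None] - X.T[:, None, :]
    W = D.min(axis=0)   # entrywise minimum over xi, eq. (2)
    M = D.max(axis=0)   # entrywise maximum over xi, eq. (3)
    return W, M

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.integers(0, 256, size=(3, 100)).astype(float)   # 100 random RGB pixels
    W, M = lattice_memories(X)
    assert np.allclose(np.diag(W), 0) and np.allclose(np.diag(M), 0)
    assert np.allclose(M, -W.T)   # duality: m_ij = -w_ji, i.e. M = W*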

This type of non-linear associative memory, developed from a lattice algebra approach, was introduced as a new paradigm in neural computation to deal with the problem of recalling exemplar patterns from noisy binary or real valued inputs [37–39]. Later, several advancements were achieved, including theoretical foundations [40], increased recall capability [41, 42] for exemplar patterns degraded by considerable amounts of random noise, and hyperspectral imagery endmember detection [44–49].


3 LAAMs Approach to Color Image Segmentation

In this section, for illustrative purposes, we consider only images coded in the RGB color space. The first subsection gives a detailed description, in three stages, of the proposed segmentation approach. A brief comment on two fundamental clustering techniques follows in the second subsection as a framework for computational comparisons. The third subsection illustrates the segmentation results on synthetic and real RGB color images obtained by the LAAM's approach together with the results derived from the c-means and fuzzy c-means clustering techniques.

3.1 The Segmentation Process

Segmentation of a color image is performed in stages including: 1) computation of the scaled lattice auto-associative memories, 2) linear unmixing of color pixels using least squares methods, and 3) thresholding color fractions to produce color segmentation maps represented as grayscale images. These stages are explained in detail in the following paragraphs.

Given a color image A consisting of p × q pixels, we build a set X containing all different colors (3-dimensional vectors) present in A. If |X| = k denotes the number of elements in set X, then k ≤ pq = |A|, where pq is the maximum number of colors available in A. Then, using the right expressions of (2) and (3), the memory matrices min-WXX and max-MXX are computed and, to make explicit their respective column vectors, we rewrite them, respectively, as W = (w1, w2, w3) and M = (m1, m2, m3). By definition, the column vectors of W may not necessarily belong to the space [0, 255]^3 since W usually has negative entries. The general transformation given in the next definition will translate the column vectors of W within the color cube.

Definition 4 Let X = {x^1, . . . , x^k} be a finite subset of R^n. The minimum and maximum vector bounds of X are given, respectively, by v = ∧_{ξ=1}^{k} x^ξ and u = ∨_{ξ=1}^{k} x^ξ. Their corresponding entries, for i = 1, . . . , n, are computed as

vi = ∧_{ξ=1}^{k} x_i^ξ;   ui = ∨_{ξ=1}^{k} x_i^ξ.   (4)

Let W = (w1, . . . , wn) and M = (m1, . . . , mn) be the sets of column vectors of the min- and max-memories relative to X; then additive scaling results in two scaled matrices, denoted respectively W̄ and M̄ (written simply W and M in what follows when no confusion arises), whose column vectors are defined, for j = 1, . . . , n, by

w̄j = wj + uj;   m̄j = mj + vj.   (5)

Note that for j = 1, . . . , 3, w̄jj = uj and m̄jj = vj. Hence, diag(W̄) = u and diag(M̄) = v.

The first stage of the segmentation process is completed by applying (4) and (5) to X, W, and M. Continuing with the description of the proposed segmentation procedure, use is made of the underlying sets of scaled columns W = {w1, w2, w3} and M = {m1, m2, m3}, including the extreme vector bounds v and u. Note that the vectors belonging to the set W ∪ M ∪ {v, u} provide a way to determine several tetrahedra enclosing specific subsets of X such as, e.g., W ∪ {u} and M ∪ {v}.
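The bounds of (4) and the additive scaling of (5) translate directly into array operations. The sketch below, again an illustrative assumption rather than the authors' code (the helper name scaled_memories is hypothetical), computes v, u and the scaled matrices for a set of 3D color pixels.

import numpy as np

def scaled_memories(X):
    """Vector bounds (4) and additively scaled memories (5) for a pattern set X.

    X : (3, k) array of distinct color pixels (one column per color).
    Returns (W_bar, M_bar, v, u): scaled min/max memories and the
    minimum/maximum vector bounds of X.
    """
    D = X.T[:, :, None] - X.T[:, None, :]   # x_i^xi - x_j^xi, as in (2)-(3)
    W, M = D.min(axis=0), D.max(axis=0)
    v, u = X.min(axis=1), X.max(axis=1)     # eq. (4): minimum and maximum vector bounds
    W_bar = W + u[None, :]                  # eq. (5): column j of W shifted by u_j
    M_bar = M + v[None, :]                  # eq. (5): column j of M shifted by v_j
    assert np.allclose(np.diag(W_bar), u) and np.allclose(np.diag(M_bar), v)
    return W_bar, M_bar, v, u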

The second stage in the segmentation process is accomplished using concepts from convex set geometry. These concepts make it possible to mix colors in any color space. Recall that X is said to be a convex set if the straight line joining any two points in X lies completely within X; also, an n-dimensional simplex is the minimal convex set or convex hull whose n + 1 vertices (extreme points) are affinely independent vectors in R^n. Since the color cube is a subspace of R^3, a 3-dimensional simplex will correspond to a tetrahedron. Thus, considering pixel vectors in a color image enclosed by some tetrahedron, whose base face is determined by its most saturated colors, an estimation of the fractions in which they appear at any other color pixel can be made. A model commonly used for the analysis of spectral mixtures in hyperspectral images, known as the constrained linear mixing (LM) model [43], can readily be adapted to segment noiseless color images by representing each pixel vector as a convex linear combination of the most saturated colors. Its mathematical representation is given by

x = Sc = c1 s1 + c2 s2 + c3 s3,   subject to   (6)
c1, c2, c3 ≥ 0   (non-negativity),
c1 + c2 + c3 = 1   (full additivity),

where x is a 3 × 1 pixel vector, S = (s1, s2, s3) is a square matrix of size 3 × 3 whose columns are the extreme colors, and c is the 3 × 1 vector of "saturated color fractions" present in x. Notice that the most saturated colors in a given image may easily be equal to the set of primary colors (red, green, blue) or to the set of complementary colors (cyan, magenta, yellow). Therefore, the present step consists of solving (6) to find vector c, given that S = W or S = M, for every x ∈ X, a procedure known as linear unmixing. As mentioned earlier in the Introduction, to solve the constrained linear system displayed in (6), one can employ the LLS or NNLS methods, imposing the full additivity or the positivity constraint, respectively.
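A hedged sketch of this unmixing step may help. It uses SciPy's nnls routine for the non-negativity constraint and NumPy's lstsq with the variable-elimination substitution that the paper formalizes later in (11) for full additivity; the function names are our own and the code is only a plausible reading of the procedure, not the authors' implementation.

import numpy as np
from scipy.optimize import nnls

def unmix_nnls(S, x):
    """Non-negative least squares unmixing of pixel x w.r.t. endmember matrix S (3x3).
    Enforces the non-negativity constraint of (6); fractions are not forced to sum to 1."""
    c, _ = nnls(S, x)
    return c

def unmix_lls_full_additivity(S, x, q=0):
    """LLS unmixing with the full-additivity constraint c_1 + c_2 + c_3 = 1,
    eliminating fraction q as in the reduced system described later in (11)."""
    idx = [j for j in range(3) if j != q]
    Sq = S[:, idx] - S[:, [q]]            # columns s^p - s^q and s^r - s^q
    xq = x - S[:, q]                      # transformed pixel x - s^q
    cpr, *_ = np.linalg.lstsq(Sq, xq, rcond=None)
    c = np.zeros(3)
    c[idx] = cpr
    c[q] = 1.0 - cpr.sum()                # recover the eliminated fraction
    return c

For example, c = unmix_nnls(W_bar, x) would yield the fractions of a pixel x relative to the scaled min-memory columns.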

Fig. 1 1st column: test RGB color image; 1st row, 2nd to 4th cols.: grayscale images depicting segmented regions containing proportions of red (w1), green (w2), and blue (w3) colors; 2nd row, 2nd to 4th cols.: grayscale images with regions composed of cyan (m1), magenta (m2), and yellow (m3) colors. Brighter gray tones correspond to high fractions of saturated colors.

In the third and last stage of the segmentation process, once (6) is solved for every color pixel x^ξ ∈ X, all c_j^ξ fraction values are assembled to form a vector associated with the saturated color sj, and the final step is carried out by applying a threshold value, in most cases between 0.3 and 1, to obtain an adequate segmentation depicting the corresponding image partition (see Definition 2). Additional theoretical background on which the proposed method is based, as well as its application to hyperspectral imagery, appears in [47, 49].
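The following sketch (hypothetical helper name, our own reading of this third stage rather than the authors' code) assembles the per-pixel fractions into maps, applies a user threshold in the reported range, and rescales the result to the 8-bit grayscale range for display.

import numpy as np

def fraction_maps_to_grayscale(C, shape, threshold=0.3):
    """Stage 3 sketch: turn per-pixel fraction vectors into thresholded grayscale maps.

    C : (p*q, 3) array; row xi holds the fractions c_1, c_2, c_3 of pixel xi.
    shape : (p, q) spatial size of the image.
    threshold : fractions below this value are suppressed (the paper reports
                useful values roughly between 0.3 and 1).
    Returns a (3, p, q) stack of 8-bit grayscale segmentation maps, one per
    saturated color s_j.
    """
    maps = []
    for j in range(3):
        f = C[:, j].reshape(shape).copy()
        f[f < threshold] = 0.0                                        # keep only significant fractions
        maps.append(np.clip(f * 255.0, 0, 255).astype(np.uint8))      # linear scaling [0,1] -> [0,255]
    return np.stack(maps)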

3.2 A Comment on Clustering Techniques

Of the many existing approaches to image segmentation [9–12], clustering techniques such as c-means and fuzzy c-means can be applied to color images provided the number of clusters is known beforehand. When using any of these techniques, a cluster is interpreted as the mean or average color assigned to an iteratively determined subset of color pixels belonging to X. For an explanation of the basic theory and algorithmic variants concerning the c-means clustering technique cf. [50–52] and, similarly, for the fuzzy c-means clustering technique see [53, 54]. In relation to our proposed method based on LAAMs, a comparison with both clustering techniques is immediate since the maximum number of saturated colors determined from W, M, and possibly {v, u} is always 8; thus the number of clusters is bounded by the interval [1, 8]. Furthermore, since any member in the set W ∪ M ∪ {v, u} is an extreme point, we are able to select any two disjoint subsets of three column vectors to form a 3 × 3 system in order to obtain unique solutions to (6). Therefore, once a pair of triplets is fixed, the number of clusters c can be restricted to the interval [6, 8].
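One natural comparison setup, used later in Example 2, initializes c-means at the extreme points themselves. A rough sketch follows; it uses scikit-learn's KMeans as a stand-in for Matlab's kmeans, so the tooling is an assumption on our part and not the authors' setup.

import numpy as np
from sklearn.cluster import KMeans

def cmeans_with_laam_init(X, W_bar, M_bar, v):
    """Cluster the distinct colors X (k x 3, one row per color) with c-means,
    initializing 7 centroids from the scaled LAAM columns plus the minimum
    bound v (sketch only; the paper used Matlab's kmeans with 'replicates')."""
    init = np.vstack([W_bar.T, M_bar.T, v[None, :]])   # 7 initial centroids in color space
    km = KMeans(n_clusters=init.shape[0], init=init, n_init=1).fit(X)
    return km.labels_, km.cluster_centers_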

3.3 Segmentation Results and Comparisons

Example 1 (Flat color image) Figure 1 shows in the left column a test RGB color image (primary colors additive mixtures) of size 256 × 256 pixels that has only 8 different colors. Hence, X = {x^1, . . . , x^8} out of a total of 65,536 pixel vectors. The scaled lattice memory matrices and the minimum and maximum vector bounds are given by

W = ( 255    0    0 )
    (   0  255    0 )
    (   0    0  255 ),

M = (   0  255  255 )
    ( 255    0  255 )
    ( 255  255    0 ),

v = (0, 0, 0)^t,   u = (255, 255, 255)^t.   (7)

For this trivial color image, a simple algebraic analysis yields a closed solution for unmixing color pixels obeying (6). In this case we have

W^(-1) = (1/255) ( 1  0  0 )
                 ( 0  1  0 )
                 ( 0  0  1 ),

M^(-1) = (1/510) ( −1   1   1 )
                 (  1  −1   1 )
                 (  1   1  −1 ).   (8)

From (8), W^(-1) = I/255, where I is the 3 × 3 identity matrix, and considering that x_i^ξ ∈ {0, 255}, ci = xi/255 trivially verifies the inequalities 0 ≤ ci ≤ 1 for all i = 1, 2, 3 and ξ ∈ {1, . . . , 8}. Full additivity is satisfied if ∑_{i=1}^{3} ci = ∑_{i=1}^{3} xi/255 = 1; therefore color pixel values x1, x2, and x3 lie in the plane x1 + x2 + x3 = 255, which occurs only at the points (255, 0, 0), (0, 255, 0), and (0, 0, 255). However, letting s = x1 + x2 + x3, the color fractions obtained from the scaled min memory W are readily specified by the simple formula

ci = xi / s = xi / (x1 + x2 + x3)  ⇔  s ≠ 0,   (9)

otherwise, if s = 0, let ci = 0. Similarly, from the inverse matrix M^(-1) given in (8), one finds that ci = (∑_{j≠i} xj − xi)/510 for i = 1, 2, 3. However, since x_i^ξ ∈ {0, 255}, we have ci ∈ {−0.5, 0, 0.5, 1}; thus, non-negativity is not satisfied for all i. Also, full additivity is verified if ∑_{i=1}^{3} ci = ∑_{i=1}^{3} (∑_{j≠i} xj − xi)/510 = 1, implying that color pixel values x1, x2, and x3 belong to the plane x1 + x2 + x3 = 510, and this can occur only at the points (0, 255, 255), (255, 0, 255), and (255, 255, 0). Therefore, making s = x1 + x2 + x3, the color fractions obtained from the scaled max memory M are given by the formula

ci = (∑_{j≠i} xj − xi) / s = (∑_{j≠i} xj − xi) / (x1 + x2 + x3)  ⇔  s ≠ 0,   (10)

otherwise, if s = 0, then ci = 0; also, if ci = −1 (for some i), then set ci = 0 and change cj to cj/2 for j ≠ i. Table 1 displays the correspondence between pixel color values and color fractions derived from the scaled LAAMs.

Fig. 2 1st column: test RGB color image; 1st row, 2nd to 4th cols.: grayscale images of color fractions obtained by linear unmixing showing the segmentation of cyan (m1), magenta (m2), and yellow (m3) (CMY) colors; 2nd row, 2nd to 4th cols.: fuzzy c-means grayscale images depicting membership distribution in regions of CMY color gradients; 3rd row, 2nd to 4th cols.: c-means binary images depicting uniform segmented regions labeled from CMY centroids.

Table 1 Fraction values for unmixing pixels of the test RGB color image

Saturated color   Pixel values (x1, x2, x3)   From W (c1, c2, c3)   From M (c1, c2, c3)
Black             (0, 0, 0)                   (0, 0, 0)             (0, 0, 0)
Red               (255, 0, 0)                 (1, 0, 0)             (0, 1/2, 1/2)
Green             (0, 255, 0)                 (0, 1, 0)             (1/2, 0, 1/2)
Blue              (0, 0, 255)                 (0, 0, 1)             (1/2, 1/2, 0)
Cyan              (0, 255, 255)               (0, 1/2, 1/2)         (1, 0, 0)
Magenta           (255, 0, 255)               (1/2, 0, 1/2)         (0, 1, 0)
Yellow            (255, 255, 0)               (1/2, 1/2, 0)         (0, 0, 1)
White             (255, 255, 255)             (1/3, 1/3, 1/3)       (1/3, 1/3, 1/3)

Using the mapping established in Table 1, the color fraction solution vector c is quickly determined for each one of the 65,536 pixels forming the image, using for S first the W matrix, which unmixes the primary colors, and then the M matrix, which unmixes the secondary colors. To the right of the test RGB color image in Fig. 1, the color fraction maps displayed as grayscale images are associated to the saturated colors derived from the column vectors of the scaled LAAMs, except black, considered the image background, and white, which results from the additive mixture of the three primary colors. Each color fraction segmented image sj is visible after a linear scaling from the interval [0, 1] to the grayscale dynamic range [0, 255].

Example 2 (Gradient color image) In Fig. 2, the left column shows a synthetic RGB color image composed of a gradient of primary and secondary colors, of size 256 × 256 pixels with 2,400 different colors. Thus, X = {x^1, . . . , x^2400} (again, from a total of 65,536 color pixels). It turns out that the scaled LAAM matrices and the minimum and maximum vector bounds are almost the same as those computed in the previous example, (7), except that the numeric value 255 is replaced by 254. Although the given image is rather simple, an algebraic analysis would be impractical for finding a color fractions formula applicable for unmixing every different color present in the image. However, fast pixel linear decomposition can be realized, e.g., by generalized matrix inversion (LLS), enforcing full additivity and adequate thresholding of numerical values.

From (6), any cq = 1 − cp − cr, where q = 1, 2, 3 and q ≠ p < r ≠ q, can be selected to reduce the size of matrix S and vector c. Consequently, computations are simplified by solving for each color pixel the linear system given by


Fig. 3 1st column: sample RGB color images; 2nd col.: scatter plot of a subset of X showing 256 different colors including the most saturated colors determined from W and M; 3rd and 4th cols.: tetrahedra determined from proper subsets of W ∪ M ∪ {v, u}.

xq = Sq cq, where cq = (cp, cr)^t, Sq = Wq or Sq = Mq, and

Sq = ( s1p − s1q   s1r − s1q )
     ( s2p − s2q   s2r − s2q )
     ( s3p − s3q   s3r − s3q ),

xq = ( x1 − s1q )
     ( x2 − s2q )
     ( x3 − s3q ).   (11)

In this example we let q = 1, and (11) is solved only for S1 = M1. Hence, c1 = 1 − c2 − c3 and the reduced vector is c1 = (c2, c3)^t. Also, each i-th row of S1 and each entry of the transformed input color vector x1, for i = 1, 2, 3, are given by (mi2 − mi1, mi3 − mi1) and xi − mi1, respectively. Thresholds applied to fraction values for generating segmented images were computed as

uj = (τj / 256) ∨_{ξ=1}^{k} c_j^ξ,   (12)

where k = 2,400 and the user-defined grayscale threshold was set to τj = 85 for all j. The first row in Fig. 2 shows the segmentation produced using M (secondary colors), where the brighter gray tones correspond to high fractions of saturated colors. Hence, color gradients are preserved as grayscale gradients. Additionally, original color regions composed of some proportion of the saturated colors m1, m2, and m3 appear as middle or dark gray tones. The second row displays the results obtained by applying the fuzzy c-means technique with c = 7; the threshold values uj used to cut fuzzy memberships were calculated with (12), setting τj = 64 for all j = 1, . . . , 7. Observe that the brighter gray tones are associated with pixels near the fuzzy centroids (high membership values), whereas darker gray tones correspond to pixels far from the fuzzy centroids (low membership values); note that original color gradients are not preserved. The third row depicts as black and white binary images the clusters found using the c-means algorithm with c = 7 and initial centroids given by the set W ∪ M ∪ {v}. In this last case thresholds are not needed since the c-means algorithm is a labeling procedure that assigns to all similar colors belonging to a cluster the color value of its centroid. Consequently, a simple labeling procedure is implemented to separate regions of different color.
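Equation (12) amounts to scaling the largest fraction (or membership) value in each map by τj/256; a minimal sketch of that computation (hypothetical helper, not the authors' code) follows.

import numpy as np

def fraction_thresholds(C, tau=85):
    """Thresholds of eq. (12): u_j = (tau_j / 256) * max_xi c_j^xi.

    C   : (k, m) array of fraction (or membership) values, one column per map.
    tau : user-defined grayscale threshold(s) tau_j, e.g. 85 in Example 2.
    Returns the vector of thresholds u_j, one per column of C.
    """
    tau = np.broadcast_to(np.asarray(tau, dtype=float), (C.shape[1],))
    return (tau / 256.0) * C.max(axis=0)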

If W1 is selected instead of M1 for the system matrix S1 in (11), similar segmentation results are obtained, except that, in this case, red, green, and blue regions would be extracted from the corresponding saturated colors w1, w2, and w3. We remark that Example 2 clearly shows the fundamental difference between the three segmentation methods compared: c-means and fuzzy c-means clustering are statistical and iterative in nature, whereas the LAAM's approach coupled with the LM model is a non-iterative geometrical procedure, as discussed in Sect. 3.1.

Fig. 4 1st column: sample RGB color images; 2nd, 3rd, and 4th cols.: quantized grayscale segmented images composed from results obtained, respectively, with c-means clustering, fuzzy c-means clustering, and scaled LAAMs + LLS linear unmixing techniques.

Table 2 Information of sample real RGB color images

Image     Pixels (pq)   Colors (|Xℓ| = kℓ)   Scaled LAAMs
Circuit   65,536        35,932               Wα, Mα
Parrot    65,536        55,347               Wβ, Mβ
Baboon    65,536        63,106               Wγ, Mγ

Example 3 (Real color images) Next we provide additional segmentation results for three realistic RGB color images of size 256 × 256 pixels displayed in the first column of Fig. 3 (see Table 2 for image information). For each of these color images, we create a set Xℓ = {x^1, . . . , x^(kℓ)} ⊂ [0, 255]^3, where ℓ = α, β, γ, and each vector x^ξ ∈ Xℓ is distinct from the others, i.e., x^ξ ≠ x^ζ whenever ξ ≠ ζ. This is achieved by eliminating pixel vectors of the same color (kℓ is given in Table 2). After application of (2)–(3) (LAAMs) and (4) (vector bounds), the scaled matrices W and M are computed with (5). The numerical entries for the scaled LAAM matrices (3rd column, Table 2) of the sample images are explicitly given below:

Wα = ( 255   80  101 )
     (  71  255  135 )
     (  46  154  255 ),

Mα = (  19  203  228 )
     ( 194   19  120 )
     ( 173  139   19 ),

Wβ = ( 255  121   35 )
     (  55  251  128 )
     (   1   23  255 ),

Mβ = (   0  200  254 )
     ( 130    0  228 )
     ( 220  127    0 ),

Wγ = ( 255  129   72 )
     (  55  255  156 )
     (   0   90  255 ),

Mγ = (   0  200  255 )
     ( 126    0  165 )
     ( 183   99    0 ).

Notice that the corresponding minimum and maximum vector bounds {vℓ, uℓ} for ℓ = α, β, γ are readily available from the main diagonals of the corresponding LAAM matrices. A 3-D scatter plot of each set X, showing only 256 different colors and including the extreme points of the set W ∪ M ∪ {v, u}, is depicted in the second column of Fig. 3 for each sample image. Two tetrahedra enclosing points of X are illustrated in the third column of the same figure. The vertices of the left tetrahedron belong to the set W ∪ {v} and those of the right tetrahedron are in W ∪ {u}; similarly, in the fourth column of Fig. 3, the left tetrahedron has its vertices in the set M ∪ {v} and the right tetrahedron is formed with the points of M ∪ {u}.

Again, for each RGB color image in Fig. 3, (6) was simplified to (11), setting q = 1 and solving it using LLS for each x ∈ Xℓ, by taking first Wℓ and then Mℓ as the S matrix for ℓ = α, β, γ. It turns out that for the sample images selected, the corresponding 3 × 3 computed scaled LAAMs are non-singular matrices (full rank) and, therefore, the solutions found by the linear unmixing scheme are unique. Since the minimum and maximum bounds {vℓ, uℓ} correspond, respectively, to a "dark" color near black and to a "bright" color near white, it is possible to replace a specific column in W or M with one of these extreme bounds in order to obtain segmentations of dark or bright regions. Thus, final satisfactory segmentation results are produced by an adequate selection of saturated colors sj from the set W ∪ M ∪ {v, u}. Figure 4 displays the segmentation produced by applying the clustering techniques of c-means, fuzzy c-means, and our proposed LAAMs plus linear unmixing based technique. Results are shown as quantized grayscale images where specific gray tones are associated with selected colors corresponding to cluster centers or extreme points. Table 3 provides the technical information relative to each segmenting algorithm; for example, "runs" is the number of times an algorithm is applied to a given image. Specifically, in the Matlab environment, "runs" is equivalent to the "replicates" parameter used for c-means clustering; exp(U) and min. imp refer, respectively, to the partition matrix exponent and the minimum amount of improvement needed for the objective function to converge in fuzzy c-means clustering. The notation, e.g., RGB → 255, 128, 192, gives the gray levels assigned to the red, green, and blue colors.

Table 3 Technical data used for RGB color image segmentation

Circuit
  c-means:        c = 8, runs = 5; distance: squared Euclidean; 5th run: 58 iter., 57 sec; RGB → 255, 128, 192
  Fuzzy c-means:  c = 7, runs = 3; exp(U) = 2, min. imp = 10^(−5); 3rd run: 108 iter., 720 sec; RG1G2B → 255, 128, 160, 200
  LAAMs + LLS:    c = 6, runs = 1; Wα, Mα, q = 1; non-iterative, 30 sec; RGB → 255, 128, 192
Parrot
  c-means:        c = 8, runs = 5; distance: city block; 5th run: 20 iter., 32 sec; RGBCY → 255, 128, 160, 192, 216
  Fuzzy c-means:  c = 6, runs = 3; exp(U) = 2, min. imp = 10^(−2); 3rd run: 134 iter., 238 sec; RG1G2Y → 255, 128, 160, 216
  LAAMs + LLS:    c = 6, runs = 1; Wβ, Mβ, q = 1; non-iterative, 30 sec; RGB → 255, 128, 192
Baboon
  c-means:        c = 7, runs = 5; distance: city block; 5th run: 39 iter., 38 sec; RGB1B2Y → 255, 128, 160, 176, 216
  Fuzzy c-means:  c = 6, runs = 3; exp(U) = 2, min. imp = 10^(−2); 3rd run: 94 iter., 270 sec; RGB1B2Y → 255, 128, 160, 176, 216
  LAAMs + LLS:    c = 6, runs = 1; Wγ, Mγ, q = 1; non-iterative, 30 sec; RGBCY → 255, 128, 160, 192, 216
Procedures: c-means: kmeans (Matlab); fuzzy c-means: fcm (Matlab); LAAMs + LLS: geninv (Mathcad)

4 Segmentation in Different Color Spaces

In this section, for brevity, we will refer to the LAAMs based approach as the WM method. To test the performance of the WM method in different color spaces, besides the standard non-normalized correlated RGB space, we selected as representative alternatives Ohta's I1I2I3 linearly decorrelated RGB color space [12, 17], the HSI non-linear and non-uniform color space [26, 28], and the perceptually uniform color space L*a*b* [14, 28]. Mapping RGB colors to the L*a*b* color space makes use of the linear NTSC illuminant D65 RGB to XYZ conversion matrix.

Fig. 5 1st row, left to right: sample RGB color image, transformed HSI color image, saturated colors obtained from W (upper horizontal array of colored rectangles) and M (lower horizontal array of colored rectangles); 2nd and 3rd rows: grayscale segmented images derived from wj, respectively mj, for j = 1, 2, 3, showing "red/green" pepper regions and bright reflected light regions.

Fig. 6 Left: scatter plot of X showing all different colors present in the HSI representation of the "peppers" RGB color image; middle and right: tetrahedra determined, respectively, from W ∪ {v, u} and M ∪ {v, u}, enclosing four different subsets of X.

Fig. 7 1st row, 1st to 5th cols.: "peppers" image in RGB, I1I2I3, HSI, L*a*b* color spaces, and NTSC grayscale version; 2nd row, 1st to 5th cols.: grayscale segmented images of "red/green" peppers and bright portions of reflected light corresponding to each color space, and the NTSC grayscale version quantized to 16 levels; 3rd row, 1st to 5th cols.: Sobel edge images corresponding to segmentation methods (1), (2), (5), (6) of Table 4 and the Sobel edge reference image obtained from the quantized NTSC grayscale version.

Example 4 Figure 5 shows in the top left the "peppers" RGB color image of size 128 × 128 pixels, its HSI transformation displayed as a false RGB color image, and the extreme color pixels determined from W (upper row of rectangles) and M (lower row of rectangles) in the HSI color space. Here, X = {x^1, . . . , x^13,844} is reduced from a total of 16,384 pixel vectors. The scatter plot of X is depicted to the left of Fig. 6, together with four tetrahedra enclosing different subsets of X, namely W ∪ {v} and W ∪ {u} shown in the middle, or M ∪ {v} and M ∪ {u} displayed to the right. The computed scaled memory matrices and vector bounds are given by

W = ( 255  100   36 )
    ( 188  255   16 )
    ( 115  103  255 ),

M = (   0   67  140 )
    ( 155    0  152 )
    ( 219  239    0 ),

v = (0, 0, 0)^t,   u = (255, 255, 255)^t.

Using the NNLS numerical method, (6) was solved for every color pixel. The 2nd and 3rd rows in Fig. 5 display the fraction maps obtained from the HSI saturated colors displayed in the top right, whose associated column vectors correspond, respectively, to W and M. As before, thresholds were again computed using (12), with k = 16,384 and tuning τj to adequate values.

For the next example, we recall the mathematical formulas of two measures used for grayscale image comparisons. Specifically, given two matrices A = (aij) and B = (bij) of size p × q pixels, the correlation coefficient ρ(A, B) between A and B, and the signal to noise ratio SNR(A, B), are computed as

ρ(A, B) = ∑_{i=1}^{p} ∑_{j=1}^{q} (aij − μA)(bij − μB) / √[ ∑_{i=1}^{p} ∑_{j=1}^{q} (aij − μA)^2 · ∑_{i=1}^{p} ∑_{j=1}^{q} (bij − μB)^2 ],   (13)

SNR(A, B) = −10 log10 [ ∑_{i=1}^{p} ∑_{j=1}^{q} (aij − bij)^2 / ∑_{i=1}^{p} ∑_{j=1}^{q} aij^2 ].   (14)

In (13), the quantities μA and μB denote the mean values of A and B, respectively.
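Both measures are straightforward to compute. The sketch below is our own illustration (the square root in the denominator of (13) follows the standard definition of the correlation coefficient) and evaluates them for two grayscale images given as NumPy arrays.

import numpy as np

def correlation_coefficient(A, B):
    """Correlation coefficient rho(A, B) of eq. (13) for two grayscale images."""
    a = A.astype(float) - A.mean()
    b = B.astype(float) - B.mean()
    return (a * b).sum() / np.sqrt((a * a).sum() * (b * b).sum())

def snr(A, B):
    """Signal-to-noise ratio SNR(A, B) of eq. (14), in decibels."""
    A, B = A.astype(float), B.astype(float)
    return -10.0 * np.log10(((A - B) ** 2).sum() / (A ** 2).sum())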

Example 5 The "peppers" RGB color image and its transformation to the I1I2I3, HSI, and L*a*b* color spaces, rendered as false color RGB images, are displayed in the first four columns of row one of Fig. 7. In the 2nd row, below each color image, composed thresholded fraction maps selected from W ∪ M depict the segmentation obtained in the corresponding color space; e.g., vectors and fraction thresholds used in the RGB color space were w1 (u1 = 0.454), w2 (u2 = 0.363), and m1 (u1 = 1.561); similarly, for the I1I2I3 color space, m3 (u3 = 0.389), w3 (u3 = 0.384), and w1 (u1 = 0.347) were chosen. The 3rd row displays Sobel gradient edge images corresponding to the segmentation produced by the WM method in the RGB and I1I2I3 color spaces, a clustering method based on the Mahalanobis distance, and a hybrid technique employing histograms and morphological watersheds. The 5th column of Fig. 7 shows, from top to bottom, the NTSC grayscale version of the original color image, a 16-level quantization produced by an optimized octree nearest color algorithm, and its corresponding Sobel edge image used as reference for quantitative comparisons (see Table 4).

Table 4 Segmentation performance for the "peppers" color image

Segmentation method                     Corr. coef. (ρ)   SNR
(1) WM in RGB                           0.707             14.179
(2) WM in I1I2I3                        0.717             14.931
(3) WM in HSI                           0.708             14.124
(4) WM in L*a*b*                        0.675             14.006
(5) Mahalanobis distance clustering     0.632             12.917
(6) Histograms + Morph. Watersheds      0.594             9.814

Fig. 8 1st column: sample RGB color images; 2nd to 5th cols.: compound segmented images obtained with the WM method, respectively, in the RGB, I1I2I3, HSI, and L*a*b* color spaces (main regions of interest are quantized).

Example 6 Figure 8 displays the segmentation results of additional color images. In each row, the source color image in RGB format is shown to the left; to the right, shown as quantized grayscale images, follows the segmentation obtained in the RGB, I1I2I3, HSI, and L*a*b* color spaces. For example, the corresponding "bear" grayscale image in the I1I2I3 color space (2nd row, 3rd column) was generated by composing the fraction maps obtained from w2 and m2 after thresholding at low values, respectively setting u2 = 0.387 and u2 = 0.326.

Based on the example images given here and the performance measure values listed in Table 4, the best segmentation results produced by applying the WM method and the semi-constrained LM model occur in the I1I2I3 space (cf. again the 2nd column of Fig. 7 and the 3rd column of Fig. 8).

5 Conclusions

This research work describes a novel pixel based segmentation method for color images in different color spaces based on the W and M lattice auto-associative memories, whose scaled column vectors define a small finite set of saturated color pixels. These extreme points may form different suitable base sets to perform semi-constrained linear unmixing to determine the color fractions of any other pixel in a given input image. Granular segmented images of all saturated pixels are directly produced by scaling the fraction data computed with the LLS or NNLS numerical methods, and coarse segmented images can be obtained by thresholding the corresponding color fraction maps. Examples using synthetic and real RGB color images are given to illustrate visually the results of segmentation. Table 3 summarizes the computational performance of the LAAMs+LLS, the c-means, and the fuzzy c-means techniques, from which the main advantage of the proposed technique is the reduction of processing times due to its non-iterative nature. Similarly, Table 4 gives the computational performance of the LAAMs+NNLS technique in four different color spaces by quantifying the difference between Sobel edge images of segmented grayscale images using the correlation coefficient and the signal to noise ratio. Specifically, color image segmentation carried out in the I1I2I3 color space outperformed the results obtained in the RGB space when using a clustering technique based on the Mahalanobis distance between pixels, and a hybrid technique based on histograms and morphological watersheds. We point out that the lattice algebra based technique presented here has been applied so far to still images, and further developments are needed for its application to real-time color image segmentation.

Acknowledgements Gonzalo Urcid-S. is grateful to the National System of Researchers (SNI-CONACYT) in Mexico City for partial financial support through grant #22036; Juan Carlos Valdiviezo-N. thanks the National Council of Science and Technology (CONACYT) in Mexico City for doctoral scholarship #175027. The authors also thank the anonymous reviewers for their valuable suggestions.

References

1. Ballard, D.H., Brown, C.M.: Computer Vision, pp. 149–150. Prentice Hall, Englewood Cliffs (1982)
2. Haralick, R.M., Shapiro, L.G.: Glossary of computer vision terms. In: Dougherty, E.R. (ed.) Digital Image Processing Methods, p. 439. Dekker, New York (1994)
3. Jain, R., Kasturi, R., Schunck, B.G.: Machine Vision, pp. 73–76. McGraw-Hill, New York (1995)
4. Awcock, G.J., Thomas, R.: Applied Image Processing, pp. 126–129. McGraw-Hill, New York (1996)
5. Pal, N.R., Pal, S.K.: A review on image segmentation techniques. Pattern Recognit. 26(9), 1277–1294 (1993)
6. Zhu, S.C., Yuille, A.: Region competition: unifying snakes, region growing, and Bayes/MDL for multiband image segmentation. IEEE Trans. Pattern Anal. Mach. Intell. 18(9), 884–900 (1996)
7. Shi, J., Malik, J.: Normalized cuts and image segmentation. IEEE Trans. Pattern Anal. Mach. Intell. 22(8), 888–905 (2000)
8. Chan, T.F., Vese, L.A.: Active contours without edges. IEEE Trans. Image Process. 10(2), 266–277 (2001)
9. Skarbek, W., Koschan, A.: Colour image segmentation: a survey, pp. 1–81. Technical Report 94-32, Technical University of Berlin (1994)
10. Plataniotis, K.N., Venetsanopoulos, A.N.: Color Image Processing and Applications, pp. 237–273. Springer, Berlin (2000)
11. Lucchese, L., Mitra, S.K.: Color image segmentation: a state-of-the-art survey. Proc. Indian Natl. Sci. Acad. 67(2), 207–221 (2001)
12. Cheng, H.D., Jain, X.H., Sun, Y., Wang, J.: Color image segmentation: advances and prospects. Pattern Recognit. 34(12), 2259–2281 (2001)
13. Celenk, M., Uijt de Haag, M.: Optimal thresholding for color images. In: SPIE Proc., Nonlinear Image Processing IX, San Jose, CA, vol. 3304, pp. 250–259 (1998)
14. Shafarenko, L., Petrou, H., Kittler, J.: Histogram-based segmentation in a perceptually uniform color space. IEEE Trans. Image Process. 7(9), 1354–1358 (1998)
15. Meyer, F.: Color image segmentation. In: IEEE Proc., 4th Inter. Conf. on Image Processing and Its Applications, pp. 303–306 (1992)
16. Crespo, J., Schafer, R.W.: The flat zone approach and color images. In: Serra, J., Soille, P. (eds.) Mathematical Morphology and Its Applications to Image Processing, pp. 85–92. Kluwer Academic, Dordrecht (1994)
17. Liu, J., Yang, Y.-H.: Multiresolution color image segmentation. IEEE Trans. Pattern Anal. Mach. Intell. 16(7), 689–700 (1994)
18. Park, S.H., Yun, I.D., Lee, S.U.: Color image segmentation based on 3-D clustering: morphological approach. Pattern Recognit. 31(8), 1061–1076 (1998)
19. Géraud, T., Strub, P.-Y., Darbon, J.: Color image segmentation based on automatic morphological clustering. In: IEEE Proc., Inter. Conf. on Image Processing, Thessaloniki, Greece, vol. 3, pp. 70–73 (2001)
20. Healey, G.E.: Using physical color models in 3-D machine vision. In: SPIE Proc., Perceiving, Measuring and Using Color, San Diego, CA, vol. 1250, pp. 264–275 (1990)
21. Klinker, G.J., Schafer, S.A., Kanade, T.: A physical approach to color image understanding. Int. J. Comput. Vis. 4(1), 7–38 (1990)
22. Sowmya, B., Sheelanari, B.: Color image segmentation using soft computing techniques. Int. J. Soft Comput. Appl. 4, 69–80 (2009)
23. Essaqote, H., Zahid, N., Haddaoui, I., Ettouhami, A.: Color image segmentation based on new clustering algorithm and fuzzy eigenspace. Res. J. Appl. Sci. 2(8), 853–858 (2007)
24. Palus, H., Kotyczka, T.: Evaluation of colour image segmentation results. In: Colour Image Processing Workshop, Erlangen, Germany (2001)
25. Gonzalez, R.C., Woods, R.E.: Digital Image Processing, 3rd edn., pp. 443–446. Pearson Prentice-Hall, Upper Saddle River (2008)
26. Zhang, C., Wang, P.: A new method for color image segmentation based on intensity and hue clustering. In: IEEE Proc., 15th Inter. Conf. on Pattern Recognition, vol. 3, pp. 613–616 (2000)
27. Palus, H.: Color image segmentation: selected techniques. In: Lukac, R., Plataniotis, K.N. (eds.) Color Image Processing: Methods and Applications, pp. 103–128. CRC Press, Boca Raton (2006)
28. Koschan, A., Abidi, M.: Digital Color Image Processing, pp. 149–174. Wiley, Hoboken (2008)
29. Maragos, P.: Lattice image processing: a unification of morphological and fuzzy algebraic systems. J. Math. Imaging Vis. 22, 333–353 (2005)
30. Kaburlasos, V.G., Ritter, G.X. (eds.): Computational Intelligence Based on Lattice Theory, vol. 67. Springer, Heidelberg (2007)
31. Ham, F.M., Kostanic, I.: Principles of Neurocomputing for Science and Engineering. McGraw-Hill, New York (1998)
32. Lawson, C.L., Hanson, R.J.: Solving Least Squares Problems, Chap. 23. Prentice-Hall, Englewood Cliffs (1974)
33. Urcid, G., Valdiviezo-N., J.C.: Color image segmentation based on lattice auto-associative memories. In: IASTED Proc., 13th Inter. Conf. on Artificial Intelligence and Soft Computing, Palma de Mallorca, Spain, pp. 166–173 (2009)
34. Urcid, G., Valdiviezo-N., J.C., Ritter, G.X.: Lattice associative memories for segmenting color images in different color spaces. In: Lecture Notes in Artificial Intelligence, vol. 6077 (Part II), pp. 359–366. Springer, Berlin (2010)
35. Cuninghame-Green, R.: Minimax Algebra. Lecture Notes in Economics and Mathematical Systems, vol. 166. Springer, New York (1979)
36. Cuninghame-Green, R.: Minimax algebra and applications. In: Hawkes, P. (ed.) Advances in Imaging and Electron Physics, vol. 90, pp. 1–121. Academic Press, New York (1995)
37. Ritter, G.X., Sussner, P., Diaz de Leon, J.L.: Morphological associative memories. IEEE Trans. Neural Netw. 9(2), 281–293 (1998)
38. Ritter, G.X., Urcid, G., Iancu, L.: Reconstruction of patterns from noisy inputs using morphological associative memories. J. Math. Imaging Vis. 19(2), 95–111 (2003)
39. Urcid, G., Ritter, G.X.: Kernel computation in morphological associative memories for grayscale image recollection. In: IASTED Proc., 5th Int. Conf. on Signal and Image Processing, Honolulu, HI, pp. 450–455 (2003)
40. Ritter, G.X., Gader, P.: Fixed points of lattice transforms and lattice associative memories. In: Hawkes, P. (ed.) Advances in Imaging and Electron Physics, vol. 144, pp. 165–242. Elsevier, San Diego (2006)
41. Urcid, G., Ritter, G.X.: Noise masking for pattern recall using a single lattice matrix associative memory. In: Kaburlasos, V.G., Ritter, G.X. (eds.) Computational Intelligence Based on Lattice Theory, vol. 67, pp. 79–98. Springer, Heidelberg (2007)
42. Valle, M.E.: A class of sparsely connected autoassociative morphological memories for large color images. IEEE Trans. Neural Netw. 20(6), 1045–1050 (2009)
43. Keshava, N.: A survey of spectral unmixing algorithms. Linc. Lab. J. 14(1), 55–78 (2003)
44. Graña, M., Sussner, P., Ritter, G.X.: Associative morphological memories for endmember determination in spectral unmixing. In: IEEE Proc., Inter. Conf. on Fuzzy Systems, pp. 1285–1290 (2003)
45. Graña, M., Jiménez, J.L., Hernández, C.: Lattice independence, autoassociative morphological memories and unsupervised segmentation of hyperspectral images. In: Proc. 10th Joint Conf. on Information Sciences, pp. 1624–1631 (2007)
46. Valdiviezo, J.C., Urcid, G.: Hyperspectral endmember detection based on strong lattice independence. In: SPIE Proc., Applications of Digital Image Processing XXX, San Diego, CA, vol. 6696, pp. 1–12 (2007)
47. Ritter, G.X., Urcid, G., Schmalz, M.S.: Autonomous single-pass endmember approximation using lattice auto-associative memories. Neurocomputing 72(10–12), 2101–2110 (2009)
48. Graña, M., Villaverde, I., Maldonado, J.O., Hernández, C.: Two lattice computing approaches for the unsupervised segmentation of hyperspectral images. Neurocomputing 72(10–12), 2111–2120 (2009)
49. Ritter, G.X., Urcid, G.: Lattice algebra approach to endmember determination in hyperspectral imagery. In: Hawkes, P. (ed.) Advances in Imaging and Electron Physics, vol. 160, pp. 113–169. Elsevier, Burlington (2006)
50. MacQueen, J.: Some methods for classification and analysis of multivariate observations. In: Proc. 5th Berkeley Symposium on Mathematics, Statistics, and Probabilities, vol. I, pp. 281–297. University of California, Berkeley (1967)
51. Duda, R.O., Hart, P.E., Stork, D.G.: Pattern Classification, 2nd edn. Wiley, New York (2000)
52. Elomaa, T., Koivistoinen, H.: On autonomous K-means clustering. In: Proc. 15th Int. Symposium on Methodologies for Intelligent Systems, pp. 228–236 (2005)
53. Bezdek, J.: Pattern Recognition with Fuzzy Objective Function Algorithms. Plenum, New York (1982)
54. Lim, Y.W., Lee, S.U.: On the color image segmentation algorithm based on the thresholding and fuzzy c-means techniques. Pattern Recognit. 23(9), 935–952 (1990)

Gonzalo Urcid received his B.E. (1982) and M.Sc. (1985), both from the University of the Americas in Puebla, Mexico, and his Ph.D. (1999) in optics from the National Institute of Astrophysics, Optics, and Electronics (INAOE) in Tonantzintla, Mexico. Since 2001 he has held the appointment of National Researcher from the Mexican National Council of Science and Technology (SNI-CONACYT), and he is currently an Associate Professor in the Optics Department at INAOE. His present research interests include applied mathematics, digital and optical image processing, artificial neural networks, and pattern recognition.

Juan-Carlos Valdiviezo-N. is a Ph.D. student in the Optics Department at INAOE, Mexico. He received a B.E. (2005) in electronics engineering from the Tuxtla Institute of Technology in Chiapas, Mexico, and an M.Sc. (2007) in optics from the National Institute of Astrophysics, Optics, and Electronics (INAOE) in Tonantzintla, Mexico. He has been the recipient of a Mexican National Council of Science and Technology (CONACYT) fellowship since 2005, and is currently a member of the Mexican SPIE and OSA student chapters. His research interests include digital image processing, spectral analysis, and artificial neural networks.

Gerhard X. Ritter received the B.A. (1966) and Ph.D. (1971) degrees from the University of Wisconsin, Madison. He is currently Professor of Computer Science in the Computer and Information Science and Engineering Department, the Director of the Center for Computer Vision and Visualization, and Professor of Mathematics at the University of Florida. Dr. Ritter is the Chair of the Society of Industrial and Applied Mathematics (SIAM) Activity Group in Imaging Science and of the American Association of Engineering Societies (AAES) R & D Task Force. He is the Editor-in-Chief of the Journal of Mathematical Imaging and Vision, and a member of the Editorial Boards of both the Journal of Electronic Imaging and the Journal of Pattern Analysis and Applications. He has been a Fellow of SPIE since 1995, and he was the recipient of the 1998 General Ronald W. Yates Award for Excellence in Technology Transfer, Air Force Research Laboratory, and the 1989 International Federation for Information Processing (IFIP) Silver Core Award. He is the author of two books and more than 100 refereed publications in computer vision, image algebra, mathematics, and neural networks. His current research interests include mathematical foundations of digital image processing and computer vision, artificial neural networks, and pattern recognition.
