Self-Organizing Dynamic Graphs
EZEQUIEL LÓPEZ-RUBIO*, JOSÉ MUÑOZ-PÉREZ and JOSÉ ANTONIO GÓMEZ-RUIZ, Department of Computer Science and Artificial Intelligence, Universidad de Málaga, Campus de Teatinos, s/n. 29071-Málaga, SPAIN. {ezeqlr, munozp, janto}@lcc.uma.es Phone: (+34) 95 213 71 55 Fax: (+34) 95 213 13 97
(Received 9 April 2002)
Abstract. We propose a new self-organizing neural model that considers a dynamic topology among neurons. This leads to greater plasticity than the self-organizing feature map (SOFM). Theorems are presented and proved that ensure the stability of the network and its ability to represent the input distribution. Finally, simulation results are shown to demonstrate the performance of the model, with an application to colour image compression.
Key words. computational maps, image compression, neural networks, self organization, self-organizing feature maps (SOFM)
1. Introduction
Kohonen's self-organizing neural network [2, 3] is a realistic, although very
simplified, model of the human brain. The purpose of the self-organizing feature
map (SOFM) is to capture the topology and probability distribution of input data
and it has been used by several authors to perform invariant pattern recognition,
such as Corridoni [1], Pham [4], Subba Reddy [5] and Wang [7].
This network is based on a rigid topology that connects the neurons. This is not a
desirable property in a self-organizing system, as von der Malsburg states in [6].
Here we propose an alternative to this network that shows a greater plasticity, while
retaining the feature detection performance of the SOFM.
Section 2 reviews the SOFM. Our model is proposed in Section 3, and its proper-
ties are stated and proved in Section 4. Experimental results are shown in Section 5.
Finally, conclusions are presented in Section 6.
2. The Self-Organizing Feature Map
The neurons of the SOFM are organized in a $D_m$-dimensional lattice, where typically $D_m = 1$ or $D_m = 2$. At every time instant $t$, an input sample $x(t)$ is presented to the network from an input distribution. Input samples belong to an input space of dimension $D_i$. The weight vector $w_i$ of a neuron $i$ represents a point in the input
*Corresponding author.
Neural Processing Letters 16: 93–109, 2002. © 2002 Kluwer Academic Publishers. Printed in the Netherlands.
space. The unit whose weight vector is closest to the input $x(t)$ is called the winning neuron:

$$i = \arg\min_j \| x(t) - w_j(t) \| \qquad (1)$$
The weight vectors of the winning neuron $i$ and its neighbours in the lattice are modified to reflect the features of the input distribution. The learning rule is the following:

$$w_j(t+1) = w_j(t) + \eta(t)\, p_{j,i}(t)\, [x(t) - w_j(t)] \qquad (2)$$

where

$$p_{j,i}(t) = \exp\!\left( \frac{-d_{j,i}^2}{2\sigma(t)^2} \right) \qquad (3)$$

$d_{j,i}$ is the distance between winning neuron $i$ and neuron $j$ in the lattice, and $p_{j,i}$ is a unimodal function of the lateral distance $d_{j,i}$, called the neighbourhood function, with $\sigma(t) \to 0$ as $t \to \infty$. The value $\sigma(t)$ controls the neighbourhood size. The degree of neighbourhood between neurons $i$ and $j$ is reflected by $p_{j,i}$.
The learning process is divided into two phases: the ordering phase and the conver-
gence phase. It is during the ordering phase when the topological ordering of the
weight vectors takes place. The learning rate $\eta(t)$ and the neighbourhood size $\sigma(t)$ have large values at the beginning of this phase, and then they are decreased with either linear or exponential decay:

$$\eta(t) = \eta_0 \left( 1 - \frac{t}{T} \right) \qquad (4)$$

$$\eta(t) = \eta_0 \exp(-t/\tau_1) \qquad (5)$$

$$\sigma(t) = \sigma_0 \left( 1 - \frac{t}{T} \right) \qquad (6)$$

$$\sigma(t) = \sigma_0 \exp(-t/\tau_2) \qquad (7)$$

where $T$, $\tau_1$ and $\tau_2$ are time constants. The convergence phase is required principally for the fine-tuning of the computational map. Both the learning rate and the neighbourhood size are maintained at a constant, low value during this phase.
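The training procedure above can be sketched in a few lines of NumPy. This is a minimal illustration, not the authors' code: the function name, the initialization from data samples, the use of linear decay for both rates and the small floor kept on $\sigma(t)$ are our own choices.

```python
import numpy as np

def train_sofm(X, grid_shape=(7, 7), T=5000, eta0=0.9, sigma0=3.0, seed=0):
    """Minimal SOFM training loop following Eqs. (1)-(4) and (6).

    X: (n_samples, D_i) array of input vectors.
    Returns the (N, D_i) array of final weight vectors.
    """
    rng = np.random.default_rng(seed)
    rows, cols = grid_shape
    # Lattice coordinates of each unit, used for the lateral distance d_{j,i}.
    coords = np.array([(r, c) for r in range(rows) for c in range(cols)], float)
    # Initialize the weights from randomly chosen input samples.
    W = X[rng.integers(0, len(X), rows * cols)].astype(float)
    for t in range(T):
        eta = eta0 * (1 - t / T)                        # Eq. (4), linear decay
        sigma = max(sigma0 * (1 - t / T), 0.1)          # Eq. (6), kept positive
        x = X[rng.integers(0, len(X))]                  # input sample x(t)
        i = np.argmin(np.linalg.norm(x - W, axis=1))    # winner, Eq. (1)
        d2 = np.sum((coords - coords[i]) ** 2, axis=1)  # squared d_{j,i}
        p = np.exp(-d2 / (2 * sigma ** 2))              # Eq. (3)
        W += eta * p[:, None] * (x - W)                 # Eq. (2)
    return W
```

Since $\eta(t)\, p_{j,i}(t) \le 1$, each component update is a weighted average of the old weight and the input, so the weights stay within the bounding box of the data.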
3. The Self-Organizing Dynamic Graph
Our proposal is the following. The weight vectors wi are points in the input space,
like in the SOFM algorithm. Nevertheless, the units are no longer joined by a static
topology. Every unit $i$ has an associated nonnegative adjacency vector $z_i$ that reflects the proximity between neuron $i$ and all the others. This means that $z_{ij}$ is the neighbourhood between neurons $i$ and $j$. We have $z_{ii} = 0 \;\; \forall i$.
The winning neuron lookup is performed as in the SOFM:
$$i = \arg\min_j \| x(t) - w_j(t) \| \qquad (8)$$
The weight vector of the winning neuron $i$ is modified to come closer to input sample $x(t)$:

$$w_i(t+1) = w_i(t) + \alpha(t)\, [x(t) - w_i(t)] \qquad (9)$$

where $\alpha(t)$ is called the winning neuron learning rate, which controls how much the weight vector of the winning neuron is changed. The condition

$$0 \leq \alpha(t) \leq 1 \;\; \forall t \qquad (10)$$

must be satisfied so that the modification moves the weight vector between the old
weight vector and the input sample. All the other neurons are modified according
to their adjacency to winning neuron i:
$$w_j(t+1) = w_j(t) + \beta(t)\, z_{ji}\, [x(t) - w_j(t)] \qquad (11)$$

where $\beta(t)$ is called the non-winning neurons learning rate. The following condition must be satisfied in order to ensure that the new weight vector is between the old weight vector and the input sample:

$$0 \leq \beta(t) \leq 1 \;\; \forall t \qquad (12)$$

Von der Malsburg [6] states that the synapses of a neuron in a self-organizing system must compete, so a selection of the most vigorously growing synapses at the expense of the others should be performed. Note that this principle of self-organization is not considered in the SOFM. Here we introduce this principle in the network architecture by changing the strength of the synapses and by imposing the condition

$$\sum_{k=1}^{N} z_{jk}(t) = 1 \;\; \forall t \;\; \forall j = 1, \ldots, N \qquad (13)$$
where N is the number of units.
The learning rule for the adjacency vectors of non-winning neurons is

$$z_j(t+1) = \frac{1}{\sum_{k=1}^{N} y_{jk}(t)}\, y_j(t) \;\; \forall j \neq i \qquad (14)$$

where

$$y_j(t) = z_j(t) + \frac{\| x(t) - w_i(t) \|}{\| x(t) - w_j(t) \|}\, \gamma(t)\, u_i \qquad (15)$$

$u_i$ is a unit vector with a 1 in the $i$th component, $\gamma(t)$ is the adjacency vectors learning rate and $\| \cdot \|$ denotes the Euclidean norm. The winning neuron does not change its adjacency vector, i.e.,

$$z_i(t+1) = z_i(t) \qquad (16)$$
The learning rule of the adjacency vectors increases the values $z_{ji}$ where unit $j$ has a weight vector close to the weight vector of winning neuron $i$. This reinforces the cooperation among neighbouring units.
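A single training step of the model, covering Equations (8) to (16), can be sketched in NumPy as follows. This is an illustrative implementation under our own assumptions (function name, in-place update style, and the assumption that the input never coincides exactly with a non-winning weight vector):

```python
import numpy as np

def sodg_step(x, W, Z, alpha, beta, gamma):
    """One SODG training step, Equations (8)-(16) (illustrative sketch).

    W: (N, D_i) array of weight vectors; Z: (N, N) adjacency matrix whose
    rows are the adjacency vectors z_j (row-stochastic, Z[j, j] = 0).
    W and Z are updated in place; the winner index i is returned.
    """
    dists = np.linalg.norm(x - W, axis=1)
    i = int(np.argmin(dists))                     # winning neuron, Eq. (8)
    W[i] += alpha * (x - W[i])                    # Eq. (9)
    mask = np.arange(len(W)) != i
    # Non-winners move according to their adjacency to the winner, Eq. (11).
    W[mask] += beta * Z[mask, i][:, None] * (x - W[mask])
    # Adjacency update, Eqs. (14)-(15): reinforce the winner's component in
    # every other unit's adjacency vector, then renormalize to unit 1-norm.
    for j in np.flatnonzero(mask):
        y = Z[j].copy()
        y[i] += gamma * dists[i] / dists[j]       # Eq. (15)
        Z[j] = y / y.sum()                        # Eq. (14)
    return i                                      # row Z[i] unchanged, Eq. (16)
```

Since $Z[j, j] = 0$ and only component $i$ of each non-winning row is reinforced before renormalization, the zero diagonal and the 1-norm of Proposition 1 are both preserved.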
4. Properties
The model that we have proposed has some desirable properties, which we prove here. First of all, Proposition 1 shows that the adjacency vectors are normalized. This means that some synapses grow at the expense of the others, as stated before.

PROPOSITION 1. Condition (13) holds $\forall t > 0$. That is, the adjacency vectors are always 1-normalized.

Proof. Equation (14) shows that $y_j(t)$ is divided by its 1-norm to obtain $z_j(t+1)$. So $z_j(t+1)$ must be a 1-normalized vector. □
Our model is stable, i.e., if the input is bounded, then so is the output. The output
of the network is defined as the set of weight vectors. Theorem 1 proves the stability
condition.
THEOREM 1 (network stability). If the input vectors are bounded, then the weight
vectors are bounded.
Proof. If the input vectors are bounded, then we can find two sets of constants
{Ak} and {Bk} such that
$$A_k \leq x_k(t) \leq B_k \;\; \forall k = 1, \ldots, D_i \qquad (17)$$

$$A_k \leq w_{jk}(0) \leq B_k \;\; \forall j = 1, \ldots, N \;\; \forall k = 1, \ldots, D_i \qquad (18)$$

We are going to prove that the weight vectors are bounded, i.e.,

$$A_k \leq w_{jk}(t) \leq B_k \;\; \forall t \;\; \forall j = 1, \ldots, N \;\; \forall k = 1, \ldots, D_i \qquad (19)$$

by induction on $t$.

– If $t = 0$, the claim holds by (18).
– Induction hypothesis. We suppose that

$$A_k \leq w_{jk}(t) \leq B_k \;\; \forall t = 1, \ldots, t_0 \;\; \forall j = 1, \ldots, N \;\; \forall k = 1, \ldots, D_i \qquad (20)$$

– If $t = t_0 + 1$. For every component $k$ we have

$$w_{jk}(t_0 + 1) = w_{jk}(t_0) + \zeta_j(t_0)\, (x_k(t_0) - w_{jk}(t_0)) \qquad (21)$$

where

$$\zeta_j(t_0) = \begin{cases} \alpha(t_0) & \text{if } j = i \\ \beta(t_0)\, z_{ji}(t_0) & \text{otherwise} \end{cases} \qquad (22)$$

If we reorder the terms of the right-hand side of (21) we have

$$w_{jk}(t_0 + 1) = (1 - \zeta_j(t_0))\, w_{jk}(t_0) + \zeta_j(t_0)\, x_k(t_0) \qquad (23)$$

From (10), (12) and Proposition 1 we have that

$$0 \leq \zeta_j(t_0) \leq 1 \qquad (24)$$

We know from (23) and (24) that $w_{jk}(t_0 + 1)$ is a weighted average of $w_{jk}(t_0)$ and $x_k(t_0)$. Then $w_{jk}(t_0 + 1)$ lies between $w_{jk}(t_0)$ and $x_k(t_0)$. So by (17) and (20) we have

$$A_k \leq w_{jk}(t_0 + 1) \leq B_k \qquad (25)$$

This means that the weight vectors are bounded. □
Next we consider the convergence of the units towards the regions where the input
vectors lie (Theorem 2). This means that the weight vectors of the units may be used
to perform vector quantization. The weight vectors would be the code vectors of the
vector quantizer.
LEMMA 1. Let $C$ be a convex set. Let $A$, $B$ be points such that $A \in C$, $B \notin C$. Then every point in the line segment $AB$ is closer to $C$ than $B$, except for $B$ itself.

Proof. Let $D \in C$ be the point such that

$$\| D - B \| = \min_{X \in C} \| X - B \| \qquad (26)$$

that is, the point of $C$ which is closest to $B$. Let $r$ be the distance from $D$ to $B$ (and the distance from $B$ to $C$), i.e., $r = \| D - B \|$. Since $D$ is the point of $C$ which is closest to $B$, every point inside the hypersphere $H$ of radius $r$ and center $B$ does not belong to $C$. Note that $D$ is on the surface of $H$. Furthermore, since $A, D \in C$ and $C$ is convex, the line segment $AD$ is completely inside $C$. Then $AD$ is completely outside the interior of $H$. Furthermore, $BD$ is completely inside the interior of $H$, except for $D$. So the angle $BDA$ is $90°$ or more (Figure 1).

Since $AD \subset C$ and $BD$ is completely outside $C$ (except for $D$), every point of $AB$ is closer to $C$ than $B$ (except for $B$). □
Figure 1. The triangle BDA.
THEOREM 2 (convergence towards the input). Let C be a convex subset of the input
space. If all the input samples lie in C, i.e.,
$$x(t) \in C \;\; \forall t \qquad (27)$$
then every update of every weight vector wj which is outside of C reduces the distance
from wj to C.
Proof. Let $t$ be the time instant considered. We have two possibilities:

(a) $j$ is the winning neuron. Then, by (9) and (10), $w_j(t+1)$ lies in the line segment from $w_j(t)$ to $x(t)$.

(b) $j$ is not the winning neuron. Again, by (11), (12) and Proposition 1, $w_j(t+1)$ lies in the line segment from $w_j(t)$ to $x(t)$.

Since $x(t) \in C$, $w_j(t) \notin C$ and $C$ is convex, from Lemma 1 we have that $w_j(t+1)$ is closer to $C$ than $w_j(t)$. □
Our model is stable in a stronger sense: if we have a convex set C where all the
input samples lie, a unit does not go out of C.
THEOREM 3 (improved network stability). Let C be a convex subset of the input
space. If all the input samples lie in C, i.e.,
$$x(t) \in C \;\; \forall t \qquad (28)$$

and $w_j(t) \in C$, then $w_j(t+1) \in C$.

Proof. By a reasoning similar to that of the proof of Theorem 2, $w_j(t+1)$ lies in the line segment from $w_j(t)$ to $x(t)$. Since $x(t) \in C$, $w_j(t) \in C$ and $C$ is convex, the line segment from $w_j(t)$ to $x(t)$ is included in $C$. So $w_j(t+1)$ belongs to $C$. □
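Theorems 1 and 3 are easy to check numerically: every update has the convex-combination form of Equation (23), so weights initialized inside a convex input region never leave it. The following small simulation is our own illustration, with arbitrary per-unit rates in [0, 1]:

```python
import numpy as np

# Numerical check of Theorems 1 and 3. Inputs are drawn from the unit
# square (a convex, bounded set) and the weights start inside it. Every
# update has the convex-combination form of Eq. (23),
#   w <- (1 - zeta) * w + zeta * x,  with 0 <= zeta <= 1,
# so the weights can never leave the square.
rng = np.random.default_rng(0)
W = rng.random((50, 2))                  # 50 weight vectors inside [0,1]^2
for _ in range(5000):
    x = rng.random(2)                    # input sample x(t) in [0,1]^2
    zeta = rng.random(len(W))            # arbitrary per-unit rates in [0,1]
    W += zeta[:, None] * (x - W)         # equivalent to Eq. (23)
    assert (W >= 0.0).all() and (W <= 1.0).all()
```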
5. Experimental Results
5.1. CONVERGENCE AND ROBUSTNESS ANALYSIS
Computer simulations have been run to show the convergence and robustness of the
model. A two-dimensional input space has been considered, with 1000 input samples.
The number of iterations has been 5000, and the number of neurons has been 50. We
have used a linear decay of the learning rates, with equations similar to (4). The
learning rate $\alpha$ varied from 0.9 to 0.1, $\beta$ from 0.1 to 0.01, and $\gamma$ from 1 to 0.1.

A comparison has been considered between our network, a SOFM and a kMER [19]. The kMER network (kernel-based Maximum Entropy learning Rule) is a new self-organizing model similar to the SOFM, which looks for equiprobabilistic map formation. It is based on the maximization of an entropy function. Both the SOFM and the kMER used 49 neurons (a $7 \times 7$ square lattice). The number of iterations was also
5000. The quantization performance of the three methods is shown in Table I. It has
been computed by measuring the mean Euclidean distance from random input
samples to their nearest neurons in the final state of the network. We can see that
the mean quantization error of SOFM and kMER is worse than that of the self-orga-
nizing dynamic graph for all the shapes considered, convex or not. Thus, our
approach shows a greater ability to represent simple and complex input distribu-
tions. This is because of its improved plasticity, which allows it to adapt to the input data while retaining stability.
Furthermore, we have compared the computed topologies. The results for nine
different input distributions are shown in Figures 2 to 11. For Figure 2, the input
has been taken from the uniform distribution over the unit square $[0,1] \times [0,1]$. The uniform distribution over the left half of a ring with center (0.5, 0.5), minor
radius 0.25 and major radius 0.5 has been used for Figure 3. A circle and a pentagon
have been selected for Figures 4 and 5, respectively. The results with capital letters
‘T’ and ‘V’ are shown in Figures 6 and 7. Next, we have two shapes with a hole: a
Table I. Mean quantization error for some shapes.
Shape Convex? SOFM SODG kMER
Square Yes 0.0688 0.0588 0.1444
Half ring No 0.0445 0.0324 0.1316
Circle Yes 0.0584 0.0518 0.1227
Pentagon Yes 0.0513 0.0439 0.1373
‘T’ letter No 0.0431 0.0347 0.1593
‘V’ letter No 0.0377 0.0298 0.1413
Hollow square No 0.0554 0.0496 0.2122
Hollow circle No 0.0485 0.0434 0.1773
Chromosome pair 1 No 0.0351 0.0261 0.1720
Chromosome pair 5 No 0.0339 0.0269 0.0925
Figure 2. Unit square results: SODG (left), SOFM (right).
Figure 3. Half ring results: SODG (left), SOFM (right).
Figure 4. Circle results: SODG (left), SOFM (right).
Figure 5. Pentagon results: SODG (left), SOFM (right).
hollow square (Figure 8) and a hollow circle (Figure 9). Finally, we have two real images [18] of two human chromosome pairs (Figures 10 and 11).
In the pictures of the results with the SODG model, the initial positions of the neurons are marked with '+' and the final positions with 'o'. We have plotted the three strongest adjacencies for each neuron with lines. Please note that the adjacencies need not be symmetric, i.e., $z_{ij} \neq z_{ji}$ in general. So, if a neuron $i$ has exactly three lines on it, this means that every neuron that $i$ has selected as one of its closest neighbours has also selected $i$ as one of its closest neighbours. A large number of neurons are of this kind, which demonstrates the quality and robustness of the topology built by the system.
We have plotted the final positions of the SOFM units with solid dots, and the
fixed topology with lines. The results for kMER are shown in Figures 10 and 11.
The meaning of the plot is the same as SOFM, except that the dots are not solid.
It can be observed in Figures 2, 4 and 5 that the final positions of the units of the
SODG are inside the input region. This fact follows from Theorem 3, since the input
Figure 6. ‘T’ letter results: SODG (left), SOFM (right).
Figure 7. ‘V’ letter results: SODG (left), SOFM (right).
distributions for these figures are convex sets and the initial positions also lie inside
the input distribution. The input region for the rest of figures is not convex, so the
above theorem is not applicable, but anyway the final positions do not go out the
half ring, except for two neurons in Figure 9.
The SOFM model builds reasonable topologies for the simplest input distributions
(Figures 2, 4 and 5). Nevertheless, it fails to represent adequately the complex distri-
butions. For example, we can see in Figure 6 a twist of the network. Furthermore, we
see in Figures 8 to 11 that the SOFM model forces some neurons to ‘fill the gaps’
between parts of the input distribution (holes, unconnected regions, and so on).
So the SOFM does not yield a faithful representation of the topology of the input
distribution in these cases, while SODG does.
The kMER network fails to build a topologically ordered map (Figures 10 and
11). This network has a slow convergence, so the results are poor unless we perform
Figure 9. Hollow circle results: SODG (left), SOFM (right).
Figure 8. Hollow square results: SODG (left), SOFM (right).
millions of iterations (see [19]). Hence, it is clearly outperformed by both the SOFM and the SODG.
5.2. COLOUR IMAGE COMPRESSION APPLICATION
Image compression is important for many applications such as video conferencing,
high definition television and facsimile systems. Vector quantization (VQ) is found to
be effective for lossy image compression due to its excellent rate-distortion performance compared with other conventional techniques based on the codification of scalar quantities [8].
However, classic techniques for VQ are very limited due to the prohibitive compu-
tation time required. A number of competitive learning (CL) algorithms have been
proposed for the design of vector quantizers. Ahalt, Krishnamurthy, Chen and
Melton [9] have used algorithms based on competitive neural networks that minimise
the reconstruction errors and lead to optimal or near optimal results, demonstrating
Figure 10. Chromosome pair 1 results. Original image (upper left), SODG (upper right), SOFM (lower
left), kMER (lower right).
their effectiveness. Likewise, Yair, Zeger and Gersho [10] have used Kohonen’s
learning algorithm, proposing a soft-competition algorithm for the design of optimal
quantizers. Ueda and Nakano [11] prove that the standard competitive algorithm is
equivalent to the Linde, Buzo and Gray [12] algorithm and they propose a new com-
petitive algorithm based on the equi-distortion principle that allows the design of
optimal quantizers. Dony and Haykin [13] show the advantages of using neural net-
works for vector quantization, as they are less sensitive to codebook initialisation,
lead to a smaller distortion rate and have a fast convergence rate. Cramer [14] exam-
ines much recent work regarding compression using neural networks.
Here the SODG and competitive neural networks are used for image compression.
A comparative study between the two methods is then performed, showing that the
SODG algorithm leads to a smaller distortion rate and a better compression rate.
For image compression purposes, the original image is divided into square win-
dows of the same size. The RGB values of the pixels of each of these windows form
Figure 11. Chromosome pair 5 results. Original image (upper left), SODG (upper right), SOFM (lower
left), kMER (lower right).
an input vector for the network. The compression process consists of the selection of
a reduced set of representative windows wj (prototypes) and the replacement of every
window of the original image with the ‘‘closest’’ prototype, i.e., the weight vector of
the winning neuron for this input vector. If we use windows of $V \times V$ pixels, and the original image has $KV \times LV$ pixels, then the mean quantization error per pixel $E$ of this representation is

$$E = \frac{1}{V^2 KL} \sum_{i=1}^{KL} \min_j \| x_i - w_j \| \qquad (29)$$

where $x_i$ is the $i$th window of the original image and $w_j$ is the $j$th prototype.
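Equation (29) is straightforward to compute once the image has been cut into windows. A possible NumPy sketch (the function name and the flattened-array layout are our own assumptions):

```python
import numpy as np

def mean_quantization_error(windows, prototypes, V):
    """Mean quantization error per pixel E, Equation (29).

    windows: (K*L, 3*V*V) array of flattened RGB windows x_i.
    prototypes: (N, 3*V*V) array of prototype windows w_j.
    """
    # Euclidean distance from every window to every prototype.
    d = np.linalg.norm(windows[:, None, :] - prototypes[None, :, :], axis=2)
    # Average the distance to the nearest prototype, then divide by V^2.
    return d.min(axis=1).mean() / V ** 2
```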
For the experiments we use the image shown in Figure 12, with $348 \times 210$ pixels (214.1 Kb in BMP format) and 256 red, green and blue values (24 bits per pixel). Therefore, we have $73080/V^2$ input patterns.
The compression has been performed by two algorithms, the SODG and the stan-
dard competitive learning. The network’s initial weights have been randomly selected
from all input patterns.
5.2.1. Image compression with standard competitive learning
The standard competitive learning has been used in the following way. First, the net-
work is trained with all the KL windows of the original image as input patterns. Then
the final weight vectors are used as prototypes. Every window of the original image is
substituted by the index of its winning neuron (closest prototype). In addition, it is
necessary to store the prototype vectors in the compressed image file. The original
(uncompressed) image size $S_u$, in bits, is

$$S_u = V^2 B K L \qquad (30)$$
Figure 12. Original figure to be compressed.
where $B$ is the number of bits per pixel (uncompressed); typically $B = 24$ or $B = 32$. The compressed image size using competitive learning, $S_{CL}$, is given by

$$S_{CL} = V^2 B N + \mathrm{ceil}(\log_2 N)\, KL \qquad (31)$$

where $N$ is the number of units, and the function ceil rounds towards $+\infty$. The compression rate $R_{CL}$ is obtained as

$$R_{CL} = \frac{S_{CL}}{S_u} = \frac{N}{KL} + \frac{\mathrm{ceil}(\log_2 N)}{V^2 B} \qquad (32)$$
5.2.2. Image compression with SODG
When SODG is used to obtain prototype windows for image compression, we can
use the topology information provided by the network to improve the compression
rate. Neighbour windows in the original image are usually similar. This means that
their prototype windows are frequently very close in the input space. Note that the
SODG computes the adjacency vectors which tell us what units are close to every
unit. So, for every neuron i, we can order the other units of the network by their vici-
nity to i by using the adjacency vector zi. That is, if zij is the kth greatest component
of zi, the unit j is in the kth place in the list of the unit i.
Hence we have a list for every unit $i$ that shows which prototype windows are similar to prototype window $i$. When we use standard competitive learning to com-
press image data, the prototype windows are typically coded with their index. So, the
compressed image is a sequence of prototype indices. Now we can use the above
mentioned lists to perform a relative coding of the prototype windows with respect
to their predecessors in the sequence. This means that a prototype is no longer coded
by its index, but by its position k in the list of its predecessor prototype. If we build
the window sequence from left to right, this means that every window is coded with
respect to the window on its left. Note that if the neighbour prototype is the same as the current one, we write a zero in the sequence ($k = 0$).

A table must be included in the compressed file to decode these numbers $k$ into actual prototype indexes. In order to reduce the size of this table, we may store only the $P$ closest neighbours of every unit. Then the value $k = P + 1$ means that the current window is represented by a prototype which is not among the $P$ nearest neighbours of the left prototype; in that case we need to store the actual prototype index of this window in the compressed file.
The advantage of this strategy is that, most of the time, the numbers $k$ that form the new sequence have a low value. So, if we perform a Huffman coding of this sequence, a
compression will be achieved, since the data has some redundancy. Different codings
have been used in image compression. An early work on this issue can be found in [15].
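The relative coding scheme described above can be sketched as follows. This is an illustrative Python implementation; the names, the handling of the first window as a raw index, and the tie-breaking order in the nearest lists are our own choices:

```python
import numpy as np

def relative_code(indices, Z, P=8):
    """Relative coding of a prototype-index sequence (Section 5.2.2).

    Each index is recoded as the rank k of its prototype in the left
    neighbour's P-nearest list: k = 0 means "same prototype", k in 1..P is
    a position in the list, and k = P + 1 is an escape meaning the raw
    index must be stored as well. Returns the codes and the escaped indices.
    """
    # nearest[i] = the P units j with the largest adjacency z_{ij}.
    nearest = np.argsort(-Z, axis=1)[:, :P]
    codes, escapes = [indices[0]], []   # first window stored as a raw index
    for prev, cur in zip(indices, indices[1:]):
        if cur == prev:
            codes.append(0)                        # same prototype, k = 0
        else:
            rank = np.flatnonzero(nearest[prev] == cur)
            if rank.size:
                codes.append(int(rank[0]) + 1)     # k in 1..P
            else:
                codes.append(P + 1)                # escape: raw index needed
                escapes.append(cur)
    return codes, escapes
```

The resulting code sequence is what the Huffman tree of Equation (33) is built from; the escaped raw indices account for the $f_{P+1}$ term.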
Then a Huffman tree is built from the absolute frequencies $f_k$ of the numbers $k$ in the new sequence, $k = 0, 1, \ldots, P+1$. This tree assigns a code of $b_k$ bits to the number $k$. Then the overall compressed image size (in bits) $S_{SODG}$ is given by

$$S_{SODG} = V^2 B N + \mathrm{ceil}(\log_2 N)\, (f_{P+1} + NP) + \sum_{i=0}^{P+1} f_i b_i \qquad (33)$$

So the compression rate $R_{SODG}$ is computed as

$$R_{SODG} = \frac{S_{SODG}}{S_u} = \frac{N}{KL} + \frac{\mathrm{ceil}(\log_2 N)\, (f_{P+1} + NP) + \sum_{i=0}^{P+1} f_i b_i}{V^2 B K L} \qquad (34)$$
Note that the Huffman coding neither reduces nor increases the distortion; it only improves the compression rate.
If we compare equations (32) and (34) we can see that the difference between the
compression rates of the two methods depends on the frequencies fi. If the values of
fi are high for low i, then SODG outperforms CL. This depends on the redundancy
of the windows of the original image.
We have found that the final output of this coding is still quite redundant. So, we have used a sliding-window version of the Lempel-Ziv algorithm (see [16] and [17]) to provide further compression. Since the Lempel-Ziv algorithm is lossless, the distortion remains the same. If the final file size (in bits) is $F$, we define the final compression rate $R'_{SODG}$ as

$$R'_{SODG} = \frac{F}{S_u} \qquad (35)$$
5.2.3. Image compression results
Computational experiments have been carried out to compare the performance of
the compression system proposed with standard CL. The image of Figure 12 has
been used for this purpose. We have run $4KL$ iterations of each method for different values of $V$ and $N$. For the CL simulations we have chosen a linear decay of the learning rate $\eta$, with an initial value $\eta_0 = 1$ and a final value near zero. For the SODG simulations we have chosen a linear decay of the learning rates: $\alpha$ from 0.99 to 0.1, $\beta$ from 0.01 to 0.001, and $\gamma$ from 1 to 0.1. Additionally, we selected $P = 8$. Figures 13 and 14 show the results with $N = 256$ and two different window sizes.
Figure 13. Compressed images using CL (left) and SODG (right) with V ¼ 1 and N ¼ 256.
Table II summarizes the mean quantization errors per pixel E obtained with both
methods, for the values of V and N we have considered. We can see that both meth-
ods achieve similar results. Table III shows the inverse compression rates. We can see
that the SODG method has the best compression rate in all cases. This means that
the Huffman coding of the image sequence and the Lempel-Ziv algorithm always
outperform the fixed coding used with CL.
6. Conclusions
A new self-organizing model has been presented that considers a dynamic topology
among neurons. Theorems have been presented and proved that ensure the stability
of the network and its ability to represent the input distribution. This means that it is
suitable for vector quantizer design. Simulation results have been shown to demon-
strate the convergence and robustness of the model.
Finally, an application to image compression has been presented. The dynamic
topology information obtained by the SODG has been used to build a Huffman coding of the image data, aimed at reducing the compressed image size. The experi-
Figure 14. Compressed images using CL (left) and SODG (right) with V ¼ 3 and N ¼ 256.
Table II. Mean quantization errors per pixel, E.

Method  V = 1, N = 256  V = 1, N = 1024  V = 3, N = 256  V = 3, N = 1024
CL 3.0969 1.7648 1.5288 1.2388
SODG 4.3972 2.9617 1.7586 1.6090
Table III. Inverse compression rates, $R_{CL}^{-1}$, $R_{SODG}^{-1}$ and $R'^{-1}_{SODG}$.

Method  V = 1, N = 256  V = 1, N = 1024  V = 3, N = 256  V = 3, N = 1024
$R_{CL}^{-1}$  2.9688 to 1  2.3219 to 1  14.5848 to 1  5.8003 to 1
$R_{SODG}^{-1}$  4.5316 to 1  2.9527 to 1  15.7981 to 1  5.0536 to 1
$R'^{-1}_{SODG}$  13.0337 to 1  8.2629 to 1  38.8861 to 1  11.9516 to 1
mental results show that Huffman coding leads to very significant improvements in
compression rates with respect to standard competitive learning (CL). Nevertheless,
this enhancement does not affect the distortion of the compressed images.
References

1. Corridoni, J. M., Del Bimbo, A. and Landi, L.: 3D object classification using multi-object Kohonen networks, Pattern Recognition, 29 (1996), 919–935.
2. Kohonen, T.: The self-organizing map, Proc. IEEE, 78 (1990), 1464–1480.
3. Kohonen, T.: Self-Organizing Maps, Springer Verlag: Berlin, Germany, 2001.
4. Pham, D. T. and Bayro-Corrochano, E. J.: Self-organizing neural-network-based pattern clustering method with fuzzy outputs, Pattern Recognition, 27 (1994), 1103–1110.
5. Subba Reddy, N. V. and Nagabhushan, P.: A three-dimensional neural network model for unconstrained handwritten numeral recognition: a new approach, Pattern Recognition, 31 (1998), 511–516.
6. Von der Malsburg, C.: Network self-organization, in S. F. Zornetzer, J. L. Davis and C. Lau (eds), An Introduction to Neural and Electronic Networks, Academic Press: San Diego, CA, 1990, pp. 421–432.
7. Wang, S. S. and Lin, W. G.: A new self-organizing neural model for invariant pattern recognition, Pattern Recognition, 29 (1996), 677–687.
8. Gray, R. M.: Vector quantization, IEEE ASSP Magazine, 1 (1984), 4–29.
9. Ahalt, S. C., Krishnamurthy, A. K., Chen, P. and Melton, D. E.: Competitive learning algorithms for vector quantization, Neural Networks, 3 (1990), 277–290.
10. Yair, E., Zeger, K. and Gersho, A.: Competitive learning and soft competition for vector quantizer design, IEEE Trans. Signal Processing, 40(2) (1992), 294–308.
11. Ueda, N. and Nakano, R.: A new competitive learning approach based on an equidistortion principle for designing optimal vector quantizers, Neural Networks, 7(8) (1994), 1211–1227.
12. Linde, Y., Buzo, A. and Gray, R. M.: An algorithm for vector quantizer design, IEEE Trans. Communications, 28(1) (1980), 84–95.
13. Dony, R. D. and Haykin, S.: Neural network approaches to image compression, Proc. IEEE, 83(2) (1995), 288–303.
14. Cramer, C.: Neural networks for image and video compression: a review, European Journal of Operational Research, 108 (1998), 266–282.
15. Comstock, D. and Gibson, J.: Hamming coding of DCT compressed images over noisy channels, IEEE Trans. Communications, 32 (1984), 856–861.
16. Wyner, A. D. and Ziv, J.: The sliding-window Lempel-Ziv algorithm is asymptotically optimal, Proc. IEEE, 82 (1994), 872–877.
17. Wyner, A. D. and Wyner, A. J.: Improved redundancy of a version of the Lempel-Ziv algorithm, IEEE Trans. Information Theory, 35 (1995), 723–731.
18. Mc Donald, D.: Homo sapiens karotypes [online]. Date of access: August 2001. Available at: http://www.selu.com/bio/cyto/karyotypes/Hominidae/Hominidae.html
19. Van Hulle, M. M.: Faithful representations with topographic maps, Neural Networks, 12(6) (1999), 803–823.