
Int. J. Appl. Math. Comput. Sci., 2005, Vol. 15, No. 1, 99–114

IMAGE RECALL USING A LARGE SCALE GENERALIZED BRAIN-STATE-IN-A-BOX NEURAL NETWORK

CHEOLHWAN OH, STANISŁAW H. ŻAK

School of Electrical and Computer Engineering
Purdue University, West Lafayette, Indiana 47907–2035, USA
e-mail: [email protected], [email protected]

An image recall system using a large scale associative memory employing the generalized Brain-State-in-a-Box (gBSB) neural network model is proposed. The gBSB neural network can store binary vectors as stable equilibrium points. This property is used to store images in the gBSB memory. When a noisy image is presented as an input to the gBSB network, the gBSB net processes it to filter out the noise. The overlapping decomposition method is utilized to efficiently process images using their binary representation. Furthermore, uniform quantization is employed to reduce the size of the data representation of the images. Simulation results for monochrome gray scale and color images are presented. Also, a hybrid gBSB-McCulloch-Pitts neural model is introduced and an image recall system is built around this neural net. Simulation results for this model are presented and compared with the results for the system employing the gBSB neural model.

Keywords: associative memory, Brain-State-in-a-Box (BSB) neural network, overlapping decomposition, image recall

1. Introduction

This paper is on using a class of nonlinear dynamical systems for image recall. A specific discrete-time dynamical system, referred to as the generalized Brain-State-in-a-Box (gBSB) neural network, is used to build an image recall system. It stores original images in the neural associative memory, and when a noisy image is presented to the system, it reconstructs the corresponding original image after the classification process of the neural network and the proposed error correction process.

The Brain-State-in-a-Box (BSB) net is a simple nonlinear autoassociative neural network that was proposed by Anderson et al. (1989) as a memory model based on neurophysiological considerations. The BSB model gets its name from the fact that the network trajectory is constrained to lie in the hypercube H^n = [−1, 1]^n. The BSB model was used primarily to model effects and mechanisms seen in psychology and cognitive science (Anderson, 1995). A possible function of the BSB net is to recognize a pattern from its given noisy version. The BSB net can also be used as a pattern recognizer that employs a smooth nearness measure and generates smooth decision boundaries (Schultz, 1993). Three different generalizations of the BSB model were proposed by different research groups: Hui and Żak (1992), Golden (1993), and Anderson (1995). In particular, the network considered by Hui and Żak (1992), referred to as the generalized Brain-State-in-a-Box (gBSB), has the property that the network trajectory constrained to a hyperface of H^n is described by a lower-order gBSB type model. This property helps significantly in analyzing the dynamical behavior of the gBSB net. Another tool that makes the gBSB model suitable for constructing associative memory is the stability criterion for the vertices of H^n (Hui and Żak, 1992); see also (Hassoun, 1995) for a further discussion of the condition. The gBSB neural network is suitable for associative memory because it can store patterns as its stable equilibrium points. Lillo et al. (1994) proposed a systematic method to synthesize associative memories using the gBSB neural network. This method was used to design large scale associative memories by Oh and Żak (2002; 2003). The hybrid gBSB-McCulloch-Pitts neural network uses the same activation function as the McCulloch-Pitts neural network, but the argument of the activation function is the same as in the gBSB net. Both the gBSB neural network and the hybrid gBSB-McCulloch-Pitts net are suitable for associative memory because they can store patterns as their stable equilibrium points.

Designing associative memory using the decomposition concept has been investigated by numerous researchers (Akar and Sezer, 2001; Ikeda et al., 2001; Oh and Żak, 2002; 2003; Zetzsche, 1990). Ikeda et al. (2001) used a disjoint decomposition to design large scale associative memories. They employed the two-level network concept to alleviate the problem of spurious states caused by disjoint decomposition. Akar and Sezer (2001) used overlapping decomposition. The concept of overlapping decomposition was used earlier in the context of decentralized control of large scale dynamical systems (Ikeda and Šiljak, 1980). Akar and Sezer applied the concept of overlapping decomposition to the associative memory design, where they focused on the decomposition of the weight matrix. In their paper, the weight matrix of each decomposed sub-network must satisfy certain conditions. That is, a certain portion of each weight matrix must have all zero elements, and another portion of the weight matrix of each sub-network must coincide with a certain portion of the weight matrix of a neighboring sub-network. This means that a sub-network cannot be designed separately from its neighboring sub-networks. Also, this may complicate the design process of associative memories if the number of sub-networks is large. In Oh and Żak's papers (2002; 2003), overlapping decomposition was carried out on the patterns, rather than on the weight matrix of the associative memory. This approach eliminates the restrictions on the weight matrices of sub-networks, and therefore allows designing each sub-network of the associative memory independently.

In this paper, we propose an image recall system using the large scale associative memory proposed by Oh and Żak (2002; 2003) to efficiently process the images. The original images are stored in a large scale neural associative memory. When a noisy image is given as an input to the proposed image recall system, it is decomposed into sub-images and processed by neural networks and the error correction processor to reconstruct the corresponding original image. We assume that the test images are simply noisy versions of the training images and that they are perfectly aligned with the training images. We do not consider in this article the cases when the test images are rotated, scaled, or transformed so as to break the alignment with the training images.

The paper is organized as follows: In Section 2, we review the structure and stability conditions of the gBSB model and the synthesis procedure of associative memories using the gBSB neural model. Also, we introduce the hybrid gBSB-McCulloch-Pitts neural model and briefly discuss its stability conditions. In Section 3, we discuss the overlapping decomposition method that is employed to construct a large scale neural net to process large scale patterns. Also, we introduce the error correction algorithm that we employ to enhance the performance of the associative memory. In Section 4, we discuss the image representation that is used in the proposed image recall system. Simulation results are presented in Section 5. Finally, conclusions are found in Section 6.

2. Overview of the Generalized Brain-State-in-a-Box (gBSB) and the Hybrid gBSB-McCulloch-Pitts Neural Models

In this paper, we use a gBSB neural model to design an image recall processor. The gBSB neural model has its root in the Brain-State-in-a-Box (BSB) neural model proposed by Anderson et al. in 1977 (cf. Anderson et al., 1989). The dynamics of the BSB neural model are described by the difference equation

$$x(k+1) = g\big(x(k) + \alpha W x(k)\big), \qquad (1)$$

with an initial condition x(0) = x_0, where x(k) ∈ R^n is the state of the BSB neural network at time k, α > 0 is a step size, W ∈ R^{n×n} is a symmetric weight matrix, and the activation function g : R^n → R^n is defined componentwise as

$$(g(x))_i = (\operatorname{sat}(x))_i \equiv \begin{cases} 1 & \text{if } x_i \ge 1, \\ x_i & \text{if } -1 < x_i < 1, \\ -1 & \text{if } x_i \le -1. \end{cases} \qquad (2)$$

The following definitions are used in the further discussion:

Definition 1. A point x_e is an equilibrium state of the dynamical system x(k+1) = T(x(k)) if x_e = T(x_e).

We will be concerned with the equilibrium states that are vertices of the hypercube H^n = [−1, 1]^n. That is, each equilibrium state belongs to the set {−1, 1}^n.

Definition 2. An equilibrium state x_e of x(k+1) = T(x(k)) is super stable if there exists ε > 0 such that for any y satisfying ‖y − x_e‖ < ε, we have T(y) = x_e, where ‖·‖ may be any p-norm of a vector.

Definition 3. A basin of attraction of an equilibrium state of the gBSB neural model is the set of points such that the trajectory of the gBSB model emanating from any point in the set converges to the equilibrium state.

The BSB neural net can be used to construct a neural associative memory, where each pattern vector is stored as a super stable equilibrium state. When a given initial state is located in the basin of attraction of a certain stored pattern, the network trajectory starting from this initial condition converges to the pattern. The given initial state can be interpreted as a noisy version of the corresponding stored pattern. In other words, the BSB network can recall the stored pattern successfully from a noisy vector if the noisy pattern is located close enough to the desired pattern.

A generalization of the BSB neural network, referred to as the generalized Brain-State-in-a-Box (gBSB) neural model, was proposed by Hui and Żak (1992). The gBSB neural model is characterized by a nonsymmetric weight matrix and offers more control over the extent of the basins of attraction of the equilibrium states. The dynamics of the gBSB model are represented by

$$x(k+1) = g\big((I_n + \alpha W)x(k) + \alpha b\big), \qquad (3)$$

where g(x) = sat(x) as in (2), I_n is the n × n identity matrix, b ∈ R^n is a bias vector, and W ∈ R^{n×n} is a weight matrix that is not necessarily symmetric, which makes it easier to implement associative memories using the gBSB net.

We now discuss the stability condition that we use when designing associative memory using the gBSB model. Let

$$L(x) = (I_n + \alpha W)x + \alpha b,$$

and let (L(x))_i be the i-th component of L(x). Suppose

$$v = [\,v_1 \; v_2 \; \cdots \; v_n\,]^T \in \{-1, 1\}^n,$$

that is, v is a vertex of H^n. This vertex v is an equilibrium state of the gBSB model if and only if

$$(L(v))_i v_i \ge 1, \quad i = 1, 2, \ldots, n.$$

We can show that if

$$(L(v))_i v_i > 1, \quad i = 1, 2, \ldots, n, \qquad (4)$$

then v is a super stable equilibrium state of the gBSB neural model (Hui and Żak, 1992; Lillo et al., 1994).
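A minimal sketch of this stability test, assuming a NumPy setting (the function name and the default step size are our own choices, not taken from the paper):

```python
import numpy as np

def is_super_stable(v, W, b, alpha=0.1):
    """Check condition (4): (L(v))_i * v_i > 1 for every i,
    where L(x) = (I_n + alpha*W)x + alpha*b."""
    Lv = v + alpha * (W @ v + b)        # (I + alpha*W)v + alpha*b
    return bool(np.all(Lv * v > 1.0))
```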

When the desired patterns are given, the associative memory should be able to store them as super stable equilibrium states of the gBSB network. Also, it should minimize the number of super stable equilibrium states that do not correspond to the given stored patterns. Such undesired equilibrium states are called spurious states. We can synthesize associative memories using the method proposed by Lillo et al. (1994). This design method is used in this paper. We now briefly present the method.

For given pattern vectors v^(j) ∈ {−1, 1}^n, j = 1, 2, ..., r, that we wish to store, we first form a matrix B = [b b ⋯ b] ∈ R^{n×r}, where

$$b = \sum_{j=1}^{r} \varepsilon_j v^{(j)}, \quad \varepsilon_j \ge 0, \quad j = 1, 2, \ldots, r,$$

and the ε_j's are design parameters. Then, choose D ∈ R^{n×n} such that

$$d_{ii} > \sum_{k=1,\,k\neq i}^{n} |d_{ik}| \quad\text{and}\quad d_{ii} < \sum_{k=1,\,k\neq i}^{n} |d_{ik}| + |b_i|, \quad i = 1, 2, \ldots, n,$$

and Λ ∈ R^{n×n} such that

$$\lambda_{ii} < -\sum_{k=1,\,k\neq i}^{n} |\lambda_{ik}| - |b_i|, \quad i = 1, 2, \ldots, n.$$

Finally, determine the weight matrix W using the formula

$$W = (DV - B)V^\dagger + \Lambda(I_n - VV^\dagger), \qquad (5)$$

where V = [v^(1) v^(2) ⋯ v^(r)] ∈ {−1, 1}^{n×r} and V^† is a pseudo-inverse of V. The matrices W and b constructed in this way implement an associative memory that is guaranteed to store the given patterns as super stable vertices of the hypercube H^n. A further discussion of the above method can be found in (Lillo et al., 1994; Park and Park, 2000; Park et al., 1999).
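As an illustration, the following sketch assembles W and b from the pattern matrix V along the lines of (5). It is not the authors' code; the diagonal choices for D and Λ are merely one simple way to satisfy the stated inequalities (with diagonal matrices, the off-diagonal sums vanish, so the conditions reduce to 0 < d_ii < |b_i| and λ_ii < −|b_i|, which assumes b_i ≠ 0):

```python
import numpy as np

def synthesize_gbsb(V, eps=None):
    """Sketch of the synthesis (5): V is n x r with columns v^(j) in {-1,+1}^n.
    Returns (W, b). Assumes the resulting b has no zero entries; choose the
    design parameters eps accordingly."""
    n, r = V.shape
    eps = np.ones(r) if eps is None else np.asarray(eps, dtype=float)
    b = V @ eps                              # b = sum_j eps_j * v^(j)
    D = np.diag(0.5 * np.abs(b))             # diagonal D: 0 < d_ii < |b_i|
    Lam = np.diag(-(np.abs(b) + 1.0))        # diagonal Lambda: lambda_ii < -|b_i|
    B = np.outer(b, np.ones(r))              # B = [b b ... b]
    Vp = np.linalg.pinv(V)                   # pseudo-inverse V^dagger
    W = (D @ V - B) @ Vp + Lam @ (np.eye(n) - V @ Vp)
    return W, b
```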

Next, we introduce the hybrid gBSB-McCulloch-Pitts neural network, which is characterized by the same difference equation (3), with g being the standard sign function,

$$(g(x))_i = (\operatorname{sign}(x))_i = \begin{cases} 1 & \text{if } x_i > 0, \\ 0 & \text{if } x_i = 0, \\ -1 & \text{if } x_i < 0. \end{cases} \qquad (6)$$

We now give the stability condition that we use when designing associative memory with the above model. As before, suppose that v = [v_1 v_2 ⋯ v_n]^T ∈ {−1, 1}^n. Then this vertex is a super stable equilibrium state of the above model if

$$(L(v))_i v_i > 0, \quad i = 1, 2, \ldots, n. \qquad (7)$$

It is easy to see that (4) implies (7). This means that W and b constructed for the gBSB network will also work for the hybrid gBSB-McCulloch-Pitts net.
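The two recall rules differ only in the output nonlinearity, which the following sketch makes explicit (function names and the fixed-point stopping rule are ours; the paper does not prescribe an implementation):

```python
import numpy as np

def gbsb_step(x, W, b, alpha=0.1):
    """One gBSB update, Eq. (3): sat((I + alpha*W)x + alpha*b)."""
    return np.clip(x + alpha * (W @ x + b), -1.0, 1.0)   # sat(.) of (2)

def hybrid_step(x, W, b, alpha=0.1):
    """One hybrid gBSB-McCulloch-Pitts update: same argument, sign(.) of (6)."""
    return np.sign(x + alpha * (W @ x + b))

def recall(x0, W, b, step=gbsb_step, max_iter=1000):
    """Iterate a step rule from the noisy pattern x0 until a fixed point
    (an equilibrium state) is reached or max_iter is exhausted."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        x_next = step(x, W, b)
        if np.array_equal(x_next, x):
            break
        x = x_next
    return x
```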

3. Overlapping Decomposition Method and an Error Correction Algorithm for High Dimensional Patterns

3.1. Idea of the Overlapping Decomposition

The designers of most associative memories face the serious problem of the quadratic growth of the number of interconnections with the problem size. Oh and Żak (2002; 2003) attacked these difficulties by proposing a design method for associative memory as a system of decomposed neural networks. As pointed out by Ikeda et al. (2001), a completely decoupled modularization gives rise to spurious memory problems. In Oh and Żak's papers, the idea of overlapping decomposition was used rather than decoupled modularization. Also, to further enhance the performance of the method, an error correction algorithm was added. The error correction algorithm builds on the overlapping idea; that is, it arises naturally from the overlapping decomposition. The reduced size of each sub-network of the memory system may reduce the recall capacity of the memory. By adding the error correction algorithm, the recall performance of the proposed neural associative memory is enhanced.

When the gBSB network is in the recall mode, it follows from (3) that the main computational load of the network results from the multiplication of a weight matrix W and a state vector x. The number of multiplications of the elements of W and x is of the order of n², where n is the dimension of the state vector x. The same is true for the number of additions. This results in a heavy computational cost when the dimension of the pattern vector is large.

We decompose each stored pattern into sub-patterns and construct gBSB sub-networks using (5). That is, each gBSB sub-network stores the corresponding sub-patterns of the stored patterns. We employ the overlapping decomposition method and the error correction algorithm to improve the performance of the neural networks. Each sub-network can be designed independently of the others, and the recall error may be reduced using the error correction process (Oh and Żak, 2002; 2003).

An example of the overlapping decomposition of a pattern is shown in Fig. 1. This is the case when the pattern is represented as a 2-dimensional image. As we can see in Fig. 1, the original pattern is decomposed so that there exist overlapping parts between neighboring sub-patterns.

Fig. 1. Example of a toroidal overlapping decomposition of a given pattern of size pm × qn into sub-patterns V_{ij}, i = 1, ..., p, j = 1, ..., q, each of size (m+1) × (n+1).

Also, by adopting a toroidal structure, we ensure a symmetry of the image partitions, which makes it easy to implement the image processing sub-networks.
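A sketch of such a decomposition, assuming NumPy (the function name and the generalization to m_o overlapping rows and n_o overlapping columns are ours):

```python
import numpy as np

def toroidal_decompose(pattern, p, q, mo=1, no=1):
    """Decompose a (p*m) x (q*n) pattern into a p x q grid of overlapping
    sub-patterns of size (m+mo) x (n+no), as in Fig. 1 for mo = no = 1.
    Each sub-pattern shares mo rows with the one below and no columns
    with the one to the right, wrapping around toroidally at the edges."""
    M, N = pattern.shape
    m, n = M // p, N // q
    subs = {}
    for i in range(p):
        for j in range(q):
            rows = [(i * m + r) % M for r in range(m + mo)]   # vertical wrap
            cols = [(j * n + c) % N for c in range(n + no)]   # horizontal wrap
            subs[(i, j)] = pattern[np.ix_(rows, cols)]
    return subs
```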

3.2. Computational Complexity of the Overlapping Decomposition Method

To investigate the computational complexity of the overlapping decomposition method, consider the pattern shown in Fig. 1. The dimension of the original pattern is pm × qn. Therefore, if we used just one gBSB neural network to process this image, its weight matrix W would be a pqmn × pqmn matrix containing (pqmn)² elements. If we use the image partition shown in Fig. 1, then the dimension of each sub-pattern is (m+1) × (n+1). Hence, the number of elements of the weight matrix of each sub-network is (m+1)²(n+1)². Because there are pq weight matrices, the overall number of their elements is pq(m+1)²(n+1)². Therefore, the ratio of the sizes of the weight matrices of the sub-networks to the size of the single processing network is

$$\frac{pq(m+1)^2(n+1)^2}{(pqmn)^2} = \frac{1}{pq}\left(1+\frac{1}{m}\right)^2\left(1+\frac{1}{n}\right)^2.$$

If the sub-networks are designed so that the number of overlapping rows is m_o and the number of overlapping columns is n_o, then the above ratio becomes

$$\frac{1}{pq}\left(1+\frac{m_o}{m}\right)^2\left(1+\frac{n_o}{n}\right)^2.$$

Let

$$f(p) = \frac{1}{p}\left(1+\frac{m_o}{m}\right)^2,$$

and pm = M, where M is a constant. We assume that m_o < m = M/p, that is, the number of overlapping rows is smaller than the number of rows of the sub-pattern. Then,

$$f(p) = \frac{1}{p}\left(1+\frac{m_o}{M}\,p\right)^2 = \frac{1}{p}\left(1+\frac{2m_o}{M}\,p+\frac{m_o^2}{M^2}\,p^2\right) = \frac{1}{p}+\frac{2m_o}{M}+\frac{m_o^2}{M^2}\,p.$$

Therefore,

$$\frac{\mathrm{d}f(p)}{\mathrm{d}p} = -\frac{1}{p^2}+\frac{m_o^2}{M^2} < 0,$$

because 0 < p < M/m_o from the above assumption. This means that f(p) is a decreasing function of p in the interval 0 < p < M/m_o. If we let

$$g(q) = \frac{1}{q}\left(1+\frac{n_o}{n}\right)^2 = \frac{1}{q}\left(1+\frac{n_o}{N}\,q\right)^2,$$

where qn = N is a constant and n_o < n = N/q, then g(q) is a decreasing function of q in the interval 0 < q < N/n_o. This demonstrates that the amount of computer resources occupied by the weight matrices becomes smaller as the numbers p and q grow.
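As a quick illustration of the savings (with numbers chosen for the example, not taken verbatim from the paper's configuration, which additionally carries a bit dimension per pixel):

```python
# Storage ratio for a 150 x 150 pattern split into a 10 x 10 grid of
# sub-patterns with one overlapping row and column (mo = no = 1).
p, q = 10, 10
m, n = 150 // p, 150 // q          # 15 x 15 core sub-pattern
mo, no = 1, 1
ratio = (1 / (p * q)) * (1 + mo / m) ** 2 * (1 + no / n) ** 2
print(f"{ratio:.4f}")              # ~0.0129: about 1.3% of the storage
                                   # needed by a single undecomposed network
```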

Let us now consider the same decomposition as in Fig. 1 to determine the number of multiplications between the elements of W and x(k) in (3) during the recall process when only one neural network is used. Because W is a pqmn × pqmn matrix and x(k) is a pqmn-dimensional vector in the undecomposed network, (pqmn)² multiplications between the elements of W and x(k) are needed for a transition to the next state x(k+1). By the same reasoning, (m+1)²(n+1)² multiplications are needed in a sub-network if we use the overlapping decomposition method to process the image. Considering that there are pq sub-networks, we get a total of pq(m+1)²(n+1)² multiplications. Therefore, if we let N_m be the number of multiplications in the gBSB network that is not decomposed, then the total number of multiplications in the network consisting of decomposed sub-networks is reduced to

$$\frac{1}{pq}\left(1+\frac{1}{m}\right)^2\left(1+\frac{1}{n}\right)^2 N_m.$$

In the case when there are m_o overlapping rows and n_o overlapping columns, the number of multiplications is

$$\frac{1}{pq}\left(1+\frac{m_o}{m}\right)^2\left(1+\frac{n_o}{n}\right)^2 N_m.$$

This shows that the number of multiplications in the recall process of the neural associative memory also decreases as the numbers p and q grow in the intervals 0 < p < M/m_o and 0 < q < N/n_o.

3.3. Reducing the Classification Error in the Image Recall Process

While the number of computations is reduced by the smaller dimensions of the decomposed sub-patterns and the corresponding sub-networks, the capacity of each sub-network is lower than that of the undecomposed network. This may lead to a high recall error rate. To reduce this error rate, we add an error correction stage after the recall stage in order to correct possible recall errors (Oh and Żak, 2002; 2003). The overlapping decomposition plays an important role in the error correction procedure. In the decomposition of Fig. 1, every sub-pattern overlaps with four neighboring sub-patterns. After the recall process, we check the overlapping portions of each sub-pattern for mismatches and record, for each sub-pattern, the number of overlapping portions in which mismatches occur; this number is an integer from the set {0, 1, 2, 3, 4}. We first check whether there are sub-patterns with no mismatches. If such a sub-pattern is found, the algorithm is initiated by locating a marker on it; the marker then moves horizontally to the next sub-pattern in the same row until a sub-pattern with mismatches is encountered. If all sub-patterns have mismatches, we select a sub-network with the minimal number of mismatches. Suppose that the marker is located on the sub-network N_{ij}, and assume that the right and the bottom portions of V_{ij} have mismatches. Note that the decomposed input pattern corresponding to the sub-network N_{ij} is denoted by X_{ij}, and that V_{ij} denotes the result of the recall process; see Figs. 1 and 2 for an explanation of this notation. The (n+1)-th column of X_{ij} is replaced with the first column of V_{i,j+1}, and the (m+1)-th row of X_{ij} is replaced with the first row of V_{i+1,j}. That is, the algorithm replaces the mismatched overlapping portions of X_{ij} with the corresponding portions of its neighboring sub-patterns V_{i,j+1}, V_{i,j−1}, V_{i+1,j}, or V_{i−1,j}, which are the results of the recall process of the corresponding sub-networks. After the replacement, the sub-network goes through the recall process again, and the number of mismatches of the resultant sub-pattern is examined. If the number of resultant mismatched portions is smaller than the previous one, the algorithm keeps this modified result; otherwise, the previous result V_{ij} is kept. Then, the marker moves horizontally to the next sub-network. After the marker returns to the initial sub-network of the row, it moves vertically to the next row and repeats the same procedure for the new row. Note that the row following the p-th row of the pattern shown in Fig. 1 is its first row. The error correction stage is finished when the marker returns to the sub-network from which it initially started. We can repeat the error correction algorithm so that the sub-patterns go through the error correction stage multiple times.

The main idea behind the error correction algorithm is to replace the incorrectly recalled elements of a sub-pattern with the ones from the neighboring sub-patterns and let the sub-pattern modified in this way go through the recall process again. If the elements from the neighboring sub-patterns are correctly recalled, it is more probable that the current sub-pattern will be recalled correctly at the error correction stage. The reason is that replacing the overlapping elements may have put the sub-pattern into the basin of attraction of the training sub-pattern.
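A sketch of one pass of this procedure (the callables recall_fn, mismatch_count, and patch_overlaps are hypothetical placeholders for the paper's components, passed in so the sketch stays self-contained):

```python
def error_correction_pass(subs, p, q, recall_fn, mismatch_count, patch_overlaps):
    """One marker sweep of the error correction stage of Section 3.3.

    subs                     : dict (i, j) -> current recall result V_ij
    recall_fn(x)             : recall process of a sub-network on sub-pattern x
    mismatch_count(subs, ij) : number (0..4) of mismatched overlapping
                               portions of subs[ij] w.r.t. its neighbors
    patch_overlaps(subs, ij) : input X_ij with mismatched overlapping rows and
                               columns replaced by the neighbors' results
    """
    # Put the marker on a sub-pattern with no mismatches if one exists,
    # otherwise on one with the minimal number of mismatches.
    i0, j0 = min(subs, key=lambda ij: mismatch_count(subs, ij))
    for di in range(p):                        # rows wrap toroidally
        i = (i0 + di) % p
        for dj in range(q):                    # marker moves along the row
            ij = (i, (j0 + dj) % q)
            before = mismatch_count(subs, ij)
            if before == 0:
                continue
            # Re-run the recall on the patched sub-pattern; keep the result
            # only if it reduces the number of mismatched portions.
            trial = dict(subs)
            trial[ij] = recall_fn(patch_overlaps(subs, ij))
            if mismatch_count(trial, ij) < before:
                subs[ij] = trial[ij]
    return subs
```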

In summary, the proposed neural image recall system operates as follows: Prototype images are decomposed into sub-image patterns with a toroidal overlapping decomposition structure, and the corresponding individual sub-networks are constructed using (5), independently of the other sub-networks. The overlapping portions of adjacent stored sub-patterns coincide with each other if they are decomposed from the same pattern. When a noisy image is presented to the network, it is decomposed into sub-patterns. Then, each sub-pattern is assigned to the corresponding sub-network as an initial condition, and each sub-network starts processing its initial sub-pattern. After all the individual sub-networks finish processing their sub-patterns, the overlapping portions are checked for matches with the adjacent sub-patterns. If the recall process is completed without recall errors, all the overlapping portions of the sub-patterns processed by the corresponding sub-networks match their corresponding neighboring sub-patterns. If a mismatched boundary is found between two adjacent sub-patterns, we conclude that a recall error occurred in at least one of the two neighboring sub-networks during the recall process. In other words, the network detects a recall error. Once an error is detected, the error correction algorithm described above is used to correct the recall errors. After the error correction process is finished, we combine the resultant sub-patterns to reconstruct the image. The flow of the pattern processing described above is illustrated in Fig. 2.

Fig. 2. Image recall procedure using gBSB neural sub-networks with overlapping decompositions and the error correction algorithm: decomposed input sub-patterns X_{ij} pass through the recall processor of sub-network N_{ij}, producing V_{ij}, then through the error correction algorithm, producing U_{ij}, from which the output pattern is reconstructed.

4. Representation of an Image as a Pattern for gBSB-Based Neural Associative Memory

4.1. Image Representation

An image can be defined as a function f(x, y), where x and y are spatial coordinates (Gonzalez and Wintz, 1987). For monochrome images, f is a scalar function that represents the light intensity at each point. A digital monochrome image may be represented by a matrix in the 2-dimensional space, whose (i, j)-th element is f(i, j), where (i, j) is a discrete spatial coordinate. Each element of the matrix is called a pixel. A pixel in a digital image usually has an integer value in a finite range, so that it can be represented by a finite number of binary digits. For example, if 0 ≤ f(i, j) ≤ L − 1, then each pixel can be represented by ⌈log₂ L⌉ bits, where ⌈·⌉ denotes the ceiling operator. Consequently, a digital monochrome image can be represented by a matrix whose elements are binary strings of finite length.

The RGB color coordinate system is one of the most popular color coordinate systems. Based on the RGB color coordinate system, each pixel of a digital color image is represented by three components: red, green, and blue. In other words, f(i, j) is a vector valued function in a digital color image, and the digital color image can be represented as three matrices, each corresponding to the red, green, or blue component of the image. Because the three components of each pixel have integer values in a finite range, we can use binary strings of finite length to represent a digital color image.

As we have seen in Section 2, gBSB-based neural associative memory can be used as a pattern classifier if the patterns are represented by vectors whose elements are −1 or +1. Because digital images can be represented with binary numbers, they can be used as patterns to be processed by the gBSB net. After representing each pixel as a binary number, we process each digit of the binary number as an element of a pattern vector. For example, if the monochrome image is represented by an M × N matrix and the binary representation of each pixel has k bits, the size of a pattern becomes M × N × k. Using a gBSB net, a noisy input image can be classified into one of the stored images if its vector representation is located within the basin of attraction of a stored image in the gBSB neural network memory.
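A sketch of this encoding (our own helper names; assumes integer pixel values and NumPy):

```python
import numpy as np

def image_to_pattern(img, k):
    """Encode an M x N integer image with k bits per pixel as a +/-1
    pattern vector of length M*N*k: each bit (MSB first) maps 0 -> -1,
    1 -> +1."""
    bits = (img[..., None] >> np.arange(k)[::-1]) & 1
    return (2 * bits.astype(np.int8) - 1).ravel()

def pattern_to_image(v, shape, k):
    """Inverse mapping: +/-1 pattern vector back to k-bit pixel values."""
    M, N = shape
    bits = (v.reshape(M, N, k) + 1) // 2
    weights = 1 << np.arange(k)[::-1]        # MSB first
    return (bits * weights).sum(axis=-1)
```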

In the following section, we propose using uniform quantization to further reduce the computational load of the image recall process.

4.2. Uniform Quantization

In our image recall system, all the images to be processed are in digitized format, that is, each pixel of an image is assumed to be already represented by a binary number with a fixed number of digits. Our goal in this section is to present a technique that reduces the number of bits required to represent a pixel. We propose to use uniform quantization of images, which can be viewed as a simple image compression method. This lightens the computational complexity of the image recall process. In our further discussion, we use the following definition (Gray and Neuhoff, 1998):

Definition 4. A quantizer is a mapping defined on a set of intervals S = {S_i; i ∈ I}, where the index set I is ordinarily a collection of consecutive integers beginning with 0 or 1, together with a set of reconstruction levels C = {y_i; i ∈ I}, so that the overall quantizer q is defined by q(x) = y_i for x ∈ S_i, which can be expressed concisely as

$$q(x) = \sum_i y_i \, 1_{S_i}(x),$$

where the indicator function 1_S(x) is 1 if x ∈ S and 0 otherwise.

A quantizer is said to be uniform if the reconstruction levels are equispaced and the thresholds are midway between adjacent levels (Gray and Neuhoff, 1998). All intervals in a uniform quantizer have the same size, except possibly the outermost intervals (Sayood, 1996).

Suppose now that we have a k-bit image and want to represent this image with m (< k) bits per pixel using uniform quantization. The number of levels of the original image is 2^k, and that of the quantized image is 2^m. The simplest way to perform uniform quantization is to divide the range [0, 2^k − 1] into 2^m intervals of the same length and assign a binary number with m digits to each interval. For example, assume that we want to quantize an 8-bit monochrome image uniformly using a 2-bits-per-pixel representation. To do this, we divide the range [0, 255] into the 4 intervals [0, 63], [64, 127], [128, 191], [192, 255], and assign the binary numbers 00, 01, 10, 11 to the intervals.

In this paper, we used a slightly different way to quantize images. Instead of dividing the range [0, 2^k − 1] into intervals of the same length, we allocated the same length to the inner intervals and assigned half that length to the two outermost intervals. As an example, we divide the range [0, 255] into the 4 intervals [0, 42], [43, 127], [128, 212], [213, 255] and assign 00, 01, 10, 11 to the intervals. We used this scheme because the images in our simulations contained many extreme pixel values, i.e., 0 and 255.
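A sketch of such a quantizer (our own helper names). Rounding each pixel to the nearest of 2^m equispaced levels over [0, 2^k − 1] yields exactly the half-length outermost intervals described above:

```python
import numpy as np

def quantize(img, k, m):
    """Map k-bit pixels to m-bit codes; inner decision intervals share one
    length, the two outermost get half of it. For k = 8, m = 2 this gives
    the intervals [0, 42], [43, 127], [128, 212], [213, 255]."""
    levels = 2 ** m
    return np.round(img / (2 ** k - 1) * (levels - 1)).astype(int)

def dequantize(code, k, m):
    """Reconstruction levels equispaced over the full range [0, 2**k - 1]."""
    levels = 2 ** m
    return np.round(code / (levels - 1) * (2 ** k - 1)).astype(int)
```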

5. Simulation Experiments

We simulated the proposed image recall system using 150 × 150 gray scale images as test images. The pixels of the original images were represented with 8 bits. To reduce the computational load, we carried out the uniform quantization described in Section 4.2 so that the quantized images could be represented with 6 bits per pixel. These image patterns are shown in Fig. 3. We simulated the image recall system with the gBSB neural networks and with the hybrid gBSB-McCulloch-Pitts neural nets.

Fig. 3. Quantized prototype monochrome image patterns with 6 bits/pixel.

An example result of image recall with the gBSB neural networks is shown in Fig. 4. The input image in Fig. 4(a) is a noisy version of a stored image pattern. The noise in this image is the so-called 'salt-and-pepper' noise with an error probability of 0.5. In other words, each pixel may be corrupted by noise with probability 0.5, and a corrupted pixel is white or black with equal probability. The input image was quantized with the same method as was used for the stored image patterns. The whole image pattern was decomposed into 100 (10 × 10) sub-patterns using the overlapping decomposition method described in Section 3. Each sub-pattern went through the recall process of the corresponding sub-network. The result after the recall processes of all the sub-networks is shown in Fig. 4(b). There were 5 mismatched portions between sub-patterns in this example. The next stage was the error correction process. The collection of sub-images in Fig. 4(c) is the result of the error correction process; there were no mismatched portions between these sub-patterns. Finally, the reconstructed image is shown in Fig. 4(d). In this image, there was no erroneously reconstructed pixel out of 22500 (150 × 150), i.e., no pixel in the reconstructed image had a different value than in the corresponding stored prototype image.

Fig. 4. Simulation result of image recall by gBSB neural networks with monochrome images: (a) input image, (b) result after the recall process, (c) result after the error correction process, (d) output image.

In Fig. 5, an example result of the image recall system employing the hybrid gBSB-McCulloch-Pitts neural model is shown. We used the same overlapping decomposition and the same noisy input image as in our simulation of the image recall system using the gBSB neural model, shown in Fig. 5(a). That is, the input image is corrupted by salt-and-pepper noise with an error probability of 0.5, and the image was decomposed into 100 (10 × 10) sub-images using the overlapping decomposition method. The result of the recall processes is shown in Fig. 5(b), where there are 87 mismatched portions between sub-images. Figure 5(c) is the result after the error correction process; 57 mismatches still remain. The final result of the image recall is shown in Fig. 5(d). In this result, 4491 pixels out of 22500 have different values than the pixels of the corresponding stored prototype image, i.e., 19.96% of all the pixels were erroneously reconstructed.

Fig. 5. Simulation result of image recall by hybrid gBSB-McCulloch-Pitts neural networks with monochrome images: (a) input image, (b) result after the recall process, (c) result after the error correction process, (d) output image.

In Figs. 6 and 8(a), the image recall systems using the gBSB model and the hybrid gBSB-McCulloch-Pitts model are compared with each other. In this simulation, the input images were corrupted by salt-and-pepper noise with different error rates. Also, in Figs. 7 and 8(b), the results of the simulations are shown for input images corrupted by additive Gaussian noise, and the two models are again compared with each other. The input images were corrupted by adding Gaussian noise to the stored prototype image with the standard deviation varying from 0 to 15. Each image used in this simulation had 64 gray levels because a pixel was represented by 6 bits. Therefore, for example, a standard deviation of 15 means that the standard deviation of the Gaussian noise was almost a quarter of the full gray scale of the image. As we can see in these figures, the results from the system using the gBSB model were better than the ones from the system using the hybrid gBSB-McCulloch-Pitts model.

Fig. 6. Simulation results of image recall using the gBSB neural model and the hybrid gBSB-McCulloch-Pitts model (in the case of the salt-and-pepper input noise); left: input images, center: gBSB model, right: hybrid model. (a) Input error rate: 0.3; reconstruction error rate — gBSB model: 0, hybrid model: 0. (b) Input error rate: 0.4; reconstruction error rate — gBSB model: 0, hybrid model: 0.0198. (c) Input error rate: 0.5; reconstruction error rate — gBSB model: 0, hybrid model: 0.1996.

Fig. 7. Simulation results of image recall using the gBSB neural model and the hybrid gBSB-McCulloch-Pitts model (in the case of the Gaussian input noise); left: input images, center: gBSB model, right: hybrid model. (a) Standard deviation of the additive Gaussian noise in the input: 5; reconstruction error rate — gBSB model: 4.89 × 10⁻⁴, hybrid model: 0.456. (b) Standard deviation: 10; reconstruction error rate — gBSB model: 0.010, hybrid model: 0.577. (c) Standard deviation: 15; reconstruction error rate — gBSB model: 0.072, hybrid model: 0.609.

We can apply the same procedure to the recall of color images. The image patterns used in our simulation are shown in Fig. 9. The pixels in the original images were represented by 24 bits (8 bits for each of the R, G, B components) before the uniform quantization preprocessing. The image patterns in Fig. 9 are quantized versions of the original images with 6 bits per pixel (2 bits for each of the R, G, B components). An example of a simulation result is shown in Fig. 10. The image recall system was composed of gBSB neural networks in this simulation. The size of the images used in this simulation was 300 × 200. The noisy input image in Fig. 10(a) was generated in such a way that each of the three R, G, B matrices was corrupted by salt-and-pepper noise. The probability that each element of a pixel might be corrupted by the noise was 0.4 in this example. The patterns were decomposed into 600 (= 30 × 20) sub-patterns in this simulation. The number of mismatched portions between sub-patterns was 26 after the recall process, and it was reduced to 8 by the subsequent error correction process. The final reconstructed image is shown in Fig. 10(d). The number of incorrectly reconstructed pixels was 150 out of 60000.

Fig. 9. Quantized prototype color image patterns with 6 bits/pixel.

Fig. 10. Simulation result of image recall using the gBSB neural model with color images: (a) input image, (b) result after the recall process, (c) result after the error correction process, (d) output image.

Fig. 8. Comparison between the system employing the gBSB neural model and the system employing the hybrid gBSB-McCulloch-Pitts model: the rate of incorrectly reconstructed pixels versus (a) the error rate of the input image (salt-and-pepper noise) and (b) the standard deviation of the Gaussian noise in the input image.

Remark 1. The gBSB based system outperformed the hybrid gBSB-McCulloch-Pitts network based system. An explanation for this can be found in the difference between the activation functions of the two models, given by (2) and (6), respectively. In the hybrid gBSB-McCulloch-Pitts neural network, the initial state x(0) moves to a vertex of the hypercube H^n in one step. If this vertex is a stable equilibrium state that is not a prototype vector, a recall error results. In other words, if the initial direction of the trajectory is towards a non-prototype stable equilibrium state, it yields a recall error. In the case of the gBSB neural network, on the other hand, the trajectory stays inside the hypercube for several time steps, depending on the step size α. This means that the convergence of the state in the gBSB network is less sensitive to the initial direction of the trajectory: even if the trajectory initially heads towards a non-prototype stable equilibrium state, it can still change direction towards the corresponding stored stable equilibrium state as time passes.

Remark 2. The recall results of the hybrid gBSB-McCulloch-Pitts neural network based system deteriorated significantly, especially when an input image was corrupted by Gaussian noise. This is because the binary representation of an image corrupted by Gaussian noise may be very far, in the Hamming distance sense, from the stored prototype image to which the input pattern is supposed to converge. In the case of salt-and-pepper noise, only the pixels selected with a given error probability are replaced with pixels representing white or black. In the case of Gaussian noise, most pixel values are changed by adding small numbers to them or subtracting small numbers from them, and the binary representation of the modified values can be very far from the binary representation of the original pixel values. The image corrupted by Gaussian noise may thus be very remote from the stored prototype image in the Hamming distance sense and may not belong to the basin of attraction of the corresponding stored prototype image.
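A one-line illustration of this effect for the 6-bit pixels used in the simulations: a Gaussian perturbation that nudges a pixel across a binary carry boundary flips many bits at once.

```python
# 6-bit pixels 31 and 32 differ by one gray level, yet their binary
# representations 011111 and 100000 differ in all six bit positions.
a, b = 31, 32
hamming = bin(a ^ b).count("1")
print(hamming)   # -> 6
```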

6. Conclusions

In this paper, we described an image recall system constructed by employing a large scale gBSB neural network. We used the overlapping decomposition method to construct this network. The recall system works as a neural associative memory that also contains an error correction subsystem to enhance the recall performance. The proposed system was able to recall the prototype images stored in the neural network from noisy input images, even when the probability that an image pixel was corrupted by salt-and-pepper noise was quite high, as high as 0.5. Also, when the input image was corrupted by Gaussian noise, the proposed system successfully reconstructed the stored prototype image unless the standard deviation of the additive Gaussian noise was too high. We built another image recall system employing the hybrid gBSB-McCulloch-Pitts neural model, and compared the performance of this system with that of the one using the gBSB neural model. The performance of the hybrid gBSB-McCulloch-Pitts neural network based system was not as good as that of the gBSB model based system, especially when the input noise was Gaussian and when the input error rate of the salt-and-pepper noise was high.

In this paper, we assumed that the test images were simply noisy versions of the prototype images. If the test images were rotated, scaled, or shifted, the performance of the proposed system would most probably deteriorate significantly. Designing a system whose performance is insensitive to such transformations of the test images is an open problem that is left for future research. We used a binary representation of the intensity values of images. The use of different image representation methods in the proposed image recall system can be considered; constructing an improved image representation method seems to be an interesting research topic. The uniform quantization method was used to reduce the computational load, mainly because of its simplicity. The quality of the quantized images would be enhanced by a quantization method that depends on the statistics of the training images. We leave this issue for future research.

Acknowledgement

We are grateful for the constructive comments of the reviewers.


References

Akar M. and Sezer M.E. (2001): Associative memory design using overlapping decompositions. — Automatica, Vol. 37, No. 4, pp. 581–587.

Anderson J.A. (1995): An Introduction to Neural Networks. — Cambridge: A Bradford Book, The MIT Press.

Anderson J.A., Silverstein J.W., Ritz S.A. and Jones R.S. (1989): Distinctive features, categorical perception, probability learning: Some applications of a neural model, In: Neurocomputing: Foundations of Research (J.A. Anderson and E. Rosenfeld, Eds.). — Cambridge, MA: The MIT Press, ch. 22, pp. 283–325, reprint from Psych. Rev. 1977, Vol. 84, pp. 413–451.


Golden R.M. (1993): Stability and optimization analyses of the generalized Brain-State-in-a-Box neural network model. — J. Math. Psych., Vol. 37, No. 2, pp. 282–298.

Gonzalez R.C. and Wintz P. (1987): Digital Image Processing, 2nd Ed. — Reading: Addison-Wesley.

Gray R.M. and Neuhoff D.L. (1998): Quantization. — IEEE Trans. Inf. Theory, Vol. 44, No. 6, pp. 2325–2383.

Hassoun M.H. (1995): Fundamentals of Artificial Neural Networks. — Cambridge: A Bradford Book, The MIT Press.

Hui S. and Żak S.H. (1992): Dynamical analysis of the Brain-State-in-a-Box (BSB) neural models. — IEEE Trans. Neural Netw., Vol. 3, No. 1, pp. 86–94.

Ikeda M. and Šiljak D.D. (1980): Overlapping decompositions, expansions and contractions of dynamic systems. — Large Scale Syst., Vol. 1, No. 1, pp. 29–38.

Ikeda N., Watta P., Artiklar M. and Hassoun M.H. (2001): A two-level Hamming network for high performance associative memory. — Neural Netw., Vol. 14, No. 9, pp. 1189–1200.

Lillo W.E., Miller D.C., Hui S. and Żak S.H. (1994): Synthesis of Brain-State-in-a-Box (BSB) based associative memories. — IEEE Trans. Neural Netw., Vol. 5, No. 5, pp. 730–737.

Oh C. and Żak S.H. (2002): Large scale neural associative memory design. — Przegląd Elektrotechniczny (Electrotechnical Review), Vol. 2002, No. 10, pp. 220–225.

Oh C. and Żak S.H. (2003): Associative memory design using overlapping decomposition and generalized Brain-State-in-a-Box neural networks. — Int. J. Neural Syst., Vol. 13, No. 3, pp. 139–153.

Park J. and Park Y. (2000): An optimization approach to design of generalized BSB neural associative memories. — Neural Comput., Vol. 12, No. 6, pp. 1449–1462.

Park J., Cho H. and Park D. (1999): Design of GBSB neural associative memories using semidefinite programming. — IEEE Trans. Neural Netw., Vol. 10, No. 4, pp. 946–950.

Sayood K. (1996): Introduction to Data Compression. — San Francisco: Morgan Kaufmann.

Schultz A. (1993): Collective recall via the Brain-State-in-a-Box network. — IEEE Trans. Neural Netw., Vol. 4, No. 4, pp. 580–587.

Zetzsche C. (1990): Sparse coding: the link between low level vision and associative memory, In: Parallel Processing in Neural Systems and Computers (R. Eckmiller, G. Hartmann and G. Hauske, Eds.). — Amsterdam: Elsevier, pp. 273–276.

Received: 16 August 2004
Revised: 11 January 2005

