ANALOG AND DIGITAL REALIZATIONS OF

THE HOGG-HUBERMAN NEURAL MODEL

by

KEVIN J. SPINHIRNE, B.S. in E.E.

A THESIS

IN

ELECTRICAL ENGINEERING

Submitted to the Graduate Faculty of Texas Tech University in

Partial Fulfillment of the Requirements for

the Degree of

MASTER OF SCIENCE

IN

ELECTRICAL ENGINEERING

Approved

Chairperson of the Committee

Accepted

Dean of the Graduate School

December, 1989


ACKNOWLEDGEMENTS

I would like to thank Dr. Gustafson for his guidance and

patience during my graduate work and the writing of this thesis. I

also wish to thank Dr. Mitra and Dr. Ford for serving on my

committee and for their constructive observations.

In addition, I would like to express my deepest appreciation

to my parents, for their continual faith in me and for instilling in me

some of the values which have helped me get this far.

Finally, I give a tremendous thank you to all of my fellow

students, teachers, and friends who have ever encouraged me to

succeed in my educational goals, and especially to Lori, to whom I

dedicate this thesis, who has been an immeasurable source of

understanding, support, and personal motivation.


TABLE OF CONTENTS

ACKNOWLEDGEMENTS
LIST OF TABLES
LIST OF FIGURES

CHAPTER

I. INTRODUCTION TO NEURAL NETWORKS
    Introduction
    Evolution of Neural Networks
    General Features of Neural Networks
    Applications and Advantages
    Implementations of Neural Networks
    Summary

II. THE HOGG-HUBERMAN NEURAL MODEL
    Introduction
    1-Dimensional H-H Neural Model
    General Model Structure
    Output and Memory Update
    The H-H Model in Pattern Recognition
    Learning Phase
    Recognition Phase
    Summary

III. ANALOG CIRCUIT REPRESENTATION OF THE HOGG-HUBERMAN NEURAL MODEL
    Introduction
    Design Overview of an Analog H-H Neuron
    Output Update Circuitry
    Comparator Circuitry
    Memory and Update Circuitry
    Timing
    Summary

IV. DIGITAL CIRCUIT REPRESENTATION OF A HOGG-HUBERMAN NEURAL MODEL
    Introduction
    Design Overview
    Output Logic
    Comparator Logic
    Memory Logic
    Timing
    Summary

V. CIRCUIT SIMULATIONS AND RESULTS
    Introduction
    The SPICE program
    Analog Model Circuit Simulation
    Single Neuron Simulation
    2 X 2 Network Simulation
    Neural Speed
    Digital Model Circuit Simulation
    Single Neuron Simulation
    Network Simulations
    Neural Speed
    Summary

VI. POSSIBLE FUTURE WORK AND CONCLUSION
    Possible Future Work
    Conclusion

LIST OF REFERENCES

APPENDICES
    A. SAMPLE SPICE INPUT LISTING FOR AN ANALOG H-H NEURON CIRCUIT
    B. FORTRAN LISTING FOR SIMULATION OF AN H-H NEURAL NETWORK
    C. A SAMPLE SPICE INPUT LISTING FOR A DIGITAL H-H NEURON CIRCUIT

LIST OF TABLES

1. Output binary value in terms of neuron input and memory values
2. Output signal logic table
3. Increment and decrement signals logic table
4. Memory flip-flops set and reset signal state table
5. Memory reset logic table
6. SPICE results for analog neuron with various inputs and corresponding outputs and errors

LIST OF FIGURES

1. 1-dimensional Hogg-Huberman neural network general model structure
2. H-H network output, (a), and memory, (b), update cycles
3. Block diagram of analog H-H neuron circuit
4. Analog neuron circuit output update block diagram
5. Difference amplifier op-amp circuit
6. Multiplier circuit for analog H-H neuron
7. Op-amp limiter circuit with offset
8. Comparator circuitry block diagram for analog H-H neuron circuit
9. Op-amp limiter/nuller circuit with offset
10. Four input inverting summer
11. Analog H-H neuron circuit memory block diagram
12. Non-inverting two input summer
13. Memory limiting block diagram for analog H-H neuron circuit
14. Sample and hold circuit
15. Analog H-H neuron circuit signals timing
16. Digital H-H neuron circuit block diagram
17. Karnaugh Maps for output update logic of digital H-H neuron circuit
18. Implementation of AND, OR, and NOT logic functions with NAND gates
19. Output update logic circuitry for digital H-H neuron circuit
20. Karnaugh reduction maps for increment and decrement logic of digital H-H neuron circuit
21. Memory increment/decrement logic circuitry for digital H-H neuron
22. NAND gate implementation of a clocked R-S flip-flop and state table
23. Set and reset logic Karnaugh Maps for digital H-H neuron memory flip-flop circuitry
24. Set and reset logic circuitry for memory of digital H-H neuron
25. Memory reset logic for digital H-H neuron circuit
26. Digital H-H neuron circuit signals timing
27. 741 op-amp equivalent circuit used for SPICE simulation of analog H-H neuron
28. SPICE and software simulations of training a 2 X 2 analog H-H neural network to two training patterns
29. SPICE and software simulations of training a 2 X 2 analog H-H neural network to two patterns with non-saturated outputs
30. NAND gate circuit representation used for SPICE simulation of digital H-H neuron
31. Pattern recognition ability of a 10 X 10 digital H-H neural network


CHAPTER I

INTRODUCTION TO NEURAL NETWORKS

Introduction

Since man first began to analyze and wonder about his own

intelligence, his curiosity has led him to try and reproduce that

intelligence artificially. Whereas considerable success has been

achieved in designing serial computing machines which perform

multiple complex calculations extremely quickly and efficiently,

efforts to produce machines which display some of the fundamental

qualities which define true intelligence have been largely unsuccessful. Tasks which humans find difficult, such as problems

requiring large computations and great accuracy, are well suited to

computers. On the other hand, tasks which humans find simple,

such as image and speech recognition, are tasks which pose

tremendous problems to traditional computing architectures. In

addition, traditional computers cannot adapt themselves, or learn in

the presence of new information or surroundings in the same sense

as humans do. These shortcomings have led many researchers to

turn to the study of new types of architectures for information

processing. One area which recently has piqued the interest of

many is that of artificial neural networks. It is the hope of those

studying these neural networks that they may be a solution to some of the more complex problems of creating artificially intelligent systems.

Neural networks are not a new concept, the beginnings of

which can be traced back several decades. It is only recently,

however, that interest has been greatly renewed and many

advances have been made in this area. Artificial neural networks

are based on processing information similarly to biological systems,

namely through the use of many simple processing elements, or

"neurons," in a highly interconnected network, processing data in a

highly parallel and distributed manner. These neural networks

have shown promise in exactly the areas just mentioned, namely

pattern recognition and in the ability to adapt, or "learn" from their

environment. Although much headway has been made in the

understanding of neural networks, very few practical working

applications have been found to utilize these networks. This is partially because the neural networks being studied

currently are very small and primitive as compared to biological

systems. The computation time to simulate even small neural

networks on serial machines is considerably longer than would be

the case if the network were implemented in actual hardware.

There are problems that arise when attempting to put such

networks into hardware, however, due to the structure of most

neural networks.

This thesis presents the study and design of two circuit

representations of one specific model of these neural networks, the

Hogg-Huberman model. It is shown that this particular neural

network model lends itself much better to hardware implementation than do many other current neural models, and two possible hardware representations for Very Large Scale Integration (VLSI) of this model are given.

Evolution of Neural Networks

Although the resurgence in recent years in interest of neural

networks has been great, the idea of artificial neural elements is far

from new. As early as 1943, McCulloch and Pitts introduced the

idea of modeling neurons [1]. They had the concept of so called "all

or none neurons" which were either "on," or firing, or "off," or not

firing. The decision of the neuron of whether or not to fire is based

on a threshold function. That is, if the inputs to the neuron exceed

a certain threshold, then the neuron fires, otherwise, the neuron

does not.

Other earlier accomplishments in this area include those of

Rosenblatt [2], who introduced the perceptron model, Minsky and

Papert [3], and others. All of these early studies centered around

similar ideas, and many used the same basic type of threshold

neuron. Although these neuron models exhibited interesting

behavior, many researchers believed that, because of their non-linearity, they were not worthwhile to pursue any further. Many believed that if something could not be analyzed with a basic, detailed mathematical description, then no useful deductions could be made, and therefore the modeled neurons could not be put


to any good use. These ideas have since changed, and although

complex non-linear models are still difficult to analyze

mathematically, many feel that such models can still be used even

without necessarily understanding their detailed functioning.

It appears that much of the recent resurgence of interest in neural networks was the result of the work of J.J.

Hopfield [4]. The neural network model he introduced in 1982

seemed to show much more promise than most of the simpler

earlier models such as the perceptron. He showed the ability of his

network to perform well as a content addressable memory, in which

the submission of partial or noisy input data resulted in the entire

memory pattern being retrieved. With Hopfield's success has come

a great surge of interest, and many others have become involved in

this area of research, with new models being proposed and studied

[5], [6], [7], [8], [9].

General Features of Neural Networks

Many different algorithms and topologies for neural networks

have been studied and proposed by researchers. Although the

detailed algorithms and structures may vary considerably from one

model to another, neural networks as a whole exhibit certain

general features.

The term "neural network" usually refers to a class of models

which are a densely interconnected network of simple processing

elements, which are usually highly parallel in their processing of


data. These networks can also be asynchronous with no centralized

control. Due to their highly parallel nature, there are usually a

limited number of computing levels, or processing steps, from input

to output. In addition, artificial neural networks generally exhibit

properties which are similar to biological neural systems, i.e., the

ability to learn and the fact that items or patterns learned are

distributed throughout the network rather than localized. Finally,

some neural nets learn with external supervision, while some learn

without.

As just stated, artificial neural nets are usually a highly

interconnected network of simple processing elements, or "neurons."

In some cases, such as with the model of Hopfield [4], and also with

other models, the interconnections between the neurons are

assigned weights, or strengths. These weights determine how

strongly an input to a neuron along one of these interconnections

affects that neuron. Also, the connected network must usually have

some rules for propagation of signals through the network, an

activation rule to define how the input signals to each neuron affect

that neuron's output, and finally a learning rule which allows the

network to adapt itself by changing interconnection weights or

other parameters.

Many artificial neural nets, as with biological systems, are

asynchronous in nature without any centralized control. This can be

contrasted with typical serial computing architectures which rely on

a centralized processor to supervise the processing. It is this

feature which gives neural networks speed in processing


information, since many processing actions can be occurring

simultaneously, rather than only one at a time.

In much of the work done with neural nets to date, it has

been shown that these types of networks can exhibit many

promising properties similar to those of biological systems [10]. A

distributed memory, rather than localized memory is one of these

properties. The actual memory, or ability of the networks to recall

an output when given a certain input resides throughout the

network itself. Furthermore, these networks have shown the ability

to learn in a limited fashion. That is, after having information

submitted to the network, and allowing the network to adapt itself

by changing interconnection strengths or other parameters, the

network is able to recognize these patterns when they are later

submitted.

Applications and Advantages

Most of the applications to which neural nets seem to be most

suited are in the areas of pattern recognition, such as image and

speech recognition. One of the major problems in pattern

recognition is that of noisy or partial inputs. Unless exactly the

same input pattern is given to a typical recognition scheme, the

input is not seen as one previously learned. Neural nets have

shown promise in this area because of their ability to overlook

small differences in patterns and thus still recall the correct output


for a learned pattern even if the pattern has been altered slightly,

or if part of the input is missing.

The ability of neural nets to still produce correct results even

in the presence of partial or noisy inputs is one of their great

advantages over other methods. Another advantage lies in the fact

that, as mentioned before, the memory of the neural net is

distributed across the neurons themselves, and is not in a localized

memory. This fact provides the network with another form of fault

tolerance, that of tolerance to faulty neurons or interconnections

between the neurons. Several networks studied have shown the

ability to still perform well even when limited numbers of the

neurons or interconnections are damaged so that they can no longer

function. Finally, the highly parallel nature and limited computing

levels of most neural nets give them the advantage of speed. The

limits on serial computing speeds seem to have almost been

reached, and thus parallel computing offers a way of increasing

processing speed still further.

Implementations of Neural Networks

Currently, the study of neural networks has been for the most

part limited to simulating them in software on serial computers.

The fact that most neural networks are densely interconnected and

have a high degree of parallelism, however, has limited researchers

to the study of either smaller or less interconnected networks than

those of biological systems. Some progress has been gained through


the use of new generation, highly parallel computing machines, but

even the largest manmade neural networks have been on a size and

speed scale that is less than that of most insects.

One of the best hopes for constructing faster and larger neural

nets is through specific hardware implementation, such as through

VLSI design. Although this is being done in a few instances, the

problems in putting most neural network models into hardware are prohibitive [11]. One problem with VLSI implementation arises

from the structural nature of many networks. Most neural nets

require every neuron to be connected to all or many of the other

neurons in the network. This requires a very dense circuit in VLSI

for all these interconnections. Another problem arises if larger

networks are to be composed of many smaller networks which are

on VLSI integrated circuits. In order to interconnect all of the

smaller networks to form a larger one, access to most of the neurons

in each smaller network is required. The number of connecting pins

to each VLSI chip required would be very difficult to achieve.

This thesis discusses the study and design of a possible

hardware implementation, both in digital and analog, of one specific

neural network model known as the Hogg-Huberman model. It will

be shown that a feasible hardware implementation of this model

can be achieved without the problems discussed above due to the

architecture of the Hogg-Huberman model. Such an implementation

in VLSI would allow the study and construction of much faster and

larger networks than are currently being studied.


Summary

The rest of this paper discusses the design and analysis of two

proposed circuit models which simulate a particular artificial neural

network model called the Hogg-Huberman model after the two men

who first introduced it [12]. The next chapter discusses the Hogg-

Huberman neural network model itself, describing the model's

structure and the algorithms which govern the network's ability to

learn. Chapter III shows the design of an analog circuit

representation of a Hogg-Huberman(H-H) neural element, or

"neuron." Chapter IV shows the design of a digital circuit version of

the H-H neuron, and in Chapter V the results of simulations of these

two designs are given. Finally, Chapter VI summarizes the results

obtained and proposes possible future work that could be done in

this area.


CHAPTER II

THE HOGG-HUBERMAN NEURAL MODEL

Introduction

The Hogg-Huberman neural model was proposed by T. Hogg

and B. A. Huberman[12]. Their studies with this model have

revealed that it has many interesting and promising qualities,

especially in the areas of pattern learning and recognition. The H-H

model differs from most other models in both its structure and its

operating algorithms, yet its performance exhibits promise in the

very areas which neural networks have become noted for. Others

have also found the H-H model interesting, and considerable study

of this model and its abilities has been undertaken [13], [14], [15].

The model has separate learning and recognition phases, allowing a network trained with one set of patterns to be tested for

recognition capabilities with other patterns. It has been shown that

this model is capable of recognizing trained patterns even in the

presence of noisy input patterns. In addition, the H-H model has the

ability to self-repair after "soft" errors (errors in memory values),

and is immune to a certain number of faulty neurons, or "hard"

errors. It is this model which is the focus of the rest of this work.


1-Dimensional H-H Neural Model

The one-dimensional neural model introduced by Hogg and

Huberman is one which is composed of a number of layers of

neurons, and information flows from the top of the network to the

bottom through the successive layers of neurons. Each neuron takes

two inputs from the outputs of neurons in the previous layer and

has a single output which is sent to two neurons in the next layer.

The model has distributed memory, with the information being

stored in the memory values of the individual neurons. The H-H

model is one with unsupervised learning, that is, inputs only are

presented to the network, allowing the network to map the input

into some associated output. Thus, the network is a

heteroassociative one.

General Model Structure

The general structure of the 1-dimensional Hogg-Huberman

model is shown in Figure 1. As can be seen in the figure, each

neural element, or neuron, is referenced by its position within the

network matrix. For example, the i,j-th neuron is that neuron in the

i-th row and the j-th column. Each neuron has associated with it a

memory value, denoted by Mij, and an output value denoted Oij.

The output is fed to two neurons in the next layer. Each neuron also

takes two inputs from the previous layer. These inputs are the

outputs of two neurons in the previous layer of the network. The

input pattern, S, is fed into the top row of neurons and the output

pattern, R, is taken from the bottom layer. This layer by layer flow


Figure 1. 1-dimensional Hogg-Huberman neural network general model structure.


of data is unlike many neural network models in which any given

neuron may give input to or take output from any other given

neuron in the network.

One readily seen advantage of the matrix-like structure of the

H-H neural network model is that it lends itself to VLSI

implementation, and multiple networks can be connected together

to form even larger networks. This allows the construction of VLSI

chips in which the number of connecting pins is small due to the

fact that access is needed to only the neurons at the edges of the

network.

Output and Memory Update

Both output and memory update algorithms for the H-H

neural model are non-linear in nature. Although the non-linearity

introduced by these algorithms makes large networks difficult to

analyze in a mathematical sense, it is this non-linearity which gives

the neural network its interesting and dynamic behavior. The non-

linearity in the H-H model is introduced into the network through

the limiting of both the memory and output values of the neurons.

Both upper and lower bounds are set on the values of the memory

and output. The outputs of the neurons are constrained to lie in the

interval [Smin,Smax], and the memory values are constrained

within the interval of values [Mmin,Mmax]. The values of these

limits can be varied from network to network, and according to the

specific application. However, while the outputs are usually allowed


to go negative, the memory values are usually limited to positive

values only, with Mmin typically having a value of one.

Consider a network made up of m rows of neurons with each

row having n neurons, thus giving an m x n matrix of neurons.

Figure 2(a) shows the basic output update step. In this step the

output is computed from the two inputs received from neurons in

the previous layer of the network. The output algorithm for each

neuron in the matrix is as follows: If Oij is the output of the i,j-th

neuron at time step k, then the output of the neuron at the next

time step, k+1, is given as

Oi,j(k+1) = max{Smin, min[Smax, Mi,j(k)(Ii,j1(k) - Ii,j2(k))]}, (1)

which simply multiplies the memory value of the i,j-th neuron at

the k-th time step, Mij(k), by the difference in the two inputs to the

neuron, Ii,j1(k) and Ii,j2(k). This value is then limited between the

values Smin and Smax. With reference to Figure 1, the two inputs

to the i,j-th neuron are defined as

Ii,j1(k) = Oi-1,j-1(k) (2a)

Ii,j2(k) = Oi-1,j+1(k), (2b)

for 1 <= i <= m and 1 <= j <= n.

In this algorithm, each neuron is connected to its neighbors in

the row above and below diagonally, taking two inputs from the


Figure 2. H-H network output, (a), and memory, (b), update cycles.


previous layer and sending its output along diagonal connections to

two neurons in the row below. The inputs to the first row of

neurons in the network at time step k are the values of the input

signal, S(k), and the output signal, R(k), is taken from the outputs of

the bottom row of neurons in the network. This can be represented

in terms of the two equations:

O0,j(k) = Sj(k) (3a)

Rj(k) = Om,j(k). (3b)

Values for the boundaries, or edges of the array are usually

defined in one of two ways. The first method is to have zero

boundary conditions. This means that all values from outside the

edges of the network matrix are defined to be zeroes. This is

written in equation form as

Oi,0(k) = Oi,n+1(k) = 0. (4)

The second method used to decide the values for the edges of the

array is that of periodic boundary conditions. For periodic

boundary conditions, the values along the left and right boundaries

of the array are defined as


Oi,0(k) = Oi,n(k) (5a)

Oi,n+1(k) = Oi,1(k). (5b)

The ability of the network to "learn" or adapt is acquired

through the updating of the memory values, Mij. The memory

values of the neurons are updated by comparing the output of each

neuron to the outputs of its left and right neighbors, as shown in

Figure 2(b). If the output of each neuron is greater than the output

of both its left and right neighbors' outputs, then the memory is

incremented by one, within the constraint that it remain less than

or equal to Mmax. If the neuron's output is less than the outputs of

both its neighbors, however, then the memory value of that neuron

is decremented by one, again within the constraint that it remain

greater than or equal to Mmin. If neither of these two conditions

exists, then the memory value is left unchanged. This can be shown

in equation form as

Mi,j(k+1) = Mi,j(k) + 1, if Oi,j(k+1) > Oi,j-1(k+1) and Oi,j(k+1) > Oi,j+1(k+1)

Mi,j(k+1) = Mi,j(k) - 1, if Oi,j(k+1) < Oi,j-1(k+1) and Oi,j(k+1) < Oi,j+1(k+1) (6)

Mi,j(k+1) = Mi,j(k), otherwise.
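To make the update rules concrete, the following sketch restates equations (1) through (6) in Fortran, the language used for the network simulation listed in Appendix B. It is only an illustrative rendering and not the Appendix B program: the module and array names, the choice of zero boundary conditions, and the particular limit values are assumptions made for this sketch.

module hh_model
  ! Illustrative sketch of the Hogg-Huberman update rules (equations 1-6).
  implicit none
  real, parameter :: smin = -3.0, smax = 3.0   ! output limits (assumed values)
  real, parameter :: mmin =  1.0, mmax = 3.0   ! memory limits (assumed values)
contains

  ! Output update, equation (1), with zero boundary conditions, equation (4).
  ! s(1:n) is the input pattern S fed to the top row; on return, o(m,1:n) is
  ! the output pattern R of equation (3b).
  subroutine output_update(s, o, mem, m, n)
    integer, intent(in)    :: m, n
    real,    intent(in)    :: s(n), mem(m, n)
    real,    intent(inout) :: o(0:m, 0:n+1)
    integer :: i, j
    o(0, 1:n) = s                 ! equation (3a): row 0 carries the input pattern
    o(:, 0)   = 0.0               ! zero boundary conditions, equation (4)
    o(:, n+1) = 0.0
    do i = 1, m
      do j = 1, n
        ! the two inputs are the diagonal neighbours in the row above, equations (2a)-(2b)
        o(i, j) = mem(i, j) * (o(i-1, j-1) - o(i-1, j+1))
        o(i, j) = max(smin, min(smax, o(i, j)))   ! limit the result to [Smin, Smax]
      end do
    end do
  end subroutine output_update

  ! Memory update, equation (6): compare each neuron with its left and right
  ! neighbours in the same row and move the memory value by one, within limits.
  subroutine memory_update(o, mem, m, n)
    integer, intent(in)    :: m, n
    real,    intent(in)    :: o(0:m, 0:n+1)
    real,    intent(inout) :: mem(m, n)
    integer :: i, j
    do i = 1, m
      do j = 1, n
        if (o(i, j) > o(i, j-1) .and. o(i, j) > o(i, j+1)) then
          mem(i, j) = min(mmax, mem(i, j) + 1.0)
        else if (o(i, j) < o(i, j-1) .and. o(i, j) < o(i, j+1)) then
          mem(i, j) = max(mmin, mem(i, j) - 1.0)
        end if                    ! otherwise the memory value is left unchanged
      end do
    end do
  end subroutine memory_update

end module hh_model

The two routines are kept separate because, as described below, the learning and recognition phases differ only in whether the memory update is applied.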

The H-H Model in Pattern Recognition

Using the algorithms just described, pattern learning and

recognition is implemented using the H-H model. The separation of

the output and memory update algorithms allow ease of

implementing separate learn and recognition phases, thus allowing

the submission of new patterns to a previously trained network to

examine the recognition capabilities of the network. The network is

first trained by submitting patterns and using the output and

memory update algorithms just described, and then the network of

trained neurons is used for recognition with the memory update

algorithm turned off.

Learning Phase

At the start of the learning phase, the network is usually

initialized, setting all memory values of the neurons to

Mmin (usually one), and all output values to zero. During the first

time step, the first input training pattern, S(l), is submitted to the

first row of neurons. The outputs of each successive row of neurons

are then computed according to equation 1. Once all of the new

outputs have been computed, the memory elements of each neuron

are updated according to equation 6. At the beginning of the next

time step, the next training pattern input is submitted to the

network and the output and memory update process is repeated.

The output pattern which corresponds to the associated input

pattern appears at the outputs of the last layer of neurons n time

steps after the input pattern is put into the first layer of a network


with n layers. The output pattern associated with each input

pattern is recorded at each time step. When all the training input

patterns have been submitted to the network, the process is

repeated and the first pattern is again submitted, then the second,

and so on. The learning phase is ended and the network is said to

have "learned" the training patterns once the output patterns have

converged and are no longer changing. That is, learning is stopped

when all output patterns at a given time step are such that

R(k) = R(k-1) (7)

for all R.

Once the network has learned the training input patterns, the

memory values of the neurons can be stored, to be used later for

recognition purposes.
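A learning-phase driver built on these rules might look like the sketch below, which reuses the hh_model module given earlier; the network size, the two training patterns, and the pass limit are arbitrary values chosen only so the example is complete.

program hh_learn
  ! Illustrative learning-phase loop: present the patterns repeatedly and stop
  ! when every output pattern R has stopped changing, equation (7).
  use hh_model
  implicit none
  integer, parameter :: m = 4, n = 4, npat = 2
  real :: s(n, npat), mem(m, n), o(0:m, 0:n+1)
  real :: r(n, npat), r_prev(n, npat)
  integer :: p, pass
  s(:, 1) = (/ 1.0, 0.0, 0.0, 1.0 /)              ! example training patterns (assumed)
  s(:, 2) = (/ 0.0, 1.0, 1.0, 0.0 /)
  mem = mmin                                      ! initialise all memories to Mmin
  o   = 0.0                                       ! and all outputs to zero
  r_prev = -999.0
  do pass = 1, 100
    do p = 1, npat
      call output_update(s(:, p), o, mem, m, n)   ! equation (1)
      call memory_update(o, mem, m, n)            ! equation (6)
      r(:, p) = o(m, 1:n)                         ! record R for this pattern
    end do
    if (all(r == r_prev)) exit                    ! convergence test, equation (7)
    r_prev = r
  end do
  print *, 'memories after learning:', mem
end program hh_learn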

Recognition Phase

Once the network matrix has been trained, patterns may be

submitted to the trained matrix. The outputs of each neuron are

calculated for each input pattern using equation 1 just as during the

learning phase. However, after the outputs have been computed,

the memory values are not updated as before. The output of the

last row of neurons is the associated output pattern for that input

pattern. In this way, the ability of the network to recognize input

patterns may be examined. Patterns which the network was


trained with may be submitted to determine if the correct output

pattern is generated. In addition, training patterns may be altered

by adding noise, or only partial training patterns submitted to

determine the tolerance of the network to noisy or partial inputs.
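In code, the recognition phase differs from the learning phase only in that the memory update is never invoked. A minimal sketch, again assuming the hh_model module above and an arbitrary test pattern:

program hh_recognize
  ! Recognition with a trained, frozen memory array: apply only the output
  ! update of equation (1) and read the recalled pattern from the bottom row.
  use hh_model
  implicit none
  integer, parameter :: m = 4, n = 4
  real :: mem(m, n), o(0:m, 0:n+1), test(n)
  mem  = mmin                       ! in practice the stored trained memories would be loaded here
  o    = 0.0
  test = (/ 1.0, 0.0, 0.0, 1.0 /)   ! a (possibly noisy or partial) test pattern
  call output_update(test, o, mem, m, n)
  print *, 'recalled pattern R:', o(m, 1:n)   ! memory_update is not called
end program hh_recognize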

Summary

The Hogg-Huberman neural model can be easily implemented

through software techniques, lends itself to possible VLSI

implementation, and has been shown to be good at pattern

recognition, even with partial or faulty inputs. In addition, the

network has been shown to converge quickly during the learning

phase, and can self repair soft errors. Also, work by Hogg and

Huberman has shown that the network exhibits certain features

similar to biological neural systems. One example of this property is

the ability of the network to learn patterns which are similar to

previously learned patterns rather quickly, whereas learning

patterns which are much different than previously learned patterns

is much slower. This is analogous to the fact that humans and other

animals can learn ideas which are similar to previously learned ideas more quickly than they can learn completely new ideas [5].

It is such qualities as these which merit the further study of

this neural model. One area of interest is hardware implementation

of neural networks. The rest of this thesis discusses possible circuit

implementations of the Hogg-Huberman model.


CHAPTER III

ANALOG CIRCUIT REPRESENTATION OF THE

HOGG-HUBERMAN NEURAL MODEL

Introduction

This chapter discusses the design and theory of operation of

an analog circuit representation of the Hogg-Huberman neural

model. The circuit was designed based on a combination of simple

analog sub-circuits. The basic analog building block for most of

these sub-circuits is the operational amplifier. This gives a highly

homogeneous design. The major circuit blocks of the analog neuron cell are first described, showing how each plays its part in applying the H-H algorithm. Each of these three major blocks is then further

broken down and a more detailed design analysis is given. Finally,

the timing of the various input and control signals is discussed.

The model designed here was based on the idea of

implementing a neural network as simply as possible, while still

obtaining a model which demonstrates good results.

Design Overview of an Analog H-H Neuron

Figure 3 shows a general block diagram of a single analog H-H

neural element. This figure shows the three major elements of the

neuron circuit. The first block is the output computational block,

which performs the output update algorithm shown in equation 1 of


Figure 3. Block diagram of analog H-H neuron circuit


Chapter II. This block takes the two inputs from neurons in a

previous layer of the network and the memory value of the neuron

and produces the new limited output. This block also has inputs for

two voltages which determine the limiting values for the output,

Smin and Smax. The second major functional block of the analog

neuron circuit is the output comparison circuitry block. This part of

the neuron circuit is responsible for performing the comparison

between the neuron output and the outputs of its left and right

neighbor neurons. This part of the circuit makes this comparison

and decides whether to send an increment value, a decrement

value, or no change to the memory portion of the circuit. Finally,

the third major block contains the memory circuitry of the neuron.

This part of the circuit takes the increment/decrement value from

the comparison circuitry and updates the memory value when it

receives a memory update pulse from the memory update input

signal. In addition, the memory circuitry provides an input for a

memory reset signal, which initializes the memory to its Mmin

value for the beginning of the learn phase as described in the last

chapter. Finally, this circuit block has inputs for voltages which set

the values of the memory limits, Mmin and Mmax. A more detailed

design analysis of these three major circuit blocks of the analog H-H

neuron follows.

Output Update Circuitry

The first block of Figure 3 is the output update circuitry block.

The output circuitry takes the difference of the two input signals,

multiplies this result by the memory value, and then limits this

result between the values of Smin and Smax. A more detailed block

diagram of the output update circuitry is shown in Figure 4. The

two inputs to the neuron are first input into a difference amplifier.

The output of this difference amplifier is the difference of input 1

minus input 2, multiplied by a gain factor if desired. This gain can

be used to allow the circuit to be sensitive to small differences in

the two input voltages. Figure 5 shows the basic operational

amplifier based difference amplifier used for this portion of the

circuit [16]. The output of the circuit is given by the equation

Vout = (R2/R1)(V2 - V1). (8)

The ratio R2/R1 is the gain factor of the circuit. For the range of input voltages used in this circuit, the gain was set to one by setting R2 equal to R1.

Once the difference in the inputs is obtained, the result is

multiplied by the neuron's memory value as shown in the second

block of Figure 4. For this part of the circuit, a dependent voltage

source was used to implement the multiplying operation. The

voltage at the output of the difference amp is multiplied by the

memory voltage. A simplified circuit diagram showing this is in

Figure 4. Analog neuron circuit output update block diagram.


Figure 5. Difference amplifier op-amp circuit.


Figure 6. The voltage of the dependent source is given by the

equation

Vout = A(Vin) + B(Vmem) + C(Vin X Vmem). (9)

In this case the constants A and B are equal to zero and the constant

C is equal to one, thus giving the desired output voltage as

Vout = Vin X Vmem. (10)

Finally, once the difference of the neuron inputs is multiplied

by the memory value, this result must be limited between Smin and

Smax. This is achieved through the use of two limiting circuits as

shown in the block diagram of Figure 4. As can be seen, each of

these limiting circuits has an inverse limiting transfer function. In

other words, if the input is positive, then the output is negative, and

vice versa. The limiting circuits used are shown in more detail in

Figure 7 [16]. The use of two of these limiting circuits in cascade

performs a dual function. The first and most obvious is the fact that

two cascaded inverse limiters gives the positive limiting which is

desired. Secondly, in the actual limiting circuits used in this design,

the slope of the transfer function for inputs less than Smin and for

inputs greater than Smax is not zero as is ideally desired. Thus, for

an input which is considerably less than Smin or greater than Smax,

the first limiting circuit will limit this value close to, but not exactly,

-Smin or -Smax, respectively. However, when this voltage is then


Figure 6. Multiplier circuit for analog H-H neuron.


Figure 7. Op-amp limiter circuit with offset.


input to the second limiter, the output is limited much closer to

Smin or Smax, thus reducing the error. As an example, for a slope

of 0.01 in the limiter circuit of Figure 7, and an input voltage of 18

volts, the output of the first limiter circuit is -3.15 volts. If this

result were then simply inverted, the result would be 0.15 volts in

error. However, after the -3.15 volt output of the first limiter is fed

to the second limiter the resulting final output is 3.0015 volts,

which is much closer to the desired 3.0 volts. The slope of the

transfer curve between the limiting values is given by the ratio

Rp/R1. In this case, this ratio is one. The values for the limits Smin

and Smax are set according to the voltages V2 and -V3. The limits

are given in terms of these voltages and the circuit resistors R2

through R5 in the equations

Smin = -V3 x (R4/R5) (11a)

Smax = V2 x (R3/R2). (11b)
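For illustration only, choosing V2 = V3 = 3 V with R4/R5 = R3/R2 = 1 would place the limits at Smin = -3 V and Smax = +3 V, the values assumed in the numerical example above.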

Both Smin and Smax are limited by the supply voltage of the op-

amp used in the limiting circuit. Thus, for a typical 15-volt supply

voltage for the op-amp, the absolute value for Smin and Smax is

slightly less than 15 volts. Voff in Figure 7 is set to zero for this

part of the neuron circuit.

The output of the second limiting circuit is then the desired

value needed according to the output update algorithm of the H-H

neural model.
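The benefit of the cascade can be checked numerically. The short sketch below models each inverting limiter as having unity gain in its linear region, limits of -3 V and +3 V, and a residual slope of 0.01 beyond the limits; the function name and this simple piecewise model are assumptions made only to reproduce the example above.

program limiter_cascade
  ! Behavioural check of the two cascaded inverting limiters in the output stage.
  implicit none
  real :: v1, v2
  v1 = inv_limiter(18.0)     ! first stage with an 18 volt input
  v2 = inv_limiter(v1)       ! second stage
  print *, v1, v2            ! approximately -3.15 and 3.0015 volts
contains
  ! Inverting limiter with knees at -3 V and +3 V and a residual slope of 0.01
  ! outside the limits (an idealised model of the circuit of Figure 7).
  real function inv_limiter(vin)
    real, intent(in) :: vin
    real, parameter :: vlow = -3.0, vhigh = 3.0, slope = 0.01
    real :: x
    x = -vin                 ! unity inverting gain in the linear region
    if (x > vhigh) then
      inv_limiter = vhigh + slope * (x - vhigh)
    else if (x < vlow) then
      inv_limiter = vlow + slope * (x - vlow)
    else
      inv_limiter = x
    end if
  end function inv_limiter
end program limiter_cascade

Compiled and run, the sketch reproduces the -3.15 volt and 3.0015 volt values worked out above.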

Comparator Circuitry

The next block of Figure 3 is the comparator circuitry block.

The comparator circuitry in each neuron is responsible for the

decision of whether to increment the neuron's memory value,

decrement it, or leave it unchanged during the memory update

cycle of the learning phase. As already discussed, this decision is

based on a comparison of the neuron's output with the outputs of

the left and right neighboring neurons in the same row of the

network. The memory value is incremented/decremented by one if

both neighboring neurons' outputs are less/greater than the

neuron's output. Otherwise, the memory value is left unchanged.

Figure 8 shows a more detailed block diagram of the circuitry

used to implement the output comparison algorithm just described.

In the first stage of this circuit, the neuron's output is compared

with the outputs of its two neighbor neurons. This is achieved in

this circuit by first taking the difference between the output of the

neuron and each of the two neighboring outputs. This is done using

two difference amplifiers of the same type as used in the output

update circuitry described earlier and shown in Figure 5. In these

difference amplifiers, it was determined that each of these

difference amplifiers would have a gain of two in order to enhance

small differences for the following circuitry.

In the next stage of the circuitry, the two difference values

are input into limiting/nulling circuits with the transfer functions as

shown in Figure 8. The function of these circuits is to detect

whether the neuron's output is greater than, less than, or equal to

Figure 8. Comparator circuitry block diagram for analog H-H neuron circuit.


each of the neighboring outputs. Thus, for each of the two

difference circuits, the output of this second stage is as follows: If

two times the difference in the neuron's output and the neighbor

output is greater(less) than +(-)0.5 volts , the sum of the two

limiter/nuller circuits is negative(positive) one volt. If two times

the difference is not greater than +0.5 volts or less than -0.5 volts,

the sum of the two limiter/nuller circuits is zero. This means that

differences as small as 0.25 volts will be detected, while for any

smaller differences the sum of the two limiter/nuller circuits will be

a null, or zero. This selectivity can be changed by changing either

the gain of the difference amplifiers, or the width of the null region

of the limiter/nuller circuits, or both.

The limiter/nuller circuits just described are similar to the

one shown previously in Figure 7. The limiter/nuller circuit is

shown in Figure 9. With reference to Figure 7, this circuit does not

have the feedback resistor Rp, and an offset voltage Voff is applied

to give the positive and negative offsets of +.5 volt and -.5 volt.

The limiter/nuller circuits used in the second section of the

comparator circuitry are inverting, therefore, the outputs of the

four limiter/nuller circuits are next summed together by an

inverting summing circuit. The output of this summer circuit can

take on one of five possible values by nature of the outputs of the

limiter/nuller circuits just described. These five values are 0, +1, -

1, +2, or -2 volts. A value of +2 volts corresponds to the case when

the neuron's output is greater than both of the neighbor neurons'

outputs. A value of -2 volts corresponds to the case when the


Figure 9. Op-amp limiter/nuller circuit with offset.


neuron's output is less than both of the neighbor neurons' outputs.

Any of the other three values corresponds to other cases. The

summing circuit used is shown in Figure 10. This is a simple op-

amp summing circuit, with the output given by

Vout = -(R2/R1)(V1 + V2 + V3 + V4). (12)

The output of this summer is then fed into another two

limiter/nuller circuits of the same type as in Figure 9. The null

values in this case are set at +1.5 volts and -1.5 volts. The limiting

values are set at +1 volt and -1 volt. Thus, the sum of these two

limiter/nuller circuits is +1(-1) volt if the neuron's output is

less(greater) than both of the two neighbors' outputs. Again,

because of the inverting nature of the limiter/nuller circuits, this is

the negative of the desired function for the H-H neural model.

The output of these two limiter/nuller circuits is finally

negatively summed together with the memory reset input of the

neuron as shown in the fifth major block of Figure 8. During normal

operation of the neuron, the memory reset input is at a zero voltage

level. During memory reset, the input to the memory reset line is

set to a positive 2 voltage level, thus ensuring that the total

negative sum of the limiter/nuller circuits' inputs and the memory

reset input is negative 1 volt or less. This will cause the memory

value to be decremented to the value Mmin. The inverting summer

used is similar to that shown in Figure 10, with the exception that

there are only two input voltages to the circuit.
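Functionally, the comparator chain reduces to a three-way decision. The sketch below collapses the gain-of-two difference amplifiers, the +/-0.5 volt null regions, and the two summing stages into a single function that returns +1 V (increment), -1 V (decrement), or 0 V (no change); the routine names and the collapsed structure are simplifications assumed for this illustration, not a model of the individual op-amp stages.

program comparator_decision
  ! Behavioural summary of the comparator circuitry's increment/decrement decision.
  implicit none
  print *, memory_step(2.0, 1.6, 1.0)   ! greater than both neighbours -> +1 V
  print *, memory_step(1.0, 1.6, 2.0)   ! less than both neighbours    -> -1 V
  print *, memory_step(1.0, 1.5, 0.5)   ! mixed comparison              ->  0 V
  print *, memory_step(1.0, 0.9, 0.8)   ! differences below 0.25 V      ->  0 V
contains
  ! Sign of one comparison after the gain-of-two amplifier and the +/-0.5 V null region.
  real function detect(vself, vneigh)
    real, intent(in) :: vself, vneigh
    real :: d
    d = 2.0 * (vself - vneigh)
    if (d > 0.5) then
      detect = 1.0
    else if (d < -0.5) then
      detect = -1.0
    else
      detect = 0.0
    end if
  end function detect

  ! Combine the two comparisons: +1 V only if greater than both neighbours,
  ! -1 V only if less than both; the +/-1.5 V thresholds mirror the null
  ! values of the second pair of limiter/nuller circuits.
  real function memory_step(vself, vleft, vright)
    real, intent(in) :: vself, vleft, vright
    real :: total
    total = detect(vself, vleft) + detect(vself, vright)   ! 0, +/-1, or +/-2
    if (total > 1.5) then
      memory_step = 1.0
    else if (total < -1.5) then
      memory_step = -1.0
    else
      memory_step = 0.0
    end if
  end function memory_step
end program comparator_decision

The four sample calls reproduce the increment, decrement, and no-change cases discussed in this section.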


Figure 10. Four input inverting summer.


Memory and Update Circuitry

The final circuit block of Figure 3 is the memory section of the

neuron. This is where the actual memory value of the neuron

resides. A more detailed block diagram of the memory circuit is

shown in Figure 11. The basic operation of this part of the neuron

circuit follows.

In the first section of Figure 11, the output value from the last

inverting summer in the comparator circuitry described in the last

section is summed together with the memory value in order to

produce the new memory value at the output. In this case a non-

inverting summing circuit was used. This circuit is shown in Figure

12. The gain of this summer, with equal resistor values, as shown

in Figure 12, equals unity.

The next block of Figure 11 is the memory value limiting

circuitry, with the transfer function as shown. This limiting circuit

is shown in a more detailed block diagram in Figure 13. The new

memory value is input into a limiter circuit of the same type as

shown in Figure 7. This limiter circuit limits the input between the

values of negative 1 and positive 1 volt. The output is offset by

applying a two-volt offset voltage Voff to the limiter circuit of

Figure 7. This produces the transfer function shown in the first

block of Figure 13. Finally, the output of this limiting circuit is

negatively summed with a negative four volts in order to produce

the overall transfer function shown in the second block of Figure 11.

The actual memory value of the neuron is stored using two

sample and hold circuits as shown in Figure 11. Two sample and

Figure 11. Analog H-H neuron circuit memory block diagram.


Figure 12. Non-inverting two input summer.


Figure 13. Memory limiting block diagram for analog H-H neuron circuit.


hold circuits are used because of the feedback nature of the

memory value. The new updated memory value from the memory

limiting block just discussed is first sampled by the first sample and

hold circuit by sending a sample pulse to the circuit. The sample

pulse on the first sample and hold circuit is next removed so that

the sampled value is now held. Then a sample pulse is sent to the

second sample and hold circuit, which latches the new memory

value to the memory line of the neuron. The first sample and hold

circuit thus acts as a buffer, preventing the new memory value

being sampled from affecting the input of the second sample and

hold circuit.

The actual sample and hold circuits used are shown in Figure

14 [16]. The op-amp part of the circuit acts as a voltage follower

circuit, with the input voltage Vi appearing at node 2. When the sample voltage Vs is at a value of positive 15 volts, the transistor Q1 turns on and the capacitor C charges up to the value of Vi. When Vs goes to a negative 15 volts, Q1 turns off and the sampled

voltage is held on C. Q2 is a voltage follower and acts as a buffer to

reduce the droop in voltage on C.

Timing

As was already mentioned, control over the memory update

of the analog neuron is accomplished through the timing of the

sample pulses which are sent to the two sample and hold circuits of

the memory circuit. The two sample pulses are timed so that the

Figure 14. Sample and hold circuit.


updated memory value at the input of the first sample and hold

circuit is latched to the input of the second sample and hold circuit,

and then is latched immediately after to the output of the second

sample and hold circuit.

A timing diagram shown in Figure 15 shows the timing of the

neuron circuit during the learning phase of operation. During the

learning phase of the neuron, once inputs are available at the two

inputs of the neuron, the output is automatically computed. This

output appears after the time delay of the circuit. Once a stable

output is available for the output of the neuron and the outputs of

its left and right neighbors, then the comparator circuitry compares

these three outputs and automatically puts an updated memory

value at the input to the first sample and hold circuit of the

neuron's memory circuit. If the neuron is in the learn phase of

operation, then the new memory value is sampled by the first

sample and hold circuit, and then by the second sample and hold

circuit immediately after. The minimum sample pulse width

required for each sample and hold circuit is about 50 microseconds,

with two of these pulses required to update the memory. These

time periods are shown in Figure 15. The overall time required by

a network of such neurons will be the total time from stable inputs

to a new memory value times the number of rows in the network.

The time is independent of the number of neurons per row since

every neuron in each row is updated simultaneously in parallel.

This is the inherent advantage in parallel processing neural

Figure 15. Analog H-H neuron circuit signals timing.


networks. Greater detail of the actual time delays of the analog

neuron circuit is given in Chapter V.
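As a rough illustration, if the combined output and comparator settling time of a row were on the order of 100 microseconds, one training presentation to a ten-row network would take roughly 10 x (100 µs + 2 x 50 µs) = 2 ms, whether each row contains two neurons or two hundred.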

In a network of such neurons, the input pattern is first

introduced to the first layer of neurons, allowing their circuitry to

compute outputs. Once the outputs are computed, the memory

values of the neurons in each row are updated simultaneously, one

row at a time. The memory values must be updated from the

bottom first and working back to the top layer in the network. This

is to prevent updated memory values from affecting the inputs to the

neurons in the row below due to the automatic output update.

Summary

The design of the analog Hogg-Huberman neuron was based

on series of basic analog sub-circuits, with the operational amplifier

as the basic building block. The circuit was designed to be as simple as possible while still obtaining a workable design.

This was achieved as will be shown from the results of simulations

in Chapter V.


CHAPTER IV

DIGITAL CIRCUIT REPRESENTATION OF A

HOGG-HUBERMAN NEURAL MODEL

Introduction

This chapter discusses the design of a digital circuit

representation of a Hogg-Huberman neural cell, or neuron. The

circuit described herein represents a neural cell which has a two-bit

binary output. Thus, the output is automatically limited to four

possible binary output levels. This is in contrast to the analog

neuron designed in the last chapter in which the output is an analog

voltage taking on an infinite number of values between the two

limiting values.

The memory of the digital neuron in this chapter also has two

bits of resolution. However, the memory is not allowed to have a

value of binary zero, thus limiting the number of possible values to

three rather than four.

The entire circuit is based on a single logic circuit, that of the

NAND gate. Although the circuit can easily be implemented in

standard AND, OR, and NOT logic gates, only NAND gates were used for consistency, as is usually the case in VLSI

implementations of logic circuitry. This design shows that a digital

circuit implementation of an H-H neuron is straightforward, and

gives a workable hardware implementation which is considerably


faster than standard software algorithms when larger networks are implemented.

Design Overview

The digital H-H neuron, in a similar fashion to that of the

analog neuron designed in Chapter III, can be broken down into

three major functional blocks. This is shown in Figure 16, which is

similar to Figure 3. The neuron receives inputs from two neurons in

the previous layer in the network. In the case of the digital neuron,

each neuron has two bits of data which represent the output, thus

each neuron receives four input signals, two from each of the two

neurons in the previous network layer.

The comparison logic circuitry block has six signal inputs, two

bits each from the neuron output, the left neighbor neuron output,

and the right neighbor output. Again, this part of the circuit also

has an input for a memory reset signal, which resets the memory to

its initial state.

The memory circuit of the digital H-H neuron is composed of

two clocked flip-flop circuits. The clock inputs to the two flip-flops

serve as the memory update signal input to the neuron. These

flip-flops are also implemented with NAND gate logic.

These three major functional blocks of the digital neuron are

described in more detail in the following sections.



Figure 16. Digital H-H neuron circuit block diagram.


Output Logic

The output logic circuitry of the digital H-H neuron was

designed using straightforward digital logic design methods. First

the output update algorithm is written in terms of a state table,

showing the required output for each of the possible combinations

of inputs and memory values. Once this is done, the output function

in terms of the inputs and memory values can be reduced using

standard reduction methods, in this case Karnaugh Maps [17]. This

reduced logic function is then implemented using NAND logic

implementation.

A major advantage of a digital H-H neuron is the fact that

because the output and memory values are digital, they are

automatically constrained, or limited, to upper and lower values,

and thus no complex limiting circuitry is needed. In a more

complex digital circuit with more than two bits for the output or

memory values, limiting could be introduced through logic

functions, or the values could be allowed to take on values over the

whole range.

As each neuron has two bits for the output, the output can be

thought of as taking on the values of zero, one, two or three,

corresponding to binary zero, one, two, or three. This corresponds

to values for Smin and Smax of zero and three, respectively. The

memory is not allowed to have a value of zero, so it can have a

value of one, two, or three. Thus, the values of Mmin and Mmax are

one and three, respectively. For the H-H output update algorithm,

the output is the value of the difference in the two inputs


multiplied by the memory value. A table showing the output value

for all possible values of the two inputs and the memory is shown

in Table 1. Each input, the memory, and the neuron output is

composed of two bits, however, and thus Table 2 is written in terms

of the binary representations of each of these signals. II and 12

represent the two inputs, M represents the memory value, and O is

the output value. The subscripts of one and two denote the most

and least significant bits, respectively, of each of these signals. The

X's in the table represent don't care conditions. This occurs because

of the fact that the memory value of zero is not allowed. Thus,

essentially, we "don't care" what the output is for this case. Don't

care conditions are sometimes useful in the reduction of the logic

implementation of the function.

The next step in determining the logic implementation of the

desired output function is to use the methods of Karnaugh Maps to

obtain a reduced boolean equation for each of the two bits of the

output in terms of the inputs and memory [17]. Figure 17 shows

the two Karnaugh Maps used to determine each of the two bits for

the output, O1 and O2. The 1's and X's are copied into the maps

from Table 2. The final reduced boolean equations for the two

output bits in terms of the inputs and the memory are shown in

Figure 17 and are:

O1 = I1₁·~I2₁·~I2₂ + I1₁·I1₂·~I2₁ + I1₁·~I2₁·M1
     + I1₂·~I2₁·~I2₂·M1 + I1₁·I1₂·~I2₂·M1                                (13a)

Table 1. Output binary value in terms of neuron input and memory values.

 I1   I2   M=0   M=1   M=2   M=3
  0    0    X     0     0     0
  0    1    X     0     0     0
  0    2    X     0     0     0
  0    3    X     0     0     0
  1    0    X     1     2     3
  1    1    X     0     0     0
  1    2    X     0     0     0
  1    3    X     0     0     0
  2    0    X     2     3     3
  2    1    X     1     2     3
  2    2    X     0     0     0
  2    3    X     0     0     0
  3    0    X     3     3     3
  3    1    X     2     3     3
  3    2    X     1     2     3
  3    3    X     0     0     0

(Entries are the output value O; X denotes the don't care condition for the disallowed memory value of zero.)

Table 2. Output signal logic table. (This table restates Table 1 at the bit level: for every combination of the input bits I1₁ I1₂ I2₁ I2₂ and the memory bits M1 M2 it lists the two output bits, O1 and O2, of the output value given in Table 1. All entries with M1 = M2 = 0 are don't care conditions, since the memory value of zero is not allowed.)

Figure 17. Output logic Karnaugh Maps for the two output bits, O1 and O2, of the digital H-H neuron.

O2 = I1₂·~I2₁·~I2₂·M2 + I1₁·~I2₁·~I2₂·M1 + I1₁·~I1₂·~I2₁·I2₂·M2
     + I1₁·I1₂·~I2₁·M1 + I1₁·I1₂·~I2₂·M2                                 (13b)

where a dot between variables denotes the logical AND function, a plus sign denotes the logical OR function, and a tilde (~) preceding a variable denotes the logical NOT function. Figure 18 shows the

implementations of these three basic logic functions in terms of the

NAND logic function. Using these configurations, the final output

logic circuitry using NAND gate implementation is shown in Figure

19.
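
A brute-force cross-check of this kind is straightforward: the Python sketch below (assuming the bit ordering defined for Table 2) evaluates equations (13a) and (13b) for every allowed combination of inputs and memory and compares them with the directly computed output value:

def bits(v):
    # Two-bit value -> (most significant bit, least significant bit).
    return (v >> 1) & 1, v & 1

def not_(x):
    return 1 - x

def o1_eq(i11, i12, i21, i22, m1, m2):
    # Equation (13a).
    return (i11 & not_(i21) & not_(i22)) | (i11 & i12 & not_(i21)) | \
           (i11 & not_(i21) & m1) | (i12 & not_(i21) & not_(i22) & m1) | \
           (i11 & i12 & not_(i22) & m1)

def o2_eq(i11, i12, i21, i22, m1, m2):
    # Equation (13b).
    return (i12 & not_(i21) & not_(i22) & m2) | (i11 & not_(i21) & not_(i22) & m1) | \
           (i11 & not_(i12) & not_(i21) & i22 & m2) | (i11 & i12 & not_(i21) & m1) | \
           (i11 & i12 & not_(i22) & m2)

for i1 in range(4):
    for i2 in range(4):
        for m in range(1, 4):                       # M = 0 is not allowed
            o = max(0, min(3, m * (i1 - i2)))       # exact output value
            args = (*bits(i1), *bits(i2), *bits(m))
            assert o1_eq(*args) == (o >> 1) & 1
            assert o2_eq(*args) == o & 1
print("equations (13a) and (13b) agree with the output table")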

Comparator Logic

The design of the comparator logic circuitry of the neuron

proceeds in the same fashion as the output logic design. In this part

of the circuit, the output of the neuron is compared with the outputs

of its two neighbors, and the decision of whether to increment,

decrement, or leave the memory unchanged is made. The output of

this section of the circuit consists of two signals, an increment

signal, INC, and a decrement signal, DEC. If the memory is to be

incremented, the increment signal goes to a logic one level. If the

memory is to be decremented, then the decrement signal goes to a

logic one level. If no change is to be made to the memory, then

both signals are at a logic zero level. The two signals are not

allowed to both be at a logic one level simultaneously.

Figure 18. Implementation of the AND, OR, and NOT logic functions with NAND gates.

Figure 19. NAND gate implementation of the output logic circuitry for the digital H-H neuron.


A table showing the increment and decrement signals in

terms of the neuron output and the outputs of the two neighbors is

given in Table 3. The increment signal goes to a logic one level only

when the neuron output is greater than both of the neighboring

neuron outputs. The decrement signal goes to a logic one level only

when the neuron output is less than both of the neighboring neuron

outputs.

Again, the information in the state table is put into Karnaugh

Maps in order to obtain a reduced boolean expression for the

increment and decrement signals. These two maps are shown in

Figure 20. The final reduced expressions for INC and DEC are given

by:

INC = ~N1₁·~N2₁·O1 + ~N1₁·~N2₂·O1·O2 + ~N1₂·~N2₁·O1·O2
      + ~N1₂·~N2₂·O1·O2 + ~N1₁·~N1₂·~N2₁·~N2₂·O2                         (14a)

DEC = N1₁·N2₁·~O1 + N1₁·N2₂·~O1·~O2 + N1₂·N2₁·~O1·~O2
      + N1₂·N2₂·~O1·~O2 + N1₁·N1₂·N2₁·N2₂·~O2                            (14b)

Finally, Figure 21 shows the implementation of these two boolean

expressions in NAND logic.
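
The same kind of exhaustive check applies to the comparator equations. A short Python sketch, assuming the same bit ordering (N1₁ N1₂ and N2₁ N2₂ are the neighbor output bits), is:

def bits(v):
    return (v >> 1) & 1, v & 1

def not_(x):
    return 1 - x

def inc_eq(o1, o2, n11, n12, n21, n22):
    # Equation (14a).
    return (not_(n11) & not_(n21) & o1) | (not_(n11) & not_(n22) & o1 & o2) | \
           (not_(n12) & not_(n21) & o1 & o2) | (not_(n12) & not_(n22) & o1 & o2) | \
           (not_(n11) & not_(n12) & not_(n21) & not_(n22) & o2)

def dec_eq(o1, o2, n11, n12, n21, n22):
    # Equation (14b).
    return (n11 & n21 & not_(o1)) | (n11 & n22 & not_(o1) & not_(o2)) | \
           (n12 & n21 & not_(o1) & not_(o2)) | (n12 & n22 & not_(o1) & not_(o2)) | \
           (n11 & n12 & n21 & n22 & not_(o2))

for o in range(4):
    for n1 in range(4):
        for n2 in range(4):
            inc = 1 if (o > n1 and o > n2) else 0   # defining comparison rule
            dec = 1 if (o < n1 and o < n2) else 0
            args = (*bits(o), *bits(n1), *bits(n2))
            assert inc_eq(*args) == inc
            assert dec_eq(*args) == dec
print("equations (14a) and (14b) agree with the comparison rule")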

Table 3. Increment and decrement signals logic table. (For every combination of the neuron output bits O1 O2 and the neighbor output bits N1₁ N1₂ N2₁ N2₂, INC = 1 only when the output value exceeds both neighbor output values, DEC = 1 only when the output value is less than both neighbor output values, and both signals are 0 otherwise.)

Figure 20. Increment and decrement signal Karnaugh Maps for the digital H-H neuron comparator logic circuitry.

Figure 21. NAND gate implementation of the increment and decrement signal logic for the digital H-H neuron circuit.

Memory Logic

The memory section of the digital neuron is composed of two

flip-flop circuits. These two flip-flops are clocked R-S flip-flops

implemented with NAND gates. This implementation and the

corresponding state table are shown in Figure 22. One flip-flop is

used for each bit of the memory. The clock input to the flip-flops is

the memory update signal. When this signal goes to a logic one

level, the next memory state is latched to the outputs of the flip-

flops. When this signal is at a logic zero level, the inputs to the

second half of each flip-flop are disabled, maintaining the current

memory value at the outputs.
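
The behaviour of each clocked R-S flip-flop can be summarized by a small software model (an illustration of the state table of Figure 22 only, not a gate-level description):

def clocked_rs(q, s, r, clock):
    # With the clock high: S = 1 sets Q, R = 1 clears Q, S = R = 0 holds Q,
    # and S = R = 1 is not a permitted input combination.  With the clock
    # low the inputs are ignored and Q is held.
    if not clock:
        return q
    if s and r:
        raise ValueError("S = R = 1 is not allowed")
    return 1 if s else (0 if r else q)

q = 0
q = clocked_rs(q, s=1, r=0, clock=0)    # update signal low: bit is held at 0
q = clocked_rs(q, s=1, r=0, clock=1)    # update signal high: bit is set to 1
print(q)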

The update logic receives the increment and decrement

signals from the comparator logic circuitry, and combines these

signals with the current memory state to determine the inputs to

the two memory flip-flop circuits. A table showing the dependency

of the inputs to the two memory flip-flops on the increment signal,

the decrement signal, and the current memory value is shown in

Table 4. R1' and S1' are the reset and set inputs for the most significant memory bit flip-flop, M1, and R2' and S2' are the reset and set inputs for the least significant memory bit flip-flop, M2.

Due to the fact that the increment and decrement signals can

never simultaneously be at a logic one level, the last four rows in

Table 4 contain don't care conditions. Furthermore, due to the

functioning of the R-S flip-flop circuit, if there is to be no change on

a memory bit from the current state to the next, then a don't care

condition exists for the set input to the flip-flop if the current state

Figure 22. NAND gate implementation of a clocked R-S flip-flop and state table. (With the clock at a logic one level: R = 0, S = 0 holds the present output, Q(N+1) = Q(N); R = 0, S = 1 sets the output to 1; R = 1, S = 0 clears the output to 0; R = 1, S = 1 is not allowed. With the clock at a logic zero level the inputs have no effect.)

Table 4. Memory flip-flops set and reset signal state table.

 Inputs       Present state   Next state    Outputs
 INC  DEC     M1   M2         M1   M2       S1'  R1'  S2'  R2'
  0    0       0    0          0    1        0    X    1    0
  0    0       0    1          0    1        0    X    X    0
  0    0       1    0          1    0        X    0    0    X
  0    0       1    1          1    1        X    0    X    0
  0    1       0    0          0    1        0    X    1    0
  0    1       0    1          0    1        0    X    X    0
  0    1       1    0          0    1        0    1    1    0
  0    1       1    1          1    0        X    0    0    1
  1    0       0    0          0    1        0    X    1    0
  1    0       0    1          1    0        1    0    0    1
  1    0       1    0          1    1        X    0    1    0
  1    0       1    1          1    1        X    0    X    0
  1    1       0    0          X    X        X    X    X    X
  1    1       0    1          X    X        X    X    X    X
  1    1       1    0          X    X        X    X    X    X
  1    1       1    1          X    X        X    X    X    X


is a logic one, and a don't care condition exists for the reset input to

the flip-flop if the current state is a logic zero.

Although the memory value of zero is not allowed, the circuit

provides a means of returning to an allowable memory state should

this case ever accidentally occur.

From the information in Table 4, the four Karnaugh Maps used

to find the set and reset inputs to the two flip-flops are derived.

These are shown in Figure 23. The reduced boolean equations for

the set and reset signals obtained from these maps are:

R1' = DEC·~M2                                                            (15a)

S1' = INC·M2                                                             (15b)

R2' = INC·~M1·M2 + DEC·M1·M2                                             (15c)

S2' = INC·~M2 + DEC·~M2 + ~M1·~M2                                        (15d)

Figure 24 shows the implementation of these signals in NAND logic.
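
The combined effect of equations (15a) through (15d) can be summarized in software. The Python sketch below is a behavioural model only (it uses the clocked R-S action of Figure 22, with the update pulse assumed present); it verifies that the set and reset equations reproduce the increment, decrement, and zero-recovery behaviour described above:

def next_memory(m, inc, dec):
    # Intended behaviour: increment toward Mmax = 3, decrement toward
    # Mmin = 1, and return the disallowed value M = 0 to 1.
    if m == 0:
        return 1
    if inc:
        return min(m + 1, 3)
    if dec:
        return max(m - 1, 1)
    return m

def sr_inputs(m1, m2, inc, dec):
    # Equations (15a) through (15d), before the reset-signal gating.
    r1 = dec & (1 - m2)
    s1 = inc & m2
    r2 = (inc & (1 - m1) & m2) | (dec & m1 & m2)
    s2 = (inc & (1 - m2)) | (dec & (1 - m2)) | ((1 - m1) & (1 - m2))
    return s1, r1, s2, r2

for m in range(4):
    for inc, dec in ((0, 0), (0, 1), (1, 0)):       # INC and DEC are never both 1
        m1, m2 = (m >> 1) & 1, m & 1
        s1, r1, s2, r2 = sr_inputs(m1, m2, inc, dec)
        q1 = 1 if s1 else (0 if r1 else m1)         # clocked R-S flip-flop action
        q2 = 1 if s2 else (0 if r2 else m2)
        assert 2 * q1 + q2 == next_memory(m, inc, dec)
print("set/reset equations reproduce the memory update rule")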

The final feature to be incorporated into the digital neuron is

a signal to reset the memory to its original state of a binary one

value. This means that M1 must be reset to a zero value and M2

must be set to a one value. A reset signal is combined with the set

and reset signals for the two flip-flops so that when the reset signal

is at a logic one level the memory is reset to a binary one value.

Figure 23. Set and reset logic Karnaugh Maps for digital H-H neuron memory flip-flop circuitry.

Figure 24. NAND gate implementation of the memory flip-flop set and reset signals for the digital H-H neuron circuit.

When the reset signal is at a logic zero value, it has no effect on the

values of the memory flip-flops.

A table showing the new set and reset inputs to the two flip-flops in terms of the signals R1', S1', R2', S2', and the reset signal RES, is shown in Table 5. When the reset signal is at a logic zero level, R1, S1, R2, and S2 correspond to R1', S1', R2', and S2', respectively. Otherwise, R1(S1) goes to a logic 1(0) and R2(S2) goes to a logic 0(1), resetting the memory value to one. The implementation of these set and reset signals in NAND logic is shown in Figure 25.
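
One way to realize the gating of Table 5, written out as boolean expressions rather than gates (a behavioural sketch only, not the exact gate arrangement of Figure 25), is:

def reset_gated(s1p, r1p, s2p, r2p, res):
    # When RES = 1 the memory is forced back to the value one (M1 = 0,
    # M2 = 1); when RES = 0 the update signals pass through unchanged.
    s1 = s1p & (1 - res)        # S1 = S1' AND (NOT RES)
    r1 = r1p | res              # R1 = R1' OR RES
    s2 = s2p | res              # S2 = S2' OR RES
    r2 = r2p & (1 - res)        # R2 = R2' AND (NOT RES)
    return s1, r1, s2, r2

# RES = 1 gives (S1, R1, S2, R2) = (0, 1, 1, 0) for any update signals,
# matching the second row of Table 5.
assert reset_gated(1, 1, 0, 1, 1) == (0, 1, 1, 0)
assert reset_gated(1, 0, 1, 0, 0) == (1, 0, 1, 0)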

Timing

The timing of the signals to the neuron is shown in Figure 26.

During the first cycle, inputs are given to the neuron, and the output is computed during the next time step. If the network is in the learning phase, the memory is then updated: once the neuron outputs are stable, the memory update signal is sent during the following cycle. This signal is fed to the clock inputs to the

memory flip-flops and latches the new memory to the flip-flops'

outputs. More details of the actual time delays involved in the

digital neuron circuit are discussed in Chapter V.

The timing for a network of neurons proceeds as follows: A

pattern is input into the first row of neurons during the first time

period. The outputs of the neurons are allowed to become stable,

and then the memories are updated. Again, as with the analog

Table 5. Memory reset logic table.

 S1'  R1'  S2'  R2'  RES  |  S1    R1    S2    R2
  X    X    X    X    0   |  S1'   R1'   S2'   R2'
  X    X    X    X    1   |  0     1     1     0

Figure 25. Memory reset logic for the digital H-H neuron circuit.

Figure 26. Timing of the input, output update, and memory update signals for the digital H-H neuron circuit.


neuron circuit, the memories of all neurons in each row are updated

simultaneously, but the rows are updated one at a time, proceeding

from the last row and working up the network row by row,

ensuring that no memory update of a neuron affects the input to

neurons in the next row before they can be updated with their

current input and output values. As the rows of neurons are

updated one at a time, the time required for a network to complete

a learning phase cycle increases linearly with the number of rows in

the network, but does not increase with the number of neurons per

row. For large networks, this results in considerable time savings,

which will be discussed in a later chapter.
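
A simplified software model of one such learning cycle, written in Python with zero boundary values assumed at the ends of each row, illustrates the order of operations (outputs settle layer by layer, then the memories are updated row by row from the last row back toward the first):

def learn_step(pattern, memory, smin=0, smax=3, mmin=1, mmax=3):
    # One learning cycle of a small 1-D H-H network model.  memory[r][k]
    # is the memory of neuron k in row r; the input pattern feeds the
    # first row.
    rows, cols = len(memory), len(memory[0])
    layer = [list(pattern)] + [[0] * cols for _ in range(rows)]
    # Outputs settle layer by layer (all neurons of a row in parallel).
    for r in range(rows):
        prev = layer[r]
        for k in range(cols):
            left = prev[k - 1] if k > 0 else 0
            right = prev[k + 1] if k < cols - 1 else 0
            layer[r + 1][k] = max(smin, min(smax, memory[r][k] * (left - right)))
    # Memories are updated one row at a time, last row first, so that no
    # update disturbs a row that has not yet been updated.  (In this
    # software sketch the outputs are not recomputed between rows, so the
    # order only mirrors the hardware schedule.)
    for r in reversed(range(rows)):
        out = layer[r + 1]
        for k in range(cols):
            left = out[k - 1] if k > 0 else 0
            right = out[k + 1] if k < cols - 1 else 0
            if out[k] > left and out[k] > right:
                memory[r][k] = min(memory[r][k] + 1, mmax)
            elif out[k] < left and out[k] < right:
                memory[r][k] = max(memory[r][k] - 1, mmin)
    return layer[rows]

# Example: a 2-row, 4-neuron-per-row network with all memories set to one.
mem = [[1] * 4 for _ in range(2)]
print(learn_step([3, 0, 2, 1], mem), mem)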

Summary

The digital circuit design of the H-H neuron presented in this chapter was straightforward, and it shows that such networks can be readily implemented in hardware. Such hardware implementations offer a significant speed advantage over software implementation, as the circuit simulation results presented later will show. Furthermore, the circuit designed here could be

easily implemented in a chip design, such that these neural network

chips could be linked together to form larger networks.

The circuit design here using all NAND gate logic has a logic

gate count of 143 gates per neuron. Typical VLSI technology

available today would allow neural networks containing thousands

of neurons to be put on a single chip.


The results of the simulation of the digital circuit model for

the H-H neuron are given in the next chapter.


CHAPTER V

CIRCUIT SIMULATIONS AND RESULTS

Introduction

Simulation of both the analog and digital Hogg-Huberman

neuron circuits of Chapter III and Chapter IV was performed using

SPICE, a standardized software package for circuit simulation and

analysis. Version 2.0 of the SPICE program was used for all the

results given here and was run on a VAX 11/780 computer. The

first section of this chapter briefly discusses some of the basic

aspects of SPICE.

In addition to the SPICE simulation of the analog and digital

single neuron cells, a simple two by two neural network composed

of analog neurons was simulated with SPICE. For comparison, the

network was also implemented in software on an HP-1000

computer. A network of digital neurons was also simulated in

software. The results obtained and shown later in this chapter

showed a close correspondence between the SPICE simulation of the

neural circuit networks and the software implementations of the

same networks. Due to the nature of the analog circuit, however, the analog neuron has only a limited tolerance for noise and component error. The general results were nevertheless encouraging, and show that hardware implementations hold promise, given more study.


Although a SPICE simulation of more than just a single digital

neuron of the design from Chapter IV was infeasible due to the

immense computational demands required, the results from the

single neuron simulations showed behavior exactly as desired. In

order to examine the learning and recognition performance of a

network composed of such neurons, a software program was

written to simulate such a network. The results show that the

network is able to categorize patterns to a certain extent, with a

certain amount of immunity to noise. The digital H-H neuron offers

a considerable increase in speed over the analog circuit model, and

as it is a digital circuit, it is inherently less susceptible to

performance degradation due to changes in such parameters as

temperature.

The SPICE Program

The SPICE circuit simulation program is a software program

which uses numerical methods and equivalent models to analyze

and simulate circuits composed of discrete electronic components.

The circuit may be composed of a combination of resistors,

capacitors, inductors, voltage and current sources, transistors, and

diodes. In addition, SPICE allows various different models of each

of these elements to be used. For example, various transistor types,

such as FET's or BJT's may be used, and the different models and

parameters for each type can be specified by the user. The circuit

is described in terms of the elements in the circuit, the nodes of the


circuit to which each element is attached, and the associated values

for the various parameters of each element.

The SPICE program allows for various types of circuit analysis

to be performed. The circuit can be analyzed for its d.c. response,

a.c. response, and transient response both for d.c. and a.c. inputs.

The values of voltages and/or currents at selected nodes of the

circuit may be printed out or plotted as desired, allowing the user to

analyze the results.

When larger circuit devices such as operational amplifiers or logic gates are to be used in a circuit, they must be

described in terms of their internal composition of discrete circuit

elements. That is, SPICE can only simulate an op-amp if that op-

amp is described to SPICE in terms of resistors, transistors, etc., or

described in terms of an equivalent circuit model, again made up of

resistors, dependent voltage/current sources, etc. Such a sub-circuit

description need only be made once in an input file to SPICE. Once

this is done, larger circuits composed of these sub-circuits may be

designed, with SPICE making a call to the sub-circuit description

each time the device is encountered in the larger circuit.

It is easy to understand why circuits using such elements as op-amps or digital logic gates can quickly become computationally

intensive for SPICE, because the SPICE program must reduce the

entire circuit down to its discrete element level, analyzing the

voltages and currents at every node and branch in the circuit.


A sample input file listing to SPICE for the analog neuron

circuit designed in Chapter III is shown in Appendix A for

reference.

Analog Circuit Simulation

The analog circuit representation of the H-H neuron, given in

Chapter III, was simulated using SPICE, and the results compared

with desired performance. In addition to the single neuron circuit

simulation, a two by two neural network composed of these analog

neurons was simulated and compared with software simulation

results. The analog design showed encouraging performance,

although the precision of the circuit and its tolerance are not of the

level of the digital neuron design of Chapter IV.

Single Neuron Simulation

Many SPICE simulations of the analog neuron circuit were

performed using various neuron input values and various limiting

levels for Smin, Smax, Mmin, and Mmax. For simplicity of analysis,

the op-amps in the circuit were modeled as standard 741 op-amps, which kept the op-amp description given to SPICE simple. Because of the complexity of the internal structure of the 741 op-amp, an equivalent circuit model was used during the SPICE simulations. This equivalent circuit model is shown in Figure 27.

This equivalent circuit is a good model for the op-amp

Figure 27. Equivalent circuit model of the 741 op-amp used in the SPICE simulations.

in general, and in particular for all of the op-amp circuits used in the analog neuron design.

Although different values for the limiting values of the output

and the memory were tried, it was found that the best results were

obtained when the range of the output limits was kept within a few

volts, with the lower limit usually no smaller than a negative three

volts, and the upper limit no more than a positive three volts. In

addition, the memory values were also kept within the range of a

few volts, usually from the range of one to three volts. Of course, as

mentioned earlier, absolute maximum and minimum values for the

limits are imposed by the supply voltages of the op-amp circuits

used for the limiting.

Table 6 shows a listing of some of the results of the SPICE

simulation for the single analog neuron circuit. For all of these

simulations, the output was limited between negative and positive

three volts, and the memory was limited between positive one and

three volts. The table lists the actual neuron outputs obtained from SPICE for various inputs and memory values, along with the percentage error in these outputs relative to the exact values for the given inputs. Also shown are the next memory state values obtained from SPICE after an update, for various present memory states and present neuron and neighboring neuron outputs, along with the percentage error in the updated memory value. Table 6 shows that the

neuron performs as it should with moderately accurate results. The

largest percentage error for an output value based on input values

and memory values is 8.0%. These errors are based on the actual

Table 6. SPICE results for analog neuron with various inputs and corresponding outputs and errors.

 Inputs           Output   % Error   Neighbor outputs   Memory   Next memory   % Error
 I1      I2       O                  N1       N2        M        M'
 2.00    0.00     2.01     0.5%      0.00     1.00      1.00     2.01          0.5%
 2.00    1.00     2.03     1.5%      0.00     1.00      2.01     2.95          1.67%
 1.00    1.00     0.00     0.0%      1.00     2.00      2.95     2.03          1.5%
 1.00    0.00     2.04     2.0%      1.00     2.00      2.03     2.04          2.0%
 2.00    2.00     0.00     0.0%      1.00     2.00      2.04     1.07          7.0%
 2.00    1.00     1.08     8.0%      1.00     2.00      1.07     1.08          8.0%
 3.00    3.00     0.00     0.0%      1.00     2.00      1.08     0.96          4.0%

values which should be obtained from the H-H algorithm with the

given inputs. The largest percentage error in the next state

memory is 8.0%. The average error encountered in the output of

the neuron over many runs was 2.0%, while the average error in the

updated memory value was somewhat larger at 3.5%.
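
For reference, the percentage errors quoted here are simple relative errors against the exact H-H values; using the first row of Table 6 as an example:

def percent_error(measured, exact):
    return abs(measured - exact) / abs(exact) * 100.0

# First row of Table 6: I1 = 2.00, I2 = 0.00, M = 1.00, so the exact
# output is M * (I1 - I2) = 2.00, while SPICE gave 2.01.
print(round(percent_error(2.01, 2.00), 1), "percent")   # 0.5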

2 X 2 Network Simulation

In addition to the simulation of the single neuron circuit

designed in Chapter III, a simple neural network consisting of two

rows of these analog neurons with two neurons in each row was

also simulated using SPICE. The results were compared to those

from a Fortran program written to simulate the same 2 X 2

network. A listing is given in Appendix B. Several simulation runs

were made using both SPICE and the program to simulate the

network for various input patterns. The network was given various

input patterns to train with, and allowed to converge. The memory

values for the neurons, along with the output patterns associated

with the training patterns, were recorded. The network was then

given various input patterns and results were recorded. A

comparison was made between the SPICE simulation of the analog

neural circuit and the program simulation of the same network to

determine the relative accuracy of the analog circuit network.

Figure 28 shows an example of one of the simulation runs

made. This figure shows the two training patterns, the outputs of

the four neurons after each time step, and the final output pattern

SPICE simulation of analog neural network:

 Time   Inputs          Outputs           Next memory state
 Step   I1      I2      O1      O2        M1     M2     M3     M4
  1     2.0    -1.0    -2.00    0.99      1.05   2.01   1.00   1.00
  2    -3.0    -1.0     3.03    2.17      2.06   1.06   2.01   1.06
  3     2.0    -1.0    -2.29    3.01      2.08   1.09   1.05   2.07
  4    -3.0    -1.0     2.99    3.00      2.97   0.96   1.09   2.08
  5     2.0    -1.0    -2.01    3.00      3.00   1.04   1.00   2.97
  6    -3.0    -1.0     2.98    3.00      3.00   1.04   1.00   2.97
  7     2.0    -1.0    -2.01    3.00      3.00   1.04   1.00   2.97

Software simulation of analog neural network:

 Time   Inputs          Outputs           Next memory state
 Step   I1      I2      O1      O2        M1     M2     M3     M4
  1     2.0    -1.0    -2.00    1.00      1.00   2.00   1.00   2.00
  2    -3.0    -1.0     3.00    2.00      2.00   1.00   2.00   2.00
  3     2.0    -1.0    -3.00    3.00      2.00   1.00   1.00   3.00
  4    -3.0    -1.0     3.00    3.00      3.00   1.00   1.00   3.00
  5     2.0    -1.0    -2.00    3.00      3.00   1.00   1.00   3.00
  6    -3.0    -1.0     3.00    3.00      3.00   1.00   1.00   3.00
  7     2.0    -1.0    -2.00    3.00      3.00   1.00   1.00   3.00

Figure 28. SPICE and software simulations of training a 2 X 2 analog H-H neural network to two training patterns.


associated with each input pattern, along with the final memory

values. These results are shown for both the SPICE simulation of

the analog neural network circuit and the software simulation of a

network with the same input patterns. The network has converged

after six time steps. As with the single neuron simulation, the

results show that the analog circuit's performance is close to that of

the software simulation of a network trained to the same patterns

and with the same limiting parameters.

One noticeable feature of the results of Figure 28 is that each

of the two output patterns corresponding to the two training

patterns is composed of saturated values (for the true values

obtained from the program simulation of the network). That is, the

values in the output patterns are either Smin or Smax. For the case

of the analog neuron network, the values of the outputs are very

close to, but not exactly the limiting values. This is due to the error

introduced by the analog circuitry composing the network.

However, if an output value is very close to a limiting value, then in

this case it can be said to equal that limiting value and good results

can still be obtained.

One problem with the analog neural network is illustrated by the results of training the network to the two input patterns shown in Figure 29. Notice that in this case, the second term of

the output pattern corresponding to the second training pattern is

not a saturated value equal to either Smin or Smax. In this case, if

an input pattern with a second term slightly different from that of

the training pattern is given to the trained network, then the second

SPICE simulation of analog neural network:

 Time   Inputs           Outputs            Next memory state
 Step   I1      I2       O1       O2        M1     M2     M3     M4
  1     3.00    1.60    -2.89    -1.62      1.00   1.00   0.96   1.05
  2     1.00    3.00    -2.97    -2.76      0.96   2.01   1.04   0.96
  3     3.00    1.60    -2.91    -1.59      0.96   2.95   0.96   1.04
  4     1.00    3.00    -2.98    -2.97      0.96   3.00   0.96   1.04
  5     3.00    1.60    -2.91    -1.67      0.96   3.00   1.04   1.07
  6     1.00    3.00    -2.99    -2.98      0.96   3.00   0.96   1.08
  7     3.00    1.60    -2.93    -1.68      0.96   3.00   0.96   1.09

Software simulation of analog neural network:

 Time   Inputs           Outputs            Next memory state
 Step   I1      I2       O1       O2        M1     M2     M3     M4
  1     3.00    1.60    -3.00    -1.60      1.00   2.00   1.00   1.00
  2     1.00    3.00    -2.00    -3.00      1.00   3.00   1.00   1.00
  3     3.00    1.60    -3.00    -1.60      1.00   3.00   1.00   1.00
  4     1.00    3.00    -3.00    -3.00      1.00   3.00   1.00   1.00
  5     3.00    1.60    -3.00    -1.60      1.00   3.00   1.00   1.00
  6     1.00    3.00    -3.00    -3.00      1.00   3.00   1.00   1.00
  7     3.00    1.60    -3.00    -1.60      1.00   3.00   1.00   1.00

Figure 29. SPICE and software simulations of training a 2 X 2 analog H-H neural network to two patterns with non-saturated outputs.


term of the output is also slightly different. It might be decided

that this should be considered a new pattern and not the same as

the training output pattern. In the case of the analog circuit

network, however, this different output is unresolvable from the

training pattern output due to the percentage error of the analog

circuit. In this case the analog circuit would fail to distinguish

between the two patterns. This suggests that it may be desirable to use output patterns which are made up of saturated values. Although this limits the number of

individual output patterns that can be achieved with a given

number of neurons per row of the network, this enables the analog

circuit network to better resolve different patterns.

Neural Speed

Because the equivalent model for the 741 op-amp was used

instead of the actual circuit, the time delays for the neuron circuit

obtained from SPICE analysis cannot be considered completely

accurate. To obtain a measure of the speed of the analog neuron

circuit, the delay response was found from typical specification data

for the 741 type op-amp. The maximum rate of change of the

output of an op-amp with respect to a change in input is defined as

the slew rate of the op-amp. This gives a measure of how fast the

output reaches its actual value after a change in the input. For a

typical 741 type op-amp, this rate of change for the output is about

0.5 volts per microsecond. For the analog neuron circuit, the

Page 96: ^Le^^Jk^ - TDL

88

maximum change in the output of the op-amp is 6 volts for typical

Smin and Smax values of negative and positive three volts,

respectively. This means that the maximum time delay for each op-

amp would be about 12 microseconds. Thus, the time for the output of the analog neuron to settle to its new value after a change in inputs is approximately the delay of four cascaded op-amp circuits, or 48 microseconds.

The next time delay for the analog neuron is the delay from

the time new outputs are available on the neurons of a given row to

the time the new updated memory values are available at the

inputs to the sample and hold circuits of the memory. From the

circuit design in Chapter III, it can be seen that this time delay is

approximately the time delay of eight cascaded op-amp circuits, or

96 microseconds. Once the new memory value is available at the

input to the first sample and hold circuit of the memory, it takes

another 124 microseconds to update the memory. This is equal to

two times the minimum sample pulse width needed for each sample

and hold circuit, plus the time delay of the two op-amps used in the

sample and hold circuits.

Adding all of these times together, the average time needed

for one learn cycle for the analog neuron is approximately 268

microseconds. This is considered a very slow pace in comparison

with the cycle time of some of today's serial computers. However,

when a comparison is made between neural networks implemented

in hardware and those implemented in software on a serial

machine, the advantage of a hardware implementation in terms of

Page 97: ^Le^^Jk^ - TDL

89

speed is seen. For an n x n network, the time required for one

learning cycle is simply n times the learning cycle time for a single

neuron. This is due to the fact that all the neurons in a row are

updated simultaneously in parallel. For an n x n network

implemented in software on a serial computer, each learn cycle

requires n x n differences to be performed, n x n multiplications,

and n X n comparisons, among other requirements. For a single

learn cycle then, even if the multiplications can be performed in one

clock cycle, at least 8 x n x n cycle times would be a low estimate

for the number of clock cycles required for each learn cycle of the

network. For large networks, the advantage of a hardware

implementation is obvious. For example, a 1000 X 1000 network

would require 1000 x 268 microseconds, or 268 milliseconds for

each learning step for the analog circuit network designed in

Chapter III. The same network running on a serial computer with a

clock rate of 8 MHz would require 8x1000x1000x125 nanoseconds,

or 1 second per learning cycle. The hardware implementation

provides a savings of a factor of 3.73 in time. For even larger

networks, the savings in speed becomes even more noted. For

example, a 10,000 x 10,000 neuron network would require 2.68

seconds to process a single learn cycle on hardware, whereas the

same network would require at least 100 seconds to run a single

learn step when simulated in software.
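
The arithmetic behind these estimates is summarized in the short sketch below, using the slew-rate figure, op-amp counts, and 8 MHz serial machine assumed above:

slew_rate = 0.5e6              # 741 slew rate in volts per second (0.5 V/us)
swing = 6.0                    # worst-case output swing, -3 V to +3 V
op_amp_delay = swing / slew_rate          # about 12 us per op-amp stage

output_delay = 4 * op_amp_delay           # output update path: four op-amps
memory_path_delay = 8 * op_amp_delay      # output to memory update path
sample_hold_delay = 124e-6                # sample pulses plus two op-amps
learn_cycle = output_delay + memory_path_delay + sample_hold_delay
print(learn_cycle)                        # about 268e-6 s per row

n = 1000                                  # an n x n network
hardware_time = n * learn_cycle           # rows are updated in sequence
software_time = 8 * n * n * 125e-9        # >= 8 clock cycles per neuron at 8 MHz
print(hardware_time, software_time, software_time / hardware_time)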

Page 98: ^Le^^Jk^ - TDL

90 Digital Circuit Simulation

The digital logic circuit implementation of the Hogg-Huberman

neural element given in Chapter IV was also analyzed using SPICE

simulation, and the results showed behavior of the neuron exactly

as predicted. Although SPICE simulation of a larger network

composed of the digital neurons designed in Chapter IV was

infeasible due to computation times, larger networks were

simulated using a straightforward software implementation of an H-H network with neuron parameters equal to those of the digital

neuron. As the digital neuron performed as predicted, this software

implementation allows a look at the performance of a network

made up of such neurons.

Single Neuron Simulation

The single digital neuron circuit was simulated with the SPICE software, and the results were recorded. In order to implement NAND gate logic in the simulation, the internal circuit of the NAND gate had to be described to SPICE. Figure 30 shows the simple NAND gate circuit used, and Appendix C gives the SPICE input listing.

The simulation of the digital neuron showed behavior exactly

as predicted by the design. This was to be expected with the digital

design, and the SPICE analysis was used mainly for the purpose of

obtaining timing results for the neuron to predict neural speed

which is discussed later.

Figure 30. NAND gate circuit representation used for SPICE simulation of the digital H-H neuron. (A simple two-input NAND stage with inputs A and B, a +5 V supply, and 4K and 1.6K resistors, producing the output A·B.)

Network Simulations

Although SPICE simulation of a larger network incorporating

the digital neuron circuit was infeasible due to computational limits,

a network composed of neurons with the same parameters as the

digital neuron designed here was implemented with a software

program written by the author. Because the digital neuron's behavior is exact, there is no loss of generality in using software to examine some of the qualities of a larger network composed of these neurons.

In order to examine some of the features and the performance

of a digital H-H network model, many runs were performed using

the program simulation of a 10 X 10 network. The network was

trained to various input patterns, the memory values recorded, and

then the trained network was tested for recognition abilities.

The graphs in Figure 31 show the results of one test

procedure used. In this test, a 10 X 10 network of digital neurons

was trained with four randomly generated input patterns. As each

neuron takes four bits of input, each pattern consisted of 40 bits.

After the network had learned the four training patterns, the

memory values were recorded for the recognition phase. For each

of the training patterns, 100 recognition patterns were generated,

with each recognition pattern varying from the training pattern by

a single bit of data. These 100 patterns were submitted to the

trained network and a record of the number of mismatches

recorded. This process was repeated, submitting 100 patterns

which differed by two bits, 100 patterns differing by three bits, etc.,

Figure 31. Pattern recognition ability of a 10 X 10 digital H-H neural network. (One panel per training pattern; each panel plots the percentage of mismatched patterns, from 0% to 100%, against the number of bits, one through four, by which the submitted pattern differs from that training pattern.)


up to five bits different. This whole process was repeated for each

of the four training patterns. Figure 31 shows the percentage of

mismatched patterns as a function of the number of bits differing

from the training pattern for each of the four training patterns. As

can be seen from these graphs, the network maps most of the

patterns which differ by only a single bit of data into the same

output pattern. The percentage of mismatched patterns increases

with the number of differing bits, with almost no patterns differing

by more than four bits from the training pattern being mapped into

the same output pattern. This ability to map patterns which are

close to the training pattern into the same output suggests the

network has a certain amount of noise tolerance. That is, if the

network is given a noised version of a training pattern, the network

will usually still recognize the pattern. This is a desirable feature in

most cases of recognition problems.

One problem in certain recognition problems is that of

deciding how close a pattern should be to a training pattern to be

classified as the same pattern. The network designed here classifies

most input patterns which vary by more than one eighth of the

number of bits in a training pattern into another output. Whether

this classification is too broad or too narrow would depend on the

specific application.
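
A test of this kind can be expressed as a short software harness. In the Python sketch below, recognize() is a hypothetical stand-in for the trained network's recognition pass (input pattern in, output pattern out), and the 100-trial count matches the procedure described above:

import random

def flip_bits(pattern, k, rng):
    # Return a copy of the bit pattern with k randomly chosen bits inverted.
    noisy = list(pattern)
    for i in rng.sample(range(len(noisy)), k):
        noisy[i] ^= 1
    return noisy

def mismatch_rate(recognize, training_pattern, k, trials=100, seed=0):
    # Fraction of k-bit-noisy patterns mapped to a different output
    # pattern than the training pattern itself.
    rng = random.Random(seed)
    reference = recognize(training_pattern)
    misses = sum(recognize(flip_bits(training_pattern, k, rng)) != reference
                 for _ in range(trials))
    return misses / trials

# Usage, with recognize() standing in for the trained network's
# recognition pass over a 40-bit input pattern:
#   for k in range(1, 6):
#       print(k, mismatch_rate(recognize, pattern, k))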

Neural Speed

Results obtained from SPICE simulations of a single digital

neuron circuit were used to determine the average speed of the

neuron during the learn and recognition phases. For the NAND gate

circuitry shown in Figure 30, the time delay from stable logic level

inputs to a stable logic level output, as obtained from SPICE, averaged approximately 15 nanoseconds. This value was then used

to determine the time delay of the neuron for output and memory

update. The largest number of NAND gates in a single path from the

input of the neuron circuit to the output is 9. This number

multiplied by the time delay above for a single gate gives the

approximate time delay for the neuron to update its output,

assuming negligible time delays between the NAND gates

themselves. This would be the case if the neuron were

implemented in a VLSI design. Thus, the overall average time

delay for the output update of the digital neuron circuit is 135

nanoseconds. The maximum number of NAND gates along a single

path from the output of the neuron to the inputs of the two R-S flip-

flops of the memory is 11. This number shows that the set and

reset inputs to the memory flip-flops are stable 165 nanoseconds

after the outputs of the neuron and its neighboring neurons are

stable. Finally, the time delay from stable memory inputs to a new

updated memory value is the time delay of four NAND gates plus

the length of the memory update pulse applied to the clock input of the flip-flops. For a minimum memory update pulse length of 30 nanoseconds, this time delay is 90 nanoseconds.


The total time delay of the neuron during a learn cycle is thus 390 nanoseconds, or about 400 nanoseconds. For a network composed of such

neurons, the time delay during a learning cycle is this number times

the number of layers in the network. Again, the number of neurons

in each layer does not affect the time since every neuron in a single

layer is updated simultaneously. The savings in time over software

becomes significant for large networks. For example, a software

simulation of a 1000 X 1000 network running on a machine with an

8 MHz clock rate would require at least 1 second to compute a

single learn cycle of the network, if the multiplications, subtractions

and comparisons required were all performed in eight total clock

cycles. The same network running in a hardware implementation of

the type designed here would take only 1000 X 400 nanoseconds, or

400 microseconds, to complete a single learning cycle, a factor of

2,500 less. Clearly, if many cycles are to be run on a network of

this, or even a larger scale, then hardware implementation provides

a great advantage in speed.

The time delay during the recognition phase, when memory

update is not performed, is simply the time delay from input of the

neuron to the output, which was given before as 135 nanoseconds. For the same 1000 X 1000 network, this means a time delay of only 135 microseconds. For software simulation, one difference and one

multiplication must still be performed for each neuron, in addition

to the limiting steps. Not even counting the limiting operation, the

minimum time required for the software implementation running at


8 MHz is approximately 500 milliseconds, a factor of 3,703 longer than the hardware implementation.
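
The delay figures used in this comparison follow directly from the gate counts and the measured NAND delay; a minimal sketch of the calculation is:

nand_delay = 15e-9                   # average NAND gate delay from SPICE
output_delay = 9 * nand_delay        # longest input-to-output path (9 gates)
compare_delay = 11 * nand_delay      # output to flip-flop set/reset inputs
memory_delay = 4 * nand_delay + 30e-9    # four gates plus the update pulse
learn_cycle = output_delay + compare_delay + memory_delay     # about 390 ns

layers = 1000
hw_learn = layers * learn_cycle                  # about 400 us
sw_learn = 8 * layers * layers * 125e-9          # about 1 s at 8 MHz
hw_recognize = layers * output_delay             # about 135 us
sw_recognize = 4 * layers * layers * 125e-9      # about 0.5 s (difference and multiply)
print(hw_learn, sw_learn / hw_learn)             # roughly a 2,500x speedup
print(hw_recognize, sw_recognize / hw_recognize) # roughly a 3,700x speedup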

Clearly, the advantage of speed shows the need for looking to

hardware implementation of neural networks.

Summary

The results in this chapter show that hardware

implementations of the Hogg-Huberman neural model are realizable

with good results. Performance of both the analog and digital

neurons was as predicted with only small percentage errors for the

analog circuit. In addition, this chapter has shown that an

important advantage in speed can be gained by hardware

implementation.


CHAPTER VI

POSSIBLE FUTURE WORK AND CONCLUSION

Possible Future Work

The encouraging results achieved with neural networks

clearly demonstrate the need for continued study. Many behavioral

features of the networks are as yet still not completely understood.

Although the need for hardware implementations of these networks

is clear, a better understanding of the networks may be needed before large, truly useful networks can be produced. Clearly,

though, the study of ways to implement various types of networks

in hardware is needed, and continues to be an area of interest.

Although the network circuits designed here were simulated

with the SPICE circuit simulation software package, a next step

might be to implement the actual network in a hardware design, to

further test the performance of the network and perhaps examine

the behavior of larger networks. One area of the analog neuron

designed in Chapter III which might need improvement is that of

the memory circuits. An improved sample and hold circuit is

needed if long-term memory is to be achieved in a network. The

sample and hold circuits used here will let the current memory

value slowly diminish over time due to the leakage currents of the

transistors used in the circuit.


One problem facing the users of neural networks in the area

of pattern learning and recognition is that of how to represent a

desired learning pattern to the network. In some cases, this is not

an easy task. In many instances it is not possible to give raw data, as in the case of an electromagnetic signal, directly to a network for recognition. The pattern must be preprocessed, or reduced, into a form which can be operated on by the network (i.e., a set of numbers or voltage levels) in such a way that the relevant features of the pattern remain intact. Much more study is required

in this area if greater success in certain areas of pattern recognition

is to be achieved.

Conclusion

Artificial neural networks, although perhaps an old idea, are

still in the younger stages of development, still with many

unknowns. The possibilities may prove to reach far beyond anything achieved to this point, and it is such possibilities that encourage continued study. The promising results that have been achieved with neural networks, and the rapid increase in the number of different network models in just a few years' time, give reason to believe that large-scale successes in certain applications may be only a

short time away. This rapid growth is understandable, as these

networks promise possible solutions to the very problems which

have stymied traditional computing machines, tasks which man

finds easy, such as speech and image recognition.


One prohibitive problem in studying artificial neural networks

is the amount of computational time required to simulate the

networks on traditional serial computers. A possible solution to this

problem is the implementation of the networks in hardware. This

thesis has shown that certain artificial neural model algorithms,

such as the Hogg-Huberman model, can be relatively easily

implemented in hardware design. The gain in computational speeds

for large networks implemented in hardware can be very

significant. In addition, the H-H model has been shown to lend

itself to hardware implementation in VLSI, due to the fact that

access to the network is needed only at the edges of the network

array.

This thesis has shown that the H-H neural network model can

be implemented in hardware both in analog and digital form

suitable for VLSI integration. It has also been shown that such an

implementation would allow a significant increase in network speed

over software simulation.

Although much study and work in the area of artificial neural

networks is still needed, the door to the future possibilities of such

networks is still wide open. Perhaps the continued study of these

networks may someday yield a solution to the ever-continuing

quest for a truly intelligent non-biological system.


LIST OF REFERENCES

[1] W. S. McCulloch and W. Pitts, "A Logical Calculus of the Ideas Immanent in Nervous Activity," Bulletin of Mathematical Biophysics, Vol. 5, pp. 115-133, 1943.

[2] F. Rosenblatt, "Principles of Neurodynamics," Spartan Books, Washington D. C, 1962.

[3] M. Minsky and S. Papert, "Perceptrons," MIT Press, Cambridge, MA, 1969.

[4] J.J. Hopfield, "Neural Networks and Physical Systems with Emergent Collective Computational Abilities," Proc. Natl. Acad. Sci., Vol. 79, pp. 2554-2558, 1982.

[5] J.J. Hopfield and D.W. Tank, "Computing with Neural Circuits: A Model," Science, Vol. 233, pp. 625-633, Aug. 8, 1986.

[6] K. Fukushima, S. Miyake, and T. Ito, "Neocognitron: A Neural Network Model for a Mechanism of Visual Pattern Recognition," IEEE Trans. Systems, Man, and Cybernetics, Vol. SMC-13, No. 5, Sept./Oct. 1983.

[7] G. A. Carpenter and S. Grossberg, "A Massively Parallel Architecture for a Self-Organizing Neural Pattern Recognition Machine," Computer Vision, Graphics, and Image Processing, Vol. 37, pp. 54-115, 1987.

[8] T. J. Sejnowski, G. E. Hinton, and J. A. Anderson, "Parallel Models of Associative Memory," Erlbaum, Hillsdale, N. J., 1981.

[9] J. A. Anderson, "Cognitive and Psychological Computation with Neural Models," IEEE Trans. Systems, Man, and Cybernetics, Vol. SMC-13, No. 5, Sept./Oct. 1983.

[10] T. Hogg and B. A. Huberman, "Understanding Biological Computation: Reliable Learning and Recognition," Proc. Natl. Acad. Sci. Vol. 81, pp. 6871-6875, Nov. 1984.


[11] O. Port, "Computers that Come Awfully Close to Thinking," Business Week, pp. 92-97, June 2, 1986.

[12] T. Hogg and B. A. Huberman, Phys. Rev. Lett., Vol. 52, pp. 1048-1051, 1984.

[13] W. J. B. Oldham, "Modeling Neural Networks," E-Systems Report No. G3854.1401.01c, Jan. 8, 1986.

[14] W. J. B. Oldham, "Image Recognition and Pattern Recognition Through the Use of Neural Networks," E-Systems Report No. G6012.00.41, Dec. 1986.

[15] C. H. Rogers and W. J. B. Oldham, "Pattern Recognition Application of a Lightly Interconnected 3-Dimensional Neural Network," E-Systems, technical paper.

[16] A. S. Sedra and K. C. Smith, "Microelectronic Circuits," 2nd. Ed., Holt, Rinehart and Winston, N. Y., N. Y., pp. 92-97, 105, 238, 316-317, 1987.

[17] M. M. Mano, "Computer Logic Design," Prentice Hall, Englewood Cliffs, N. J., pp. 57-64, 129-143, 177, 1972.


APPENDIX A: SAMPLE SPICE INPUT LISTING

FOR AN ANALOG H-H NEURON CIRCUIT

AN ANALOG NEURON CIRCUIT * 741 OP-AMP SUBCIRCUIT * NODES: INVERTING INPUT-l,N0NINVERTES[GINPUT-2 * OUTPUT-3 .SUBCKT OPAIVIP 1 2 3 Rl 1 0 lOMEG R2 2 0 lOMEG Gl 4 0 1 2 -.19MMH0 R3 4 0 6.7MEG R4 4 0 4MEG CI 4 0 15.9NF El 3 0 4 0 -529 .ENDS OPAIVIP * LIMITER/COMPARATOR SUBCIRCUIT * NODES: IN-1, OUT-2, OFFSET-3, NEGLIMIT-4, POSLIMIT-5, * RF GOES BETWEEN 2&3, WHERE SL0PE=RF/1 MEG .SUBCKT LIMIT 12 3 4 5 Rl 1 3 IMEG Dl 3 8 DIODE D2 7 3 DIODE R2 8 4 700 R3 8 2 200 R4 2 7 200 R5 7 5 700 R6 6 0 500K XI 3 6 2 OPAMP .ENDS LIMIT * NEURON CIRCUIT * NODES: INPUTS-1&2, OUTPUT-3, MEMORY UPDAT-4&7, * NEIGHBOR INPUTS 5&6 Rl 2 8 lOOK R2 1 9 lOOK R3 9 0 lOOK R4 8 10 lOOK


104 XI 8 9 10 OPAMP R5 10 0 500K El 11 0 P0LY(2) 10 0 12 0 0 0 0 0 1 X2 11 13 14 15 16 LIMIT RF2 13 14 IMEG V15 15 0 DC 8.7V V16 16 0 DC -8.7V X3 17 13 17 OPAMP X4 17 3 18 15 16 LIMIT RF4 3 18 IMEG * OUTPUT COMPARISON WITH NEIGHBORS FOR MEMORY * ESrCREMENT/DECREMENT. V(50)=l IF V(3)>V(5)&V(6), * V(50)=-l IF V(3)<V(5)&V(6), ELSE V(50)=0 R6 6 19 lOOK R7 3 20 lOOK R8 20 0 200K R9 19 21 200K X5 19 20 21 OPAMP RIO 5 22 lOOK Rll 3 23 lOOK R12 23 0 200K R13 22 24 200K X6 22 23 24 OPAMP X7 21 25 26 27 0 LIMIT X8 24 28 29 27 0 LIMIT X9 21 30 31 0 32 LIMIT XIO 24 33 34 0 32 LIMIT R0FF7 26 35 IMEG R0FF8 29 35 IMEG R0FF9 31 36 IMEG ROFFIO 34 36 IMEG VOFFl 35 0 DC -IV V0FF2 36 0 DC IV V27 27 0 DC 6V V32 32 0 DC -6V R25 25 37 lOOK R28 28 37 lOOK R30 30 37 lOOK R33 33 37 lOOK R38 38 0 50K XII 37 38 39 OPAMP X12 39 40 41 42 0 LIMIT


105 XI3 39 43 44 0 45 LIMIT R0FF12 41 46 IMEG R0FF13 44 47 IMEG VOFFl2 46 0 DC -4V VOFF 13 47 0 DC 4V V42 42 0 DC 13VG V45 45 0 DC -13V R39 40 48 500K R40 43 48 500K R41 48 50 lOOK R42 49 0 50K XI4 48 49 50 OPAMP R50 50 52 lOOK R51 12 52 lOOK R52 51 0 lOOK R53 51 53 lOOK X15 51 52 53 OPAMP XI6 53 54 55 56 57 LIMIT RF16 54 55 IMEG R0FF16 55 58 IMEG V016 58 0 DC -4V V56 56 0 DC -5.5V V57 57 0 DC -8.5V R54 54 59 lOOK R55 60 59 lOOK R56 61 0 50K R57 59 62 lOOK X17 59 61 62 OPAMP V60 60 0 DC -4V Jl 64 63 62 JFET RJ 63 62 500K DJ 63 4 DIODE CJ 64 0 5NF IC=1V E2 65 0 64 0 1 J2 67 66 65 JFET RJ2 66 65 500K DJ2 66 7 DIODE CJ2 67 0 5NF IC=1V E3 12 0 67 0 1 VI 1 0 PULSE(2 1 50US 5US 5US 140US 200US) V2 2 0 PULSE(1 2 50US 5US 5US 15US 50US) VNl 5 0 PULSE(0 1 50US 5US 5US 140US 200US)


106 VN2 6 0 PULSE(1 2 50US 5US 5US 140US 200US) VSl 4 0 PULSE(-15 15 lOUS lUS lUS 3US 25US) VS2 7 0 PULSE(-15 15 15US lUS lUS 3US 25US) .MODEL DIODE D .MODEL JFET NJF(BETA=2.0E-3) .TRAN lUS 195US .PRINT TRAN V(l,2) V(12) V(3) V(5) V(6) .NODESET V(64)=1V V(67)=1V V(4)=-15V V(7)=-15V .IC V(64)=1V V(67)=1V .OPTIONS NOMOD NOPAGE LIMPTS=300 ITL5=0 .WIDTH IN=80 OUT=80 END


APPENDIX B: FORTRAN LISTING FOR SIMULATION

OF AN H-H NEURAL NETWORK

***********************************************************************
***********************************************************************

program hhld c c c this program simulates a 1- dimensional Hogg-Huberman c neural network. The user can choose any grid size up to c a 1024 X 50 array. The user can also choose zero or c periodic boundary conditions, neuron output based on c absolute difference or actual difference in inputs, and c comparison with neighbor outputs or comparison with c average of neighbor outputs for memory update. c

integer o(52,1026),s(52,1026),r(52,1026) integer smax,smin,mmax,mmin,m(52,1026) integer con,num,tper,steps integer npat,npt,nlay,itrain,nout,nbc,nmem real delta character pname(32)*15,ofile*32,memfile*32 character str(2)*21,bound(2)*21,outrule(2)*25, character memrule(2)*20 data strf train and recognize ',' recognize '/ data bound/' zero boundaries ',' periodic boundaries '/ data outrule/' input difference ','absolute input

+difference'/ data memrule/' neighbor compare ',' average compare '/

c c c

Initialize some parameters to default values

itrain =0 npt=1024 nlay=50 npat=2 nbc=0


      nout=0

c c allow user to set memory and output limits, boundary conditions, c output rule, memory update rule, array size, and number of c input training patterns, c 15 write(*,*)'0 - exit'

write(*,'(a,i3,i5)')' 1 - smin,smax = ',smin,smax write(*,'(a,i3,i5)')' 2 - mmin,mmax = ',mmin,mmax write(*,'(a,i5)')' 3 - number of neurons per layer= ',npt write(*,'(a,i5)')' 4 - number of layers in network= ',nlay write(*,'(a,i5)')' 5 - number of training patterns= ',npat write(*,'(a,i6)')' 6 - maximum # of time periods for

+convergence=',tper write(*,'(a,a21)')' 7 - ',str(itrain+l) write(*,'(a,a21)')' 8 - boundary conditions= ',bound(nbc+l) write(*,'(a,a25)')' 9 - output rule= ',outrule(nout+l) write(*,'(a,a20)')' 10 - memory update rule= ',

+memrule(nmem+l) write(*,'(a,fl2.3)')' 11 - compare delta= ',delta write(*,'(a)')' 12 - run the program' write(*,*y write(*,'(a,i2,a,$)')' enter selection (12 to run) [',inq,

+']:• read(*,'(i2)',iostat=ierr)inq if(ierr.ne.O)goto 15 if(inq.lt.0.or.inq.gt.l2)goto 15 if(inq.eq.O)then

stop end if goto (20,25,30,35,40,45,50,55,60,65,70,200),inq

2 0 write(*,'(a,$)')' enter smin,smax: ' read(*,*)smin,smax goto 15

   25 write(*,'(a,$)')' enter mmin,mmax: '

read(*,*)mmin,mmax goto 15

3 0 write(*,'(a,$)')' enter number of neurons per layer- ' read(*,*) npt goto 15

3 5 write(*,'(a,$)')' enter number of layers in network- ' read(*,*)nlay goto 15

40 write(*,'(a,$)')' enter number of training patterns- ' read(*,*)npat goto 15

4 5 write(*,'(a,$)')' enter max # of periods to converge: ' read(*,*)tper goto 15

5 0 itrain = iabs(itrain-l) ipn=0 goto 15

5 5 nbc= iabs(nbc-l) goto 15

60 nout=iabs(nout-l) goto 15

65 nmem=iabs(nmem-l) goto 15

7 0 write(*,'(a,$)')' enter compare delta: ' read(*,*)delta goto 15

c c run the network c c if training, initialize the memory, output, and input values to zero c 200 if(itrain.eq.O)then

do i=l,1026 do j=l,52

m(i,j)=0 o(i,j)=0 s(i,j)=0 r(i,j)=0

end do end do

end if


c get the training or recognition patterns c

if (itrain.eq.O)then do i=l,npat

call sload(s,npt,pname,ipn) end do

else call sload(s,npt,pname,ipn)

end if c c call the training/recognition subroutine c

call hoghub(s,npt,itrain,nlay,npat,m,o,r,smin,smax,mmin, +mmax,tper,steps,con,nbc,nout,nmem,ipn)

if(itrain.eq.O)then if(con.eq.O)then

write(*,'(a,i6,a)')' network did not converge in less than ', + tper,' time steps'

goto 15 end if write(*,'(a,i4)')' # of time steps to converge= ',steps

end if c c output the results to files c

if(itrain.eq.l)then call compare(s,r,npt,npat,pname,ipn,delta)

end if write(*,'(a)')' do you wish to output pattern results? l=yes

+, 0=no: ' read(*,*)inum if(inum.eq.l)then

220 write(*,'(a,$)')' enter name of output file: ' read(*,'(a25)') ofile open(3,file=ofile,iostat=ierr,status='unknown') if(ierr.ne.O)then

write(*,'(a,i6)')' output file open error # ',ierr goto 220

end if call outfile(s,r,steps,smax,smin,mmax,mmin,nbc,nout,nmem,

+npt,nlay,npat,bound,outrule,memrule)

      close(unit=3)

end if write(*,*)' do you wish to output memory values? l=yes,

+ 0=no: ' read(*,*)inum if(inum.eq.l)then

23 0 write(*,'(a,$)'y enter name of output file: ' read(*,'(a25)') ofile open(3,file=ofile,iostat=ierr,status='unknown') if(ierr.ne.O)then

write(*,'(a,i6)')' output file open error # ',ierr goto 230

end if call memout(m,mmin,mmax,steps,nmem,npt,nlay,memrule) close(unit=3)

end if itrain=l goto 15 end

c

      subroutine sload(s,npt,pname,ipn)
c***********************************************************************
c
c     this subroutine reads in the patterns for training or recognition
c     and puts them in the s(,) array.
c
      integer s(52,1026),npt,ipn,nst,temp(1024),maxs
      real x(1026),maxx
      character pname(32)*15,iname*32
c
      ipn=ipn+1
      nst=1
   10 write(*,'(a,i3)')' enter input pattern name # ',ipn
      read(*,'(a15)')pname(ipn)
      write(*,'(a,a15)')' name of input file for ',pname(ipn)
      read(*,'(a32)')iname
      open(2,file=iname,iostat=ierr,status='old')
      if(ierr.ne.0)then
        write(*,'(a,i6)')' input file open error # ',ierr
        goto 10
      end if


   15 write(*,'(a,i4,a,$)')' enter starting sample number [',
     +nst,']: '
      read(*,*)nst
      if(nst.gt.3072)goto 15
      if(nst.gt.1)then
        read(2,20)(temp(i),i=1,nst-1)
      end if
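c
c     read the pattern, then scale each sample about the pattern
c     average and quantize to small integers (a sample equal to the
c     average maps to +16, a zero-valued sample maps to -16)
c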

   20 format(8i7)
      read(2,20)(s(ipn,i),i=2,npt+1)
      maxs=0
      maxx=0.
      do i=2,npt+1
        if(s(ipn,i).gt.maxs)maxs=s(ipn,i)
        maxx=maxx+float(s(ipn,i))
      end do
      savg=maxx/float(npt)
      do i=2,npt+1
        x(i)=(2.0*float(s(ipn,i))/savg)-1.
      end do
      do i=2,npt+1
        s(ipn,i)=ifix(16.*x(i))
      end do
      close(2)
      return
      end

c

      subroutine hoghub(s,npt,itrain,nlay,npat,m,o,r,smin,smax,
     +mmin,mmax,tper,steps,con,nbc,nout,nmem,ipn)
c***********************************************************************
c
c     this is the training/recognition subroutine for the network
c
      integer s(52,1026),r(52,1026),o(52,1026),m(52,1026)
      integer smin,smax,mmin,mmax,tper,steps,con,nbc,nout,nmem
      integer p(52,1026),q(52,1026),u
c
c     put the pattern into the first layer and compute the outputs
c
      con=0
      steps=0
      u=0
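c
c     each pass of the loop below loads a pattern into the first layer,
c     computes the outputs of every layer, checks the last layer for
c     convergence (training only), and applies the memory update rule
c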


      if(itrain.eq.1)num=1
      if(itrain.eq.0)num=npat
      do 300 while(con.ne.1)
      do i=1,num
        steps=steps+1
        if(steps.gt.tper)then
          con=0
          return
        end if
        if(itrain.eq.0)then
          do j=1,npt+2
            p(1,j)=s(i,j)
          end do
        else
          do j=1,npt+2
            p(1,j)=s(ipn,j)
          end do
        end if
        if(nbc.eq.1)then
          if(itrain.eq.0)then
            p(1,1)=s(i,npt+1)
            p(1,npt+2)=s(i,2)
          else
            p(1,1)=s(ipn,npt+1)
            p(1,npt+2)=s(ipn,2)
          end if
        end if

c
c     calculate the outputs
c
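c     (each output is the neuron's memory value times the difference of
c      its two neighbors in the previous layer, clipped to the range
c      smin..smax; nout=1 uses the magnitude of the difference instead)
c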

        do j=2,nlay+1
          do k=2,npt+1
            nden=(p(j-1,k-1)-p(j-1,k+1))
            if(nout.eq.1)nden=iabs(nden)
            nden=m(j,k)*nden
            o(j,k)=nden
            if(nden.ge.smax)o(j,k)=smax
            if(nden.le.smin)o(j,k)=smin
          end do
          if(nbc.eq.1)then
            o(j,1)=o(j,npt+1)
            o(j,npt+2)=o(j,2)


          end if
        end do
c
c     if not training then we are done
c
        if(itrain.ne.0)goto 400
c
c     if training then check for convergence
c
        if(steps.gt.nlay)then
          checks=0
          u=u+1
          if(u.gt.npat)u=1
          do j=2,npt+1
            r(u,j)=o(nlay+1,j)
            if(r(u,j).ne.q(u,j))checks=checks+1
          end do
          if(checks.ne.0)then
            con=0
          else
            con=1
          end if

        end if
c
c     now update the memory elements
c
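c     (nmem=1 compares each output with the average of its two
c      neighbors, otherwise with their difference; the memory value is
c      stepped up or down accordingly and held between mmin and mmax)
c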

        do j=2,nlay+1
          do k=2,npt+1
            if(nmem.eq.1)then
              nw=(o(j,k-1)+o(j,k+1))/2
              if(o(j,k).gt.nw)then
                m(j,k)=m(j,k)+1
                if(m(j,k).gt.mmax)m(j,k)=m(j,k)-1
              else if(o(j,k).lt.nw)then
                m(j,k)=m(j,k)-1
                if(m(j,k).lt.mmin)m(j,k)=m(j,k)+1
              end if
            else
              nw=o(j,k-1)-o(j,k+1)
              if(o(j,k).gt.nw)then
                m(j,k)=m(j,k)+1
                if(m(j,k).gt.mmax)m(j,k)=m(j,k)-1


              else if(o(j,k).lt.nw)then
                m(j,k)=m(j,k)-1
                if(m(j,k).lt.mmin)m(j,k)=m(j,k)+1
              end if
            end if
          end do
        end do
        do j=2,nlay+1
          do k=1,npt+2
            q(u,k)=r(u,k)
            p(j,k)=o(j,k)
          end do
        end do
      end do
  300 end do
  400 if(itrain.ne.0)then
        do j=2,npt+1
          r(ipn,j)=o(nlay+1,j)
        end do
      end if
      return
      end

c***********************************************************************

      subroutine outfile(s,r,steps,smax,smin,mmax,mmin,nbc,
     +nout,nmem,npt,nlay,npat,bound,outrule,memrule)
c***********************************************************************
c
c     this subroutine outputs the pattern results to a file
c
      integer s(52,1026),r(52,1026),steps,smax,smin,mmax
      integer mmin,nbc,nout,nlay,nmem,npt,npat
      character bound(2)*21,outrule(2)*25,memrule(2)*20
c
      write(3,'(a,i4,a,i4)')'smax= ',smax,' smin= ',smin
      write(3,'(a,i4,a,i4)')' mmax= ',mmax,' mmin= ',mmin
      write(3,'(a,a21)')'boundary conditions= ',bound(nbc+1)
      write(3,'(a,a25)')'output rule= ',outrule(nout+1)
      write(3,'(a,a20)')'memory update rule= ',memrule(nmem+1)
      write(3,'(a,i4)')'number of neurons per layer= ',npt
      write(3,'(a,i4)')'number of layers= ',nlay
      write(3,'(a,i4)')'number of steps to converge= ',steps


      write(*,*)' '
c
      do i=1,npat
        write(3,20)(s(i,j),j=2,npt+1)
        write(3,20)(r(i,j),j=2,npt+1)
      end do
   20 format(8i7)
      return
      end

c***********************************************************************

      subroutine memout(m,mmin,mmax,steps,nmem,npt,
     +nlay,memrule)
c
      integer m(52,1026),mmin,mmax,steps,nmem,npt,nlay
      character memrule(2)*20
c
      write(3,'(a,i4,a,i4)')' mmax= ',mmax,' mmin= ',mmin
      write(3,'(a,i4)')' number of steps to converge= ',steps
      write(3,'(a,a20)')'memory update rule= ',memrule(nmem+1)
      write(3,*)' '
      do i=2,nlay+1
        write(3,20)(m(i,j),j=2,npt+1)
      end do
   20 format(8i7)
      return
      end


APPENDIX C: A SAMPLE SPICE INPUT LISTING

FOR A DIGITAL H-H NEURON CIRCUIT

A DIGITAL NEURON CIRCUIT
* NAND GATE SUBCIRCUIT
.SUBCKT NAND 1 2 3 4
* NODES: INPUTS(1&2), OUTPUT(3), VCC(4)
Q1 9 5 1 QMOD
D1CLAMP 0 1 DMOD
Q2 9 5 2 QMOD
D2CLAMP 0 2 DMOD
RB 4 5 4K
R1 4 6 1.6K
Q3 6 9 8 QMOD
R2 8 0 1K
RC 4 7 130
Q4 7 6 10 QMOD
DVBEDROP 10 3 DMOD
Q5 3 8 0 QMOD
.ENDS NAND
.SUBCKT FLIP 1 2 3 4 5 7 9
* R-S FLIP FLOP SUBCIRCUIT
* NODES: S(1), R(2), CLOCK(3), Q(4), VCC(5)
X1 1 3 6 5 NAND
X2 2 3 8 5 NAND
X3 6 7 9 5 NAND
X4 8 9 7 5 NAND
X5 3 3 10 5 NAND
X6 9 10 11 5 NAND
X7 7 10 12 5 NAND
X8 11 13 4 5 NAND
X9 12 4 13 5 NAND
.ENDS FLIP
* SINGLE NEURON CIRCUIT
X1 3 3 10 99 NAND
X2 5 5 9 99 NAND
X3 4 4 8 99 NAND


X4 2 2 7 99 NAND
X5 1 1 6 99 NAND
ECU 11 0 POLY(3) 6 0 7 0 3 0 0 0 0 0 0 0 0 0 0 0
+ 0 0 0 0 .08
ECD 12 0 POLY(3) 1 0 2 0 10 0 0 0 0 0 0 0 0 0 0 0
+ 0 0 0 0 .1
R1 1 0 500K
R2 2 0 500K
R3 3 0 500K
R4 4 0 500K
R5 5 0 500K
R6 6 0 500K
R7 7 0 500K
R8 8 0 500K
R9 9 0 500K
R10 10 0 500K
R11 11 0 500K
R12 12 0 500K
R13 13 0 500K
ES1 13 0 POLY(2) 11 0 5 0 0 0 0 0 .3
ER1 14 0 POLY(2) 12 0 9 0 0 0 0 0 .25
ES2A 15 0 POLY(2) 8 0 9 0 0 0 0 0 .22
ES2C 17 0 POLY(2) 11 0 9 0 0 0 0 0 .23
ER2A 18 0 POLY(2) 13 0 8 0 0 0 0 0 .25
ER2B 19 0 POLY(3) 12 0 4 0 5 0 0 0 0 0 0 0 0 0
+0 0 0 0 0 0 .1
X6 15 15 20 99 NAND
X7 14 14 21 99 NAND
X8 20 21 22 99 NAND
X9 22 22 23 99 NAND
X10 17 17 24 99 NAND
X11 23 24 25 99 NAND
X12 18 18 26 99 NAND
X13 19 19 27 99 NAND
X14 26 27 28 99 NAND
X15 29 29 31 99 NAND
X16 14 14 30 99 NAND
X17 30 31 32 99 NAND
X18 25 25 33 99 NAND
X19 31 33 34 99 NAND
X20 38 38 39 99 NAND
X21 35 32 42 4 99 97 98 FLIP


X22 34 36 42 5 99 95 96 FLIP
R37 37 0 500K
R39 39 0 500K
E1 40 0 POLY(2) 37 0 39 0 0 0 0 0 .27
R40 40 0 500K
X23 8 9 41 99 NAND
R41 41 0 500K
E2 3 0 POLY(2) 40 0 41 0 0 0 0 0 .25
R28 28 0 500K
E3 35 0 POLY(2) 31 0 13 0 0 0 0 0 .25
E4 36 0 POLY(2) 31 0 28 0 0 0 0 0 .25
VCC 99 0 DC 5V
VN1 1 0 PULSE(0 3 615NS 10NS 10NS 780NS 1400NS)
VN2 2 0 PULSE(0 3 815NS 10NS 10NS 580NS 1400NS)
VI1 37 0 PULSE(0 3 15NS 10NS 10NS 780NS 1400NS)
VI2 38 0 PULSE(0 3 615NS 10NS 10NS 780NS 1400NS)
VCL 42 0 PULSE(0 3 0 5NS 5NS 200NS 1400NS)
VSET 29 0 PULSE(0 3 0 5NS 5NS 200NS 1400NS)
.NODESET V(97)=0 V(98)=3.5 V(95)=0 V(96)=3.5
+V(4)=3.5 V(5)=3.5 V(3)=0
.MODEL DMOD D
.MODEL QMOD NPN(BF=75 RB=100 CJE=1PF CJC=3PF)
.PRINT TRAN V(37) V(38) V(4) V(5) V(3)
.TRAN 10NS 1500NS
.OPTIONS NOMOD NOPAGE LIMPTS=250 CPTIME=20000 ITL5=0
.WIDTH IN=80 OUT=80

.END


PERMISSION TO COPY

In presenting this thesis in partial fulfillment of the requirements for a master's degree at Texas Tech University, I agree that the library and my major department shall make it freely available for research purposes. Permission to copy this thesis for scholarly purposes may be granted by the Director of the Library or my major professor. It is understood that any copying or publication of this thesis for financial gain shall not be allowed without my further written permission and that any user may be liable for copyright infringement.

Disagree (Permission not granted)          Agree (Permission granted)

Student's signature                        Student's signature

Date                                       Date

