
VLSI Implementation of CSFN Neural Network for Pattern Recognition Application

Hamed Farshbaf and Hadi Esmaelzadeh
Electrical and Computer Engineering Department
University of Tehran, Tehran, Iran

Abstract
A digital implementation is presented for a neural network that uses conic section function neurons. The network is employed in a digit pattern recognition application. The neural network is trained without considering the non-idealities of hardware implementation, and the obtained weight parameters are then converted to a fixed-point bit-string format for hardware implementation. The number of bits used in this conversion forces a trade-off between accurate operation of the network and the size of the hardware. After finding the optimum number of bits, the steps toward implementation of the network are taken. Simulation results at different levels of the design flow are presented.

1 Introduction

Neural networks are among the best-known algorithms for pattern recognition problems. Their ability to learn from training examples and to generalize the acquired knowledge makes them a suitable alternative for problems in which uncertainty and complexity exist [1].

Learning is important because a neural network can attain an algorithm that solves a complicated problem from available examples, without addressing the problem directly. Generalization is essential as well, since the neural network can apply the knowledge obtained in the learning phase to similar problems that were not present in the training set.

In the literature, several neuron models have been introduced for artificial neural networks. The most well known of these models are the Multi-Layer Perceptron (MLP) and Radial Basis Function (RBF) networks. Both MLPs and RBFs have been used in numerous pattern recognition applications [2], [3]. Because these networks have complementary characteristics, several attempts have been made to bring MLPs and RBFs under a unified framework that exploits the advantages of both networks together. The introduction of Conic Section Function Networks (CSFNs) [4] is a novel approach that uses an analytic interpretation of the types of decision boundaries made by MLPs and RBFs to combine the two in one network.

The aim of this work is to deploy the potential power of CSFN networks in pattern recognition of digits. A digital architecture is introduced for the network, and simulation results at different levels of the design flow are presented. Section 2 gives a brief overview of the theory of CSFNs, section 3 defines the application in which the network is employed, section 4 presents the VLSI design, and section 5 summarizes this work.

2 Conic Section Function Networks (CSFNs)

CSFNs aim to combine the advantages of both MLP and RBF networks in a unified network. The main idea is that CSFNs can behave like RBFs, like MLPs, or as a mixed RBF-MLP, according to the conditions imposed by the distribution of the training data.

Because of the linear relations in the excitation equation of their neurons, MLP networks form open boundaries in the input space. RBF networks, on the other hand, separate data points using closed boundaries, since they use square-law distance measures. The capability to create both open and closed decision boundaries gives a neural network more flexibility to separate different clusters of data spread in the input space. CSFNs use Equation 1 as the excitation of their neurons: the first term is the MLP part and the second term is the RBF part, combined by a single parameter.

Equation 1:

$$\mathrm{Net} \;=\; \sum_{i=1}^{n} w_i\,(i_i - c_i) \;-\; \cos(\omega)\,\sqrt{\sum_{i=1}^{n} (i_i - c_i)^2}$$

Here the i_i are the inputs of the neuron, and w_i, c_i and cos(ω) are parameters. The c_i act as centers, as in RBF networks, and the w_i are weights similar to the weight parameters in MLP networks. It is cos(ω) that gives CSFN neurons the capability to create different types of open- and closed-boundary decision borders. Varying the angle ω from π/2 to π transforms the shape of the decision borders from open boundaries to closed boundaries. A minimal sketch of this excitation is given below.
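For illustration, the following C++ sketch (not from the paper; the function and variable names are hypothetical) computes this excitation for one neuron:

    #include <cmath>
    #include <vector>

    // Sketch of the CSFN excitation of Equation 1. in[i] are the neuron
    // inputs i_i, w[i] the weights, c[i] the centers, and omega the opening
    // angle that blends the MLP and RBF terms.
    double csfnNet(const std::vector<double>& in, const std::vector<double>& w,
                   const std::vector<double>& c, double omega) {
        double mlp = 0.0;   // linear (MLP) part: sum of w_i * (i_i - c_i)
        double dist2 = 0.0; // squared distance feeding the RBF part
        for (std::size_t i = 0; i < in.size(); ++i) {
            double d = in[i] - c[i];
            mlp += w[i] * d;
            dist2 += d * d;
        }
        // cos(omega) = 0 at omega = pi/2 leaves only the MLP part (open
        // borders); as omega approaches pi the RBF term dominates (closed).
        return mlp - std::cos(omega) * std::sqrt(dist2);
    }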

Figure 1. Decision borders of a CSFN neuron with varying ω (panels for ω = π/2, 3π/5, 7π/10, 9π/10, and π).


This is shown in Figure 1 for a three-input neuron in which one input has a fixed value acting as a bias and the other two are scanned from 0 to 1. Circular (closed) borders and linear (open) borders are formed in the two-dimensional input space for ω = π and ω = π/2, respectively. Other conic section shapes are obtained for ω values between these two extremes.

CSFNs can adapt their decision boundaries to the distribution of the training data during the learning phase. This is shown in two examples with different data distributions, in which the same CSFN network is exercised to separate four classes of data.

Figure 2. Open (a) and closed (b) discriminating borders of a CSFN.

In the example shown in Figure 2(a), the data classes are chosen such that open boundaries are required to solve the problem, and after training the CSFN network has shaped appropriate borders. The dashed lines are decision boundaries, and the four data classes are shown with the symbols +, ∗, ×, and ο.

As another example, Figure 2(b) shows the decision borders for data classes selected so that closed boundaries are needed to separate them accurately. The same CSFN network is employed in the simulations of Figure 2(a) and Figure 2(b), and the results demonstrate the excellent flexibility of decision borders in CSFN networks. These properties of CSFN networks in pattern recognition applications are the motivation of the present work.

3 Digit Pattern Recognition

The designed CSFN is utilized in a digit pattern recognition application. The input patterns are applied to the network as a 3×5 matrix of pixels. The network has 15 input neurons, to which the pixels of the input pattern are applied, and ten neurons in its output layer, each corresponding to one of the digits. The desired behavior of the network is to activate the output neuron corresponding to the digit whose pattern is applied to the input neurons.

The deployed network has two layers (the input neurons are not counted as a layer). Input neurons work as buffers, the neurons in the hidden layer are CSFN neurons, and the neurons of the output layer are simple linear neurons. By the definition of the pattern recognition application, the numbers of input and output neurons are fixed, but the number of hidden neurons is determined through simulation. The minimum number of neurons in the hidden layer that enables the network to solve the defined digit pattern recognition problem is five. A sketch of this topology is given below.
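A minimal C++ sketch of the resulting 15-5-10 forward pass (an illustration of the description above; structure and parameter names are assumptions, not the paper's code) is:

    #include <cmath>

    const int N_IN = 15, N_HID = 5, N_OUT = 10;

    double sigmoid(double x) { return 1.0 / (1.0 + std::exp(-x)); }

    // One hidden CSFN neuron: weights, centers, and opening angle.
    struct CsfnNeuron { double w[N_IN]; double c[N_IN]; double omega; };

    // Forward pass: 15 buffered inputs -> 5 CSFN hidden neurons with
    // sigmoid activation -> 10 simple linear output neurons.
    void forward(const double in[N_IN], const CsfnNeuron hid[N_HID],
                 const double outW[N_OUT][N_HID], double out[N_OUT]) {
        double h[N_HID];
        for (int j = 0; j < N_HID; ++j) {
            double mlp = 0.0, dist2 = 0.0;
            for (int i = 0; i < N_IN; ++i) {
                double d = in[i] - hid[j].c[i];
                mlp += hid[j].w[i] * d;
                dist2 += d * d;
            }
            h[j] = sigmoid(mlp - std::cos(hid[j].omega) * std::sqrt(dist2));
        }
        for (int k = 0; k < N_OUT; ++k) {
            out[k] = 0.0;
            for (int j = 0; j < N_HID; ++j) out[k] += outW[k][j] * h[j];
        }
    }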

Considering the equation presented for CSFN neurons in section 2, the error back-propagation algorithm [5] is used for training the network. Figure 3 shows simulation results for the network trained for 1000 epochs using the C++ language.

Figure 3. Outputs of the CSFN in the digit pattern recognition application (horizontal axis: digit, 0-9; vertical axis: neuron outputs, 0 to 1).

The horizontal axis is the digit whose pattern is applied to the network, and the vertical axis is the output of the ten neurons in the output layer. As seen, in each case only one output is activated (value above 0.7) and all others are not activated (values below 0.3).

4 VLSI Design

This section contains a brief description of the steps taken toward the VLSI design of the trained CSFN network. The design flow starts with a system-level simulation of the network. This step is intended to resolve a trade-off between the number of bits used for the fixed-point conversion on one hand, and the average error caused by the limited accuracy of this fixed-point format on the other. This trade-off is discussed in more detail in section 4.1. The system-level simulation results are used in the HDL description of the system, in which the bus widths are determined directly from these results. The HDL code is then synthesized and post-synthesis simulation is performed. The netlist from synthesis is used for automatic layout generation. The following subsections correspond to the above-mentioned steps.

4.1 System-level Simulation

System-level simulation is aimed at considering the hardware limitations that influence the design and the high-level description of the system. First, the network is trained using C++ code in which all parameters and variables of the neural network are defined in double format. Moving toward VLSI design, the main issue is how to represent these parameters in the corresponding hardware. In other words, the parameters should be converted to a format that is well suited to hardware implementation.

The format chosen in the present work is 2's complement fixed-point, because of the simplicity and smaller size of fixed-point arithmetic units in comparison to other formats. In this format, a bits are used for the integer part of a variable and b bits are assigned to the fraction part, so a + b bits are used for each variable. The bits of the integer part have values 2^0, 2^1, 2^2, and so on, and those of the fraction part have values 2^-1, 2^-2, 2^-3, and so on. Negative numbers are represented in the same way as in 2's complement. A sketch of this representation is given below.
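As an illustration, a conversion between double and this format (with a = 5 and b = 4; the helper names are hypothetical, not from the paper) could look like:

    #include <cmath>
    #include <cstdint>

    const int A_BITS = 5, B_BITS = 4;                           // 9-bit word
    const int32_t FXP_MAX =  (1 << (A_BITS + B_BITS - 1)) - 1;  // +15.9375
    const int32_t FXP_MIN = -(1 << (A_BITS + B_BITS - 1));      // -16.0

    // double -> fixed point: scale by 2^b, round, saturate to 9 bits.
    int32_t toFixed(double x) {
        int32_t v = static_cast<int32_t>(std::lround(x * (1 << B_BITS)));
        if (v > FXP_MAX) v = FXP_MAX;
        if (v < FXP_MIN) v = FXP_MIN;
        return v;
    }

    // fixed point -> double: divide by 2^b.
    double toDouble(int32_t v) { return static_cast<double>(v) / (1 << B_BITS); }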

Figure 4 presents examples of numbers in the fixed-point format with a = 5 and b = 4, along with the corresponding arithmetic operations.

Figure 4. Fixed-point format used for hardware implementation.

Arithmetic operations on fixed-point variables are similar to those on 2's complement numbers. For example, to add two numbers, they are added in the same manner as 2's complement numbers, and the result is an exact fixed-point value. The case for multiplication is a bit different. If two fixed-point numbers with a = 5 and b = 4 are multiplied, the 2's complement multiplication result has 18 bits; the fixed-point result then has 10 bits of integer part and 8 bits of fractional part. In this work, when two numbers are multiplied, only 4 bits of the fractional part of the result are kept and the 4 least-significant bits are discarded. This is done to limit the hardware size (see Figure 4). A corresponding multiply sketch follows.
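A sketch of this truncating multiply (assuming the 9-bit a = 5, b = 4 format above; the paper does not specify the exact hardware rounding) is:

    #include <cstdint>

    // Multiplying two 9-bit (a = 5, b = 4) values gives an 18-bit product
    // with 8 fraction bits; shifting right by 4 discards the 4 least-
    // significant fraction bits and returns to the common b = 4 format.
    int32_t fxpMul(int32_t x, int32_t y) {
        int32_t p = x * y;  // 18-bit 2's-complement product, 8 fraction bits
        return p >> 4;      // keep 4 fraction bits (arithmetic shift on sign)
    }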

As discussed above, using high precision leads to a format with a large number of bits and consequently a larger hardware area: when a system uses more bits, it needs wider buses and larger arithmetic units and registers in its data path. On the other hand, low precision causes performance degradation due to error in the desired operation of the neural network. However, because of the soft-computing nature of neural networks, some error is tolerable and the loss of performance can be ignored. There is therefore a trade-off between accuracy of operation and size of hardware. It is the system-level simulation that determines which values of a and b in the presented fixed-point format best resolve this trade-off.

For the system-level simulation, MATLAB code was developed. The operation of the fixed-point-based network is compared to that of the double-format-based network, and a and b have been chosen by setting the acceptable average error below 5%. An illustrative sketch of such a comparison is given below.
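The paper's comparison was scripted in MATLAB; the following C++ sketch only illustrates the idea of quantizing a reference value to an (a, b) format and measuring the resulting error (the value and names here are hypothetical):

    #include <cmath>
    #include <cstdio>

    // Quantize x to 2's-complement fixed point with a integer and b
    // fraction bits, saturating at the format limits.
    double quantize(double x, int a, int b) {
        double scale = std::pow(2.0, b);
        double hi = (std::pow(2.0, a + b - 1) - 1) / scale;
        double lo = -std::pow(2.0, a + b - 1) / scale;
        double q = std::round(x * scale) / scale;
        return q < lo ? lo : (q > hi ? hi : q);
    }

    int main() {
        double ref = 0.7312;  // e.g. one double-format network output
        for (int b = 2; b <= 5; ++b) {
            double err = std::fabs(quantize(ref, 5, b) - ref) / ref;
            std::printf("a=5, b=%d: error %.2f%%\n", b, 100.0 * err);
        }
        return 0;
    }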

Figure 5 shows the average error of networks that use the data format of Figure 4 with a = 5 and b = 2, 3, 4, and 5.

Figure 5. System-level simulation with fixed a and varying b: average error per digit for a = 5 and b = 2, 3, 4, 5 (horizontal axis: digit; vertical axis: error).

These results suggest that the formats with b = 2 and b = 3 cause severe error. Both b = 4 and b = 5 give an acceptable average error level, and the difference between them is slight, so one can conclude that b = 4 is the best choice in this simulation.

Likewise, Figure 6 shows the output errors for networks that use data formats with b = 4 and a = 3, 4, 5, and 6. Again, a = 3 and a = 4 are not good choices, and a = 5 delivers the same performance as a = 6 with less area. Considering the above results, the optimum data format is a = 5 and b = 4. Using this format, the system is described in HDL.

Figure 6. System-level simulation with fixed b and varying a: average error per digit for b = 4 and a = 3, 4, 5, 6 (horizontal axis: digit; vertical axis: error).

4.2 HDL Description

The hardware structure of the network is shown in Figure 7. As in most digital systems, there is a controller and a data path. The data path contains the computational units, the controller issues the signals that enforce the correct sequence of computations, and the weight parameters are stored in ROMs that are part of the data path. The data path and controller are discussed below.

4.2.1 Data Path

The data path is composed of ROMs, neurons, and buses. The ROMs hold the obtained weight parameters in the chosen fixed-point format. The neural network has three types of neurons: input, hidden, and output. Input neurons act as registers; they receive data from the chip inputs and store them. Hidden neurons have the excitation function of Equation 1, and their activation function is the sigmoid. All inputs i_i are in fixed-point format, and the parameters w_i, c_i, and cos(ω) use the same format. To implement the square root and sigmoid functions, a lookup-table-based approach is used, sketched below.
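The paper does not give its table construction; one plausible sketch of such a lookup table for the sigmoid in the 9-bit a = 5, b = 4 format (an assumed construction, with hypothetical names) is:

    #include <cmath>
    #include <cstdint>

    const int WORD_BITS = 9, FRAC_BITS = 4;
    const int TABLE_SIZE = 1 << WORD_BITS;   // one entry per 9-bit code

    int32_t sigmoidLut[TABLE_SIZE];

    // Precompute the quantized sigmoid for every representable input code;
    // in hardware this table would be a ROM addressed by the net value.
    void buildSigmoidLut() {
        for (int code = 0; code < TABLE_SIZE; ++code) {
            // interpret the 9-bit code as a signed 2's-complement value
            int v = (code < TABLE_SIZE / 2) ? code : code - TABLE_SIZE;
            double x = static_cast<double>(v) / (1 << FRAC_BITS);
            double s = 1.0 / (1.0 + std::exp(-x));
            sigmoidLut[code] =
                static_cast<int32_t>(std::lround(s * (1 << FRAC_BITS)));
        }
    }

    // Reading the table is a single indexed access on the 9-bit net value.
    int32_t sigmoidFx(int32_t net) { return sigmoidLut[net & (TABLE_SIZE - 1)]; }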

Figure 7. Block diagram of the CSFN network system: the data path contains the input layer (neurons 1-15), the hidden layer (neurons 1-5), and the output layer (neurons 1-10); the controller drives them via control signals, and input and output signals connect the system to its environment.

4.2.2 Controller

The controller keeps track of the appropriate sequence of operations. First, data are loaded from the input pads into the input neuron registers. The controller then lets the hidden neurons compute their outputs by allocating enough clock cycles. The outputs of the hidden neurons are then moved to the output neurons, computed in the required number of clock cycles, and the final results are driven to the outputs of the network. A behavioral sketch of this sequencing is given below.
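A behavioral C++ sketch of this sequence (states and cycle counts are assumptions for illustration, not the paper's actual controller) is:

    // Assumed per-layer cycle budgets for illustration only.
    const int HIDDEN_CYCLES = 16, OUTPUT_CYCLES = 8;

    enum class State { LOAD_INPUTS, RUN_HIDDEN, RUN_OUTPUT, DONE };

    // Next-state function of the control sequence described above.
    State nextState(State s, int cyclesInState) {
        switch (s) {
        case State::LOAD_INPUTS:   // pixels latched into input registers
            return State::RUN_HIDDEN;
        case State::RUN_HIDDEN:    // hidden CSFN neurons get enough cycles
            return cyclesInState < HIDDEN_CYCLES ? s : State::RUN_OUTPUT;
        case State::RUN_OUTPUT:    // linear output neurons accumulate
            return cyclesInState < OUTPUT_CYCLES ? s : State::DONE;
        default:
            return State::DONE;    // results held on the network outputs
        }
    }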

4.3 Synthesis and Post-synthesis Simulation

The HDL description of the designed network is synthesized using a library of standard cells in a 0.5 micron technology. The synthesis tool reports the area and delay of the system: the overall area is 10964 mil², with 10666 mil² for the data path and 35 mil² for the controller. The critical-path delay of the system is 54.58 ns, which results in a maximum clock frequency of approximately 18 MHz for the network.

Post-synthesis simulation is performed to ensure the correct operation of the synthesized system. Table 1 presents the output results of the simulation of the synthesized core. These outputs correspond to those obtained in the system-level and pre-synthesis simulations. Each column of the table is the digit whose pattern is applied to the input neurons, and the rows show the values of the output neurons. As seen, in each case the output of the network corresponding to the applied pattern has a high value (more than 0.7) and all others have a low value (less than 0.3).

Table 1. Summary of post-synthesis simulation.

4.4 Layout

The last step is to use the netlist of components generated by the synthesis tool for layout generation. The layout is produced using automatic place-and-route generation for standard-cell design. The total area of the system layout is 32394λ × 98698λ.

5 Conclusions

An implementation of a CSFN network employed in a pattern recognition application was presented. The theory of conic section function networks was discussed, and based on this theory a CSFN network was trained for the pattern recognition application. System-level simulation was performed to take the limitations of digital implementation into account. The steps of the design flow were then detailed through several simulation results. Finally, reports on the hardware implementation were presented.

As future work, a more complicated application will be chosen, with the goal of demonstrating CSFN performance in a real application using ASIC or FPGA implementations.

References:

[1] R. Lippmann, "An introduction to computing with neural nets," IEEE ASSP Magazine, vol. 4, no. 2, pp. 4-22, April 1987.

[2] M. D. Garris, C. L. Wilson, and J. L. Blue, "Neural network-based systems for handprint OCR applications," IEEE Transactions on Image Processing, vol. 7, no. 8, pp. 1097-1112, Aug. 1998.

[3] H. Osman and M. M. Fahmy, "Neural classifiers and statistical pattern recognition: applications for currently established links," IEEE Transactions on Systems, Man, and Cybernetics, Part B, vol. 27, no. 3, pp. 488-497, June 1997.

[4] G. Dorffner, "Unified framework for MLPs and RBFNs: Introducing conic section function networks," Cybernetics and Systems: An International Journal, vol. 25, pp. 511-554, 1994.

[5] T. Yildirim and J. S. Marsland, "Improved backpropagation training algorithm using conic section functions," Proceedings of the IEEE International Conference on Neural Networks (ICNN'97), Houston, Texas, USA, June 9-12, 1997.

Proceedings of the 6th WSEAS Int. Conf. on NEURAL NETWORKS, Lisbon, Portugal, June 16-18, 2005 (pp256-259)

