
Implementation of a 3-layer feedforward backpropagation Neural Network in Xilinx Spartan-3 Starter-Board

by Laoulakos Charilaos and Tzanoudakis Theodoros

The aim of this project is the design and implementation of a system simulating an NN on the Xilinx Spartan-3 Starter Board. The NN will be a 3-layer feedforward network trained with backpropagation. The specification that each layer consist of 120 neurons is considered not only excessive but also wrong, since such an NN would not produce correct results.

Design Flow

The process of modeling is a step toward the final system, an embedded system. A design flow is a sequence of processes, using CAD tools, that produces the final design. Figure 1 shows two design flows.

The first design flow includes modeling in MATLAB and simulation of the whole system in Simulink. Then VHDL is extracted, which may be simulated in Modelsim. Next comes synthesis, the transformation of the behavioral or structural model into a representation of logic elements and their interconnections; such representations are generally called netlists. After this step the netlist is simulated in Modelsim. The final step is the final design. If problems appear, the design flow is repeated.

In the second design flow the modeling is in C/C++ and the design is in VHDL. The next steps are similar to those of the first design flow.

These design flows have a lot in common: both begin from a model and end up with a final design, and both include steps such as simulation, synthesis, and Place & Route. But they also have important differences.

Figure 1: Design Flows


In the first design flow, the MATLAB modeling and the Simulink simulation allow a better understanding of the problem, and the VHDL extraction bypasses the step of engineering design. Such productivity benefits do not come for free: the extracted VHDL code cannot be altered by the engineering team, meaning it is not subject to optimization. An MHL study suggests that an implementation of the first design flow has half the speed and uses double the resources of one following the second design flow [1]. The second flow has the disadvantage that the transformation from C to VHDL is not automated; it is therefore more time-consuming, and the result must be tested for proper function. The main advantage of the second design flow is the ability to modify the design at all stages using optimization techniques.

Our proposal is a combination of the two design flows. The modeling of the NN is done in MATLAB. There is no automatic extraction of VHDL; instead, the system is designed in hardware by the engineering team. Thus both explicit comprehension of the stated problem (through MATLAB) is achieved and customized optimization is possible (the VHDL code is readable).

Modeling of NN

The artificial neural network (ANN), often simply called neural network (NN), is a processing model loosely derived from biological neurons. Neural networks are often used for classification or decision-making problems that do not have a simple or straightforward algorithmic solution. The beauty of a neural network is its ability to learn an input-to-output mapping from a set of training cases without explicit programming, and then to generalize this mapping to cases not seen previously.

Figure 2: Suggested Design Flow


● Neural Network Principles

A neural network is constructed from a number of individual units called neurons that are linked with each other via connections. Each individual neuron has a number of inputs, a processing node, and a single output, while each connection from one neuron to another is associated with a weight. Processing in a neural network takes place in parallel for all neurons. Each neuron constantly (in an endless loop) evaluates (reads) its inputs, calculates its local activation value according to a formula shown below, and produces (writes) an output value.

The activation function of a neuron a(I, W) is the weighted sum of its inputs, i.e. each input is multiplied by the associated weight and all these terms are added. The neuron’s output is determined by the output function o(I, W), for which numerous different models exist. In the simplest case, just thresholding is used for the output function. For our purposes, however, we use the non-linear “sigmoid” output function defined in Figure 3 and shown in Figure 4, which has superior characteristics for learning [2]. This sigmoid function approximates the Heaviside step function, with parameter ρ controlling the slope of the graph (usually set to 1).
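As a concrete reference, here is a minimal C sketch of a single neuron's activation and sigmoid output (a sketch assuming the common form o = 1 / (1 + e^(-a/ρ)); the function names are ours, not from the report):

#include <math.h>

/* Sigmoid output function; rho controls the slope (rho = 1 in the text). */
static double sigmoid(double a, double rho)
{
    return 1.0 / (1.0 + exp(-a / rho));
}

/* Activation a(I, W): weighted sum of the inputs; output o(I, W): sigmoid. */
static double neuron_output(const double *in, const double *w, int n)
{
    double a = 0.0;
    for (int i = 0; i < n; i++)
        a += in[i] * w[i];
    return sigmoid(a, 1.0);
}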

Figure 3: Individual Artificial Neuron

Figure 4: Sigmoid output function


● Feed-Forward Networks

A neural net is constructed from a number of interconnected neurons, which are usually arranged in layers. The outputs of one layer of neurons are connected to the inputs of the following layer. The first layer of neurons is called the “input layer”, since its inputs are connected to external data, for example sensors to the outside world. The last layer of neurons is called the “output layer”, accordingly, since its outputs are the result of the total neural network and are made available to the outside. These could be connected, for example, to robot actuators or external decision units. All neuron layers between the input layer and the output layer are called “hidden layers”, since their actions cannot be observed directly from the outside.

If all connections go from the outputs of one layer to the inputs of the following layer, and there are no connections within the same layer or connections from a later layer back to an earlier layer, then this type of network is called a "feedforward network". Feed-forward networks (Figure 5) are the simplest type of ANN.

For most practical applications, a single hidden layer is sufficient, so the typical NN for our purposes has exactly three layers:

○ Input layer (for example input from robot sensors)
○ Hidden layer (connected to input and output layer)
○ Output layer (for example output to robot actuators)

The number of neurons in the input and output layers is determined by the application. Unfortunately, there is no rule for the "right" number of hidden neurons. Too few hidden neurons will prevent the network from learning, since they have insufficient storage capacity. Too many hidden neurons will slow down the learning process because of the extra overhead. The right number of hidden neurons depends on the "complexity" of the given problem and has to be determined through experimentation.

We also connect every output from layer i to every input at layer i + 1. This is called a “fully connected” neural network. There is no need to leave out individual connections, since the same effect can be achieved by giving this connection a weight of zero. That way we can use a much more general and uniform network structure.
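A minimal C sketch of how such a fully connected 3-layer network can be laid out in memory (the matrix names anticipate w_in and w_out defined in the next section; the 1-5-1 sizes match the network implemented later and are otherwise illustrative):

#define NUM_IN  1
#define NUM_HID 5
#define NUM_OUT 1

/* Weight storage for a fully connected 3-layer feedforward network.
   An unused connection simply carries a weight of zero. */
typedef struct {
    double w_in [NUM_IN ][NUM_HID];   /* input  -> hidden */
    double w_out[NUM_HID][NUM_OUT];   /* hidden -> output */
} nn_weights_t;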

Apparently the whole intelligence of an NN is somehow encoded in the set of weights being used. What used to be a program is now reduced to a set of floating-point numbers. With sufficient insight, we could just "program" an NN by specifying the correct (or let us say working) weights. However, since this would be virtually impossible, even for networks of small complexity, we need another technique. The standard method is supervised learning, for example through error backpropagation (see the Backpropagation section below).

Figure 5: Fully connected feedforward network


The same task is repeatedly run by the NN and the outcome is judged by a supervisor. Errors made by the network are backpropagated from the output layer via the hidden layer to the input layer, amending the weights of each connection.

For example:

Weights from the input layer to the hidden layer, summarized as matrix w_in(i,j) (weight of the connection from input neuron i to hidden neuron j).

Weights from the hidden layer to the output layer, summarized as matrix w_out(i,j) (weight of the connection from hidden neuron i to output neuron j).

No weights are required from sensors to the first layer or from the output layer to actuators. These weights are just assumed to be always 1. All other weights are normalized to the range [–1 .. +1].

Figure 6: A simple 3-layer feedforward NN


Calculation of the output function starts with the input layer on the left and propagates through the network. For the input layer, there is one input value (sensor value) per input neuron. Each input data value is used directly as the neuron activation value:

a(n_in1) = o(n_in1) = 1.00
a(n_in2) = o(n_in2) = 0.50

For all subsequent layers, we first calculate the activation function of each neuron as a weighted sum of its inputs, and then apply the sigmoid output function. The first neuron of the hidden layer has the following activation and output values:

a(n_hid1) = 1.00 · 0.2 + 0.50 · 0.3 = 0.35
o(n_hid1) = 1 / (1 + e^(-0.35)) = 0.59

The subsequent steps for the remaining two layers are shown in Figure 7, with the activation values printed in each neuron symbol and the output values below, always rounded to two decimal places. Once the values have percolated through the feed-forward network, they will not change until the input values change. Obviously this is not true for networks with feedback connections. Program A shows the implementation of the feed-forward process; this program already takes care of two additional so-called "bias neurons", which are required for backpropagation learning.
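Program A itself is not reproduced in this excerpt; the following minimal C sketch of the feed-forward step (our names, with the bias neuron modelled as an extra input fixed at 1.0) illustrates the calculation just described:

#include <math.h>

/* Evaluate one layer: for each neuron, form the weighted sum of the
   previous layer's outputs (plus the bias weight w[j][n_in]) and apply
   the sigmoid output function. */
static void feed_forward_layer(int n_in, int n_out,
                               const double in[n_in],
                               const double w[n_out][n_in + 1],
                               double out[n_out])
{
    for (int j = 0; j < n_out; j++) {
        double a = w[j][n_in];              /* bias neuron input is 1.0 */
        for (int i = 0; i < n_in; i++)
            a += in[i] * w[j][i];           /* activation: weighted sum */
        out[j] = 1.0 / (1.0 + exp(-a));     /* sigmoid output */
    }
}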

● Backpropagation

A large number of different techniques exist for learning in neural networks. These include supervised and unsupervised techniques, depending on whether a "teacher" presents the correct answer to a training case or not, as well as on-line and off-line learning, depending on whether the system evolves inside or outside the execution environment.

Classification networks with the popular backpropagation learning method [3], a supervised off-line technique, can be used to identify a certain situation from the network input and produce a corresponding output signal. The drawback of this method is that a complete set of all relevant input cases, together with their solutions, has to be presented to the NN. Another popular method, requiring only incremental feedback for input/output pairs, is reinforcement learning [4]. This on-line technique can be seen as either supervised or unsupervised, since the feedback signal only refers to the network's current performance and does not provide the desired network output. In the following, the backpropagation method is presented.

A feed-forward neural network starts with random weights and is presented with a number of test cases called the training set. The network's outputs are compared with the known correct results for the particular set of input values, and any deviations (error function) are propagated back through the net. Having done this for a number of iterations, the NN has hopefully learned the complete training set and can now produce the correct output for each input pattern in the training set.

Figure 7: An example


The real hope, however, is that the network is able to generalize, which means it will be able to produce similar outputs corresponding to similar input patterns it has not seen before. Without the capability of generalization, no useful learning can take place, since we would simply store and reproduce the training set.

The backpropagation algorithm works as follows (a skeleton of this loop is sketched below):

1. Initialize network with random weights.
2. For all training cases:
   a. Present training inputs to network and calculate output.
   b. For all layers (starting with output layer, back to input layer):
      i. Compare network output with correct output (error function).
      ii. Adapt weights in current layer.
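A skeleton of this procedure in C (all type and helper names are hypothetical placeholders for the steps listed above):

typedef struct nn nn_t;   /* network weights, e.g. the layout sketched earlier */
typedef struct { const double *input; const double *desired; } sample_t;

void init_random_weights(nn_t *net);                     /* step 1  */
void feed_forward(nn_t *net, const double *input);       /* step 2a */
void back_propagate(nn_t *net, const double *desired);   /* step 2b */

void train(nn_t *net, const sample_t *set, int n, int epochs)
{
    init_random_weights(net);
    for (int e = 0; e < epochs; e++)
        for (int s = 0; s < n; s++) {
            feed_forward(net, set[s].input);
            back_propagate(net, set[s].desired);  /* output layer back to input */
        }
}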

For implementing this learning algorithm, we do know what the correct results for the output layer should be, because they are supplied together with the training inputs. However, it is not yet clear for the other layers, so let us do this step by step.

Firstly, we look at the error function. For each output neuron, we compute the difference between the actual output value out_i and the desired output d_out,i.

For each output neuron the error is

E_out,i = d_out,i − out_i

and for the total network error we calculate the sum of the squared differences:

E = Σ_i (E_out,i)²

The next step is to adapt the weights, which is done by a gradient descent approach: each weight is changed against the gradient of the error, Δw = −n · ∂E/∂w.

So the adjustment of each weight will be proportional to the contribution of the weight to the error, with the magnitude of the change determined by the constant n. This can be achieved by the following formulas [3]:

diff_out,i = (o(n_out,i) − d_out,i) · (1 − o(n_out,i)) · o(n_out,i)

Δw_out,k,i = −2 · n · diff_out,i · input_k(n_out,i)
           = −2 · n · diff_out,i · o(n_hid,k)
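Transcribed directly into C, the update of the hidden-to-output weights reads (a minimal sketch; the array names are ours, and eta is the learning constant n):

#define NUM_HID 5
#define NUM_OUT 1

/* Gradient-descent update of w_out using the formulas above. */
void update_output_weights(double w_out[NUM_HID][NUM_OUT],
                           const double o_hid[NUM_HID],   /* o(n_hid,k)   */
                           const double o_out[NUM_OUT],   /* o(n_out,i)   */
                           const double d_out[NUM_OUT],   /* desired out  */
                           double eta)
{
    for (int i = 0; i < NUM_OUT; i++) {
        double diff = (o_out[i] - d_out[i])
                      * (1.0 - o_out[i]) * o_out[i];      /* diff_out,i   */
        for (int k = 0; k < NUM_HID; k++)
            w_out[k][i] += -2.0 * eta * diff * o_hid[k];  /* Δw_out,k,i   */
    }
}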

● Suggested NN for implementation

The project description does not ask for a specific problem to be solved, so we choose the problem of function approximation, which is one of the most powerful uses of NNs. A major factor in this choice was the lack of training and testing data: in function approximation this problem does not exist, since sampling a given function provides all the necessary data. The NN is trained to approximate a one-dimensional nonlinear function based on sample data. The 3-layer feedforward architecture with a hidden layer of sigmoidal neurons has been proven capable of approximating any function with a finite number of discontinuities, in any dimension (i.e., with any number of inputs n), provided "enough" neurons are present in the hidden layer. In general, a more complex function requires more sigmoidal neurons.


Figure 8 shows a graph of the sample data to be used for training.

From the training data it is clear that the neural network must have one input and one output, since it approximates a scalar function of a scalar variable. For the hidden layer, it is estimated that five sigmoid neurons should suffice. Thus, the neural network is of the form shown in Figure 9.

Modeling in MATLAB

● The NN is implemented in MATLAB with the following code:

% M-file to train and simulate feed-forward backpropagation neural network.
X = -1 : 0.1 : 1;
Y = [-0.960 -0.577 -0.073  0.377  0.641  0.660  0.461 ...
      0.134 -0.201 -0.434 -0.500 -0.393 -0.165  0.099 ...
      0.307  0.396  0.345  0.182 -0.031 -0.219 -0.320];
pr = [-1 1]; m1 = 5; m2 = 1;
% Initialize 2-layer feed-forward network:
net_ff = newff (pr, [m1 m2], {'logsig' 'purelin'});
net_ff = init (net_ff);   % Default Nguyen-Widrow initialization
% Training:
net_ff.trainParam.goal   = 0.02;
net_ff.trainParam.epochs = 350;
net_ff = train (net_ff, X, Y);
% Simulation:
X_sim = -1 : 0.01 : 1;
Y_nn  = sim (net_ff, X_sim);
figure
plot(X, Y, '+'); hold on
plot(X_sim, Y_nn, 'r-'); hold off

Figure 8: Training Data

Figure 9: Form of the NN

Training the NN in MATLAB produces the following weights:

inweights  = [3.2887, -3.4715, 3.4995,  3.4946, 3.4867]
outweights = [0.4737, -0.0108, 0.0012, -0.0025, 0.0182]

Architecture and Implementation Issues

● Data Representation - Floating Point vs Fixed Point (area vs. precision)

There are certain design tradeoffs which must be dealt with in order to implement Neural Networks on FPGAs. One major tradeoff is area vs. precision: how to balance the need for numeric precision, which is important for network accuracy and speed of convergence, against the cost of the additional logic area (i.e. FPGA resources) associated with increased precision. Standard-precision floating-point would be the ideal numeric representation to use because it offers the greatest amount of precision (i.e. minimal quantization error) and matches the representation used when simulating Neural Networks on general-purpose microprocessors. However, due to the limited resources available on an FPGA, standard floating-point may not be as feasible as more area-efficient numeric representations, such as 16- or 32-bit fixed-point.

Holt and Baker [5] showed that 16-bit fixed-point was the minimum allowable precision for the backpropagation algorithm. For an MLP using the BP algorithm, they showed through simulations and theoretical analysis that 16-bit fixed-point (1 sign bit, 3 bits left and 12 bits right of the radix point) was the minimum allowable range-precision, assuming that both input and output were normalized between [0,1] and a sigmoid transfer function was used.
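For reference, a minimal C sketch of that 16-bit format (1 sign bit, 3 integer bits, 12 fractional bits, i.e. Q3.12; the helper names are ours):

#include <stdint.h>
#include <math.h>

typedef int16_t q3_12_t;   /* 1 sign + 3 integer + 12 fraction bits */
#define Q_FRAC 12

static q3_12_t to_q3_12(double x)    { return (q3_12_t)lrint(x * (1 << Q_FRAC)); }
static double  from_q3_12(q3_12_t q) { return (double)q / (1 << Q_FRAC); }

/* Fixed-point multiply: widen to 32 bits, then shift back to Q3.12. */
static q3_12_t q_mul(q3_12_t a, q3_12_t b)
{
    return (q3_12_t)(((int32_t)a * (int32_t)b) >> Q_FRAC);
}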

Figure 10: Matlab Screen Calculating the weights of NN.


Ligon III et al. [6] have also shown the density advantage of fixed-point over floating-point for older-generation Xilinx 4020E FPGAs, by showing that the space/time requirements of 32-bit fixed-point adders and multipliers were smaller than those of their 32-bit floating-point equivalents.

Since the size of an FPGA-based MLP-BP is proportional to the multipliers used, it is clear that, given an FPGA's finite resources, a 32-bit signed (2's complement) fixed-point representation will allow larger ANNs to be implemented than could be accommodated with 32-bit IEEE floating-point (a 32-bit floating-point multiplier can be implemented on a Xilinx Virtex-II or Spartan-3 FPGA using four of the dedicated multiplier blocks plus CLB resources). However, while 32-bit fixed-point representation allows high processor density, the quantization error of 32-bit floating-point representation is negligible. Validating an architecture on an FPGA using 32-bit floating-point arithmetic may also be easier than with fixed-point arithmetic, since a software version of the architecture can be run on a Personal Computer with 32-bit floating-point arithmetic. As such, its use is justifiable if the relative loss in processing density is negligible in comparison. [7]

● Approximation of sigmoid function

There are several methods proposed for the approximation of the sigmoid function, such as table-driven methods, CORDIC algorithms, and polynomial approximation [8].

An alternative to the lookup-table approach for implementing the sigmoid function in digital VLSI technology is Piece-Wise Linear approximation (PWL). For all the partial and parallel architectures a 3-piece linearly approximated sigmoid function is constructed. With p = x · w, the approximation function can be described as follows [9]:

f(p) = 0                     for p ≤ −8
f(p) = (8 − |p|) / 64        for −8 < p ≤ −1
f(p) = p / 4                 for −1 < p < 1
f(p) = 1 − (8 − |p|) / 64    for 1 ≤ p < 8
f(p) = 1                     for p ≥ 8
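Because a software reference model makes it easy to validate the hardware output (cf. the floating-point discussion above), here is a small C model of the same piece-wise function (a sketch that mirrors the VHDL below; the function name is ours):

#include <math.h>

static double pwl_sigmoid(double prod)   /* prod = x * w */
{
    if      (prod <= -8.0) return 0.0;
    else if (prod <= -1.0) return (8.0 - fabs(prod)) / 64.0;
    else if (prod <   1.0) return prod / 4.0;
    else if (prod <   8.0) return 1.0 - (8.0 - fabs(prod)) / 64.0;
    else                   return 1.0;
}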

The function is implemented in VHDL as follows:

--- Project neuron (main code): -----------------------------------------
LIBRARY ieee;
USE ieee.std_logic_1164.all;
USE ieee.std_logic_arith.all;   -- package needed for SIGNED
USE ieee.math_real.all;
USE ieee.std_logic_signed.ALL;
--------------------------------------------------------------------------
ENTITY neuron IS
    PORT ( x : IN  std_logic_vector(0 to 31);
           w : IN  std_logic_vector(0 to 31);
           y : OUT std_logic_vector(0 to 31));
END neuron;
--------------------------------------------------------------------------
ARCHITECTURE neuron OF neuron IS
BEGIN
    PROCESS (x, w)
        VARIABLE prod, f    : REAL;
        VARIABLE xr, wr, yr : REAL;
        VARIABLE yf         : std_logic_vector(0 to 31);
    BEGIN
        xr   := real(conv_INTEGER(x));
        wr   := real(conv_INTEGER(w));
        prod := xr * wr;
        if    (prod <= -8.0) then f := 0.0;
        elsif (prod <= -1.0) then f := (8.0 - abs(prod)) / 64.0;
        elsif (prod <   1.0) then f := prod / 4.0;
        elsif (prod <   8.0) then f := 1.0 - (8.0 - abs(prod)) / 64.0;
        else                      f := 1.0;
        end if;
        yf := CONV_STD_LOGIC_VECTOR(integer(f), 32);
        y  <= yf;
    END PROCESS;
END neuron;

The VHDL code is not optimal. Since the aim of this course project is hardware and software integration, working code suffices; a more efficient VHDL implementation would produce better results.

● FSL Bus

Generally, there are two ways to integrate a customized IP core into a MicroBlaze-based embedded soft processor system. One way is to connect the IP to the On-chip Peripheral Bus (OPB); the OPB is part of the IBM CoreConnect™ on-chip bus standard. The second way is to connect the user IP to the MicroBlaze dedicated Fast Simplex Link (FSL) bus system. If the application is time-critical, the user IP should be connected to the FSL bus system; otherwise, it can be connected as a slave or master on the OPB. If the customized core is connected to the dedicated FSL interface, it is then possible to use predefined C functions (like microblaze_nbread_datafsl(val, id) or microblaze_nbwrite_datafsl(val, id)) to use the user core in the application software. [10]
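As a minimal usage sketch of those predefined functions (the wrapper function and the use of slot 0 are our assumptions; a robust application would use the blocking variants or check FIFO status before the non-blocking calls):

#include "mb_interface.h"

/* Send one operand word to an FSL-attached core in slot 0 and
   read one result word back. */
unsigned int fsl_round_trip(unsigned int operand)
{
    unsigned int result = 0;
    microblaze_nbwrite_datafsl(operand, 0);  /* non-blocking write to FSL0 */
    microblaze_nbread_datafsl(result, 0);    /* non-blocking read from FSL0 */
    return result;
}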

Furthermore, the Create and Import Peripheral Wizard adds an IP core (VHDL code) and creates an API driver.

Figure 11: API creation in IP import


This is the system produced:

● Serial or Parallel Architecture

One decision to be made is whether the IP core will be the whole Neural Network or a single Neuron. Obviously the implementation of the whole neural network better exploits the parallelism offered by hardware/software codesign: the result of the NN is calculated entirely in the hardware module. In this case of just five neurons the size of the reconfigurable fabric would suffice, but if a different function were the approximation target, more neurons would be required; with the parallel architecture there is an upper limit on the number of neurons, defined by the size of the reconfigurable fabric. Furthermore, with the serial architecture a higher degree of hardware/software integration is achieved, which is more pedagogically suitable.

Figure 12: Embedded processor system

Figure 13: Parallel Architecture Figure 14: Serial Architecture


● Detailed connection of IP to MicroBlaze

The IP logic is enhanced with the necessary FSL control signals, so the VHDL logic now is:

------------------------------------------------------------------------------
-- neuron - entity/architecture pair
------------------------------------------------------------------------------
-- Filename:      neuron
-- Version:       1.00.a
-- Description:   Example FSL core (VHDL).
-- Date:          Mon Jan 14 22:41:17 2008 (by Create and Import Peripheral Wizard)
-- VHDL Standard: VHDL'93
------------------------------------------------------------------------------
-- Naming Conventions:
--   active low signals:                   "*_n"
--   clock signals:                        "clk", "clk_div#", "clk_#x"
--   reset signals:                        "rst", "rst_n"
--   generics:                             "C_*"
--   user defined types:                   "*_TYPE"
--   state machine next state:             "*_ns"
--   state machine current state:          "*_cs"
--   combinatorial signals:                "*_com"
--   pipelined or register delay signals:  "*_d#"
--   counter signals:                      "*cnt*"
--   clock enable signals:                 "*_ce"
--   internal version of output port:      "*_i"
--   device pins:                          "*_pin"
--   ports:                                "- Names begin with Uppercase"
--   processes:                            "*_PROCESS"
--   component instantiations:             "<ENTITY_>I_<#|FUNC>"
------------------------------------------------------------------------------

library ieee;
use ieee.std_logic_1164.all;
use ieee.numeric_std.all;

-------------------------------------------------------------------------------------

-- Definition of Ports
-- FSL_Clk       : Synchronous clock
-- FSL_Rst       : System reset, should always come from FSL bus
-- FSL_S_Clk     : Slave asynchronous clock
-- FSL_S_Read    : Read signal, requiring next available input to be read
-- FSL_S_Data    : Input data
-- FSL_S_Control : Control bit, indicating the input data are control word
-- FSL_S_Exists  : Data Exist bit, indicating data exist in the input FSL bus
-- FSL_M_Clk     : Master asynchronous clock
-- FSL_M_Write   : Write signal, enabling writing to output FSL bus
-- FSL_M_Data    : Output data
-- FSL_M_Control : Control bit, indicating the output data are control word
-- FSL_M_Full    : Full bit, indicating output FSL bus is full

Figure 15: Detailed connection of IP to MicroBlaze


---------------------------------------------------------------------------------

------------------------------------------------------------------------------
-- Entity Section
------------------------------------------------------------------------------

entity neuron is
    port (
        -- DO NOT EDIT BELOW THIS LINE ---------------------
        -- Bus protocol ports, do not add or delete.
        FSL_Clk       : in  std_logic;
        FSL_Rst       : in  std_logic;
        FSL_S_Clk     : out std_logic;
        FSL_S_Read    : out std_logic;
        FSL_S_Data    : in  std_logic_vector(0 to 31);
        FSL_S_Control : in  std_logic;
        FSL_S_Exists  : in  std_logic;
        FSL_M_Clk     : out std_logic;
        FSL_M_Write   : out std_logic;
        FSL_M_Data    : out std_logic_vector(0 to 31);
        FSL_M_Control : out std_logic;
        FSL_M_Full    : in  std_logic
        -- DO NOT EDIT ABOVE THIS LINE ---------------------
    );

    attribute SIGIS : string;
    attribute SIGIS of FSL_Clk   : signal is "Clk";
    attribute SIGIS of FSL_S_Clk : signal is "Clk";
    attribute SIGIS of FSL_M_Clk : signal is "Clk";

end neuron;

------------------------------------------------------------------------------
-- Architecture Section
------------------------------------------------------------------------------

-- ENTITY neuron to implement your coprocessor

architecture EXAMPLE of neuron is

    -- Total number of input data.
    constant NUMBER_OF_INPUT_WORDS  : natural := 2;
    -- Total number of output data.
    constant NUMBER_OF_OUTPUT_WORDS : natural := 1;

    type STATE_TYPE is (Idle, Read_Inputs, Write_Outputs);
    signal state : STATE_TYPE;

    -- turn the ports of the neuron into signals
    signal x : std_logic_vector(0 to 31);
    signal w : std_logic_vector(0 to 31);
    -- CLK : IN STD_LOGIC;
    signal y : std_logic_vector(0 to 31);

begin
    -- CAUTION:
    -- The sequence in which data are read in and written out should be
    -- consistent with the sequence they are written and read in the
    -- driver's neuron.c file.

    FSL_S_Read  <= FSL_S_Exists   when state = Read_Inputs   else '0';
    FSL_M_Write <= not FSL_M_Full when state = Write_Outputs else '0';
    FSL_M_Data  <= y;

    The_SW_accelerator : process (FSL_Clk) is
        VARIABLE prod, f    : REAL;
        VARIABLE xr, wr, yr : REAL;
        VARIABLE yf         : std_logic_vector(0 to 31);
    begin  -- process The_SW_accelerator
        if FSL_Clk'event and FSL_Clk = '1' then    -- Rising clock edge
            if FSL_Rst = '1' then                  -- Synchronous reset (active high)
            else
                xr   := real(conv_INTEGER(x));
                wr   := real(conv_INTEGER(w));
                prod := xr * wr;
                if    (prod <= -8.0) then f := 0.0;
                elsif (prod <= -1.0) then f := (8.0 - abs(prod)) / 64.0;
                elsif (prod <   1.0) then f := prod / 4.0;
                elsif (prod <   8.0) then f := 1.0 - (8.0 - abs(prod)) / 64.0;
                else                      f := 1.0;
                end if;
                yf := CONV_STD_LOGIC_VECTOR(integer(f), 32);
                y  <= yf;
            end if;
        end if;
    end process The_SW_accelerator;
end architecture EXAMPLE;

● API drivers

The neural core is connected to the FSL interface as shown in Figure 16.

For the FSL0 connection, the MicroBlaze is the Master on the FSL bus and the neuron core is the Slave. Thus, MicroBlaze controls the data sent on the FSL0 bus to the neuron core. For the FSL1 bus it is vice versa: the neuron core is the Master and the MicroBlaze the Slave, so the neuron core controls the data on the FSL1 bus. [10]

So the API drivers are configured as follows:

//////////////////////////////////////////////////////////////////////////////
// Filename:    D:\TUC\embedded\Proj\drivers/neuron_v1_00_a/examples/neuron_v2_1_0_app.c
// Version:     1.00.a
// Description: neuron (new FSL core) Driver Example File
// Date:        Mon Jan 14 22:41:18 2008 (by Create and Import Peripheral Wizard)
//////////////////////////////////////////////////////////////////////////////

#include "neuron.h"
#include "xparameters.h"

/*
 * CAUTION:
 *
 * The sequence of writes and reads in this function should be consistent
 * with the sequence of reads or writes in the HDL implementation of this
 * coprocessor.
 */

// Instance name specific MACROs. Defined for each instance of the peripheral.
#define WRITE_NEURON_0(val) write_into_fsl(val, XPAR_FSL_NEURON_0_INPUT_1)
#define READ_NEURON_0(val)  read_from_fsl(val, XPAR_FSL_NEURON_0_OUTPUT_0)

void neuron_app(
    float* input_0,   /* Array size = 1 */
    float* input_1,   /* Array size = 1 */
    float* output_0   /* Array size = 1 */
)
{
    int i;

    // Start writing into the FSL bus
    for (i = 0; i < 1; i++)
    {
        WRITE_NEURON_0(input_0[i]);
        WRITE_NEURON_0(input_1[i]);
    }

    // Start reading from the FSL bus
    for (i = 0; i < 1; i++)
    {
        READ_NEURON_0(output_0[i]);
    }
}

main()
{
    float input_0[1];
    float input_1[1];
    float output_0[1];

    // Call the macro with instance specific slot IDs
    neuron(XPAR_FSL_NEURON_0_INPUT_1,
           XPAR_FSL_NEURON_0_OUTPUT_0,
           input_0,
           input_1,
           output_0);
}

Figure 16: Including the neuron IP via the FSL interface onto MicroBlaze

and:

//////////////////////////////////////////////////////////////////////////////
// Filename:    D:\TUC\embedded\Proj\drivers/neuron_v1_00_a/src/neuron.h
// Version:     1.00.a
// Description: neuron (new FSL core) Driver Header File
// Date:        Mon Jan 14 22:41:17 2008 (by Create and Import Peripheral Wizard)
//////////////////////////////////////////////////////////////////////////////

#ifndef NEURON_H
#define NEURON_H

#include "xstatus.h"
#include "fsl.h"

#define write_into_fsl(val, id) putfsl(val, id)
#define read_from_fsl(val, id)  getfsl(val, id)

/*
 * A macro for accessing FSL peripheral.
 *
 * This example driver writes all the data in the input arguments
 * into the input FSL bus through blocking writes. FSL peripheral will
 * automatically read from the FSL bus. Once all the inputs
 * have been written, the output from the FSL peripheral is read
 * into output arguments through blocking reads.
 *
 * Arguments:
 *  input_slot_id   Compile time constant indicating FSL slot from
 *                  which coprocessor read the input data. Defined in
 *                  xparameters.h .
 *  output_slot_id  Compile time constant indicating FSL slot into
 *                  which the coprocessor write output data. Defined in
 *                  xparameters.h .
 *  input_0         An array of unsigned integers. Array size is 1
 *  input_1         An array of unsigned integers. Array size is 1
 *  output_0        An array of unsigned integers. Array size is 1
 *
 * Caveats:
 *  The output_slot_id and input_slot_id arguments must be
 *  constants available at compile time. Do not pass
 *  variables for these arguments.
 *
 *  Since this is a macro, using it too many times will
 *  increase the size of your application. In such cases,
 *  or when this macro is too simplistic for your
 *  application, you may want to create your own instance
 *  specific driver function (not a macro) using the
 *  macros defined in this file and the slot
 *  identifiers defined in xparameters.h . Please see the
 *  example code (neuron_app.c) for details.
 */

#define neuron(\
    input_slot_id,\
    output_slot_id,\
    input_0,\
    input_1,\
    output_0\
)\
{\
    int i;\
\
    for (i = 0; i < 1; i++)\
    {\
        write_into_fsl(input_0[i], 0);\
        write_into_fsl(input_1[i], 0);\
    }\
\
    for (i = 0; i < 1; i++)\
    {\
        read_from_fsl(output_0[i], 1);\
    }\
}

XStatus NEURON_SelfTest();

#endif

● Implementation in C

A program in ANSI C would define three arrays: w1, the weight vector (input to hidden layer); w2, the weight vector (hidden to output layer); and output, the results of the 5 neurons. A simple loop like:

for (i = 0; i < 5; i++) {
    output[i] = neuron(in, w1[i]);
    out += output[i] * w2[i];
}

is sufficient to implement the NN. The float variable out holds the result of the Neural Network.

Figure 17: Hardware/Software Codesign


Here is the main.c file which is added as a source in MicroBlaze:

/***************************** Include Files *********************************/
#include "xparameters.h"
#include "stdio.h"
#include "xutil.h"
#include "xgpio_l.h"
//#include "xtmrctr_l.h"
#include "xuartlite_l.h"
//#include "xintc_l.h"
#include "mb_interface.h"
#include "neuron.h"

/************************** Constant Definitions *****************************/
/************************** Variable Definitions *****************************/
volatile unsigned int exit_command = 0;
float w1[5] = { 3.2887, -3.4715, 3.4995, 3.4946, 3.4867 };
float w2[5] = { 0.4737, -0.0108, 0.0012, -0.0025, 0.0182 };
float output[5] = { 0.0, -0.0, 0.0, -0.0, 0.0 };
float in, out;
int i;

void uart_int_handler(void *baseaddr_p)
{
    char c;
    /*
     * While UART receive FIFO has data
     */
    while (!XUartLite_mIsReceiveEmpty(XPAR_RS232_BASEADDR)) {
        /*
         * Read a character
         */
        c = XUartLite_RecvByte(XPAR_RS232_BASEADDR);
        switch (c) {
        case 's':   /* SET command */
            xil_printf("give input:");
            scanf("%f", &in);
            out = 0.00;
            for (i = 0; i < 5; i++) {
                output[i] = neuron(in, w1[i]);
                out += output[i] * w2[i];
            }
            xil_printf("output: %f", out);
            break;
        case 'x':   /* EXIT command */
            exit_command = 1;
        }
    }
}

void InterruptTest(void)
{
    /*
     * Enable the Interrupts on the MicroBlaze
     */
    microblaze_enable_interrupts();
    /*
     * Enable the UartLite Interrupt.
     */
    XUartLite_mEnableIntr(XPAR_RS232_BASEADDR);
    /*
     * Wait for interrupts to occur until exit_command received from UART ISR
     */
    while (!exit_command)
        ;
    /*
     * Disable the Interrupts on the MicroBlaze
     */
    microblaze_disable_interrupts();
}
/*
 * End user-supplied interrupt test routine
 */
//====================================================
int main (void)
{
    InterruptTest();
    return 0;
}


References

1. Dollas Apostolos, "Embedded Microprocessor Systems", Lecture Notes v. 1.1, 2006.

2. Krose Ben, Patrick van der Smagt, "An Introduction to Neural Networks", Eighth edition, 1996.

3. Rumelhart, D., McClelland, J. (Eds.), "Parallel Distributed Processing", 2 vols., MIT Press, Cambridge MA, 1986.

4. Sutton, R., Barto, A., "Reinforcement Learning: An Introduction", MIT Press, Cambridge MA, 1998.

5. J.L. Holt and T.E. Baker, "Backpropagation simulations using limited precision calculations", in International Joint Conference on Neural Networks (IJCNN-91), volume 2, pp. 121–126, Seattle, WA, USA, July 8-14, 1991.

6. Ligon et al., "A Re-evaluation of the Practicality of Floating-Point Operations on FPGAs", FCCM, 1998.

7. Medhat Moussa, Shawki Areibi, Kristian Nichols, "On the arithmetic precision for implementing back-propagation networks on FPGA: a case study", Chapter 2 of "FPGA Implementations of Neural Networks", A. R. Omondi and J. C. Rajapakse (eds.), pp. 37–61, 2006.

8. Amos R. Omondi, Jagath C. Rajapakse and Mariusz Bajger, "FPGA Neurocomputers", Chapter 1 of "FPGA Implementations of Neural Networks", A. R. Omondi and J. C. Rajapakse (eds.), pp. 1–35, 2006.

9. Vijay Pandya, Shawki Areibi and Medhat Moussa, "A Handel-C Implementation of the Back-Propagation Algorithm On Field Programmable Gate Arrays".

10. Hans-Peter Rosinger, "Connecting Customized IP to the MicroBlaze Soft Processor Using the Fast Simplex Link (FSL) Channel", Xilinx Application Note XAPP529.


EDK PROJECT REPORT

TABLE OF CONTENTS

Overview

Block Diagram

External Ports

Processor microblaze_0 memory map

Busses: dlmb, fsl_v20_0, fsl_v20_1, ilmb, mb_opb

Memory: lmb_bram, opb_bram_if_cntlr_1_bram

Peripherals: RS232, neuron_0

IP: dcm_0

Timing Information

Overview

Generated on:      Fri Jan 18 01:42:50 2008
EDK Version:       9.1.02
FPGA Family:       spartan3
Device:            xc3s200ft256-4
# IP Instantiated: 14
# Processors:      1
# Busses:          5


Block Diagram

[Block diagram image]


External Ports

These are the external ports defined in the MHS file.

Attributes Key: the attributes are obtained from the SIGIS and IOB_STATE parameters set on the PORT in the MHS file.
  CLK        indicates Clock ports (SIGIS = CLK)
  INTR       indicates Interrupt ports (SIGIS = INTR)
  RESET      indicates Reset ports (SIGIS = RST)
  BUF or REG indicates ports that instantiate or infer IOB primitives (IOB_STATE = BUF or REG)

#  NAME                 DIR  [LSB:MSB]  SIG              ATTRIBUTES
0  sys_rst_pin          I    1          sys_rst_s        RESET
1  fpga_0_RS232_RX_pin  I    1          fpga_0_RS232_RX
2  fpga_0_RS232_TX_pin  O    1          fpga_0_RS232_TX
3  sys_clk_pin          I    1          dcm_clk_s        CLK

Processors

microblaze_0 (MicroBlaze): the MicroBlaze 32-bit soft processor

PORT LIST

The ports listed here are only those connected in the MHS file. Refer to the IP documentation for complete information about module ports.

# NAME DIR [LSB:MSB] SIGNAL

1 INTERRUPT I 1 RS232_Interrupt

Bus Interfaces

MASTERSHIP NAME STD BUS P2P

MASTER DLMB LMB dlmb dlmb_cntlr

MASTER ILMB LMB ilmb ilmb_cntlr

MASTER DOPB OPB mb_opb NA

MASTER IOPB OPB mb_opb NA

MASTER MFSL0 FSL fsl_v20_0 neuron_0

SLAVE SFSL0 FSL fsl_v20_1 neuron_0


General

IP Core microblaze

Version 6.00.b

Driver API

Parameters
These are parameters set for this module. Refer to the IP documentation for complete information about module parameters. Parameters marked with yellow indicate parameters set by the user. Parameters marked with blue indicate parameters set by the system.

Name                      Description                                              Value
C_FAMILY                  Device Family                                            spartan3
C_INSTANCE                Instance Name                                            microblaze_0
C_DCACHE_BASEADDR         D-Cache Base Address                                     0x00000000
C_DCACHE_HIGHADDR         D-Cache High Address                                     0x3FFFFFFF
C_ICACHE_BASEADDR         I-Cache Base Address                                     0x00000000
C_ICACHE_HIGHADDR         I-Cache High Address                                     0x3FFFFFFF
C_ADDR_TAG_BITS           Number of I-Cache Address Tag Bits                       0
C_ALLOW_DCACHE_WR         Enable D-Cache Writes                                    1
C_ALLOW_ICACHE_WR         Enable I-Cache Writes                                    1
C_ICACHE_LINE_LEN         Instruction Cache Line Length                            4
C_ICACHE_USE_FSL          Enable Xilinx Cache Links for I-Cache                    1
C_ILL_OPCODE_EXCEPTION    Enable Illegal Instruction Exception                     0
C_INTERRUPT_IS_EDGE       Sense Interrupt on Edge vs. Level                        1
C_IOPB_BUS_EXCEPTION      Enable Instruction-side OPB Exception                    0
C_I_LMB                   Use instruction-side Local Memory Bus                    1
C_I_OPB                   Use instruction-side On-Chip Peripheral Bus              1
C_NUMBER_OF_PC_BRK        Number of PC Breakpoints                                 1
C_AREA_OPTIMIZED          Select implementation to optimize area (with lower instruction throughput)  1
C_CACHE_BYTE_SIZE         Size of the I-Cache in Bytes                             8192
C_DATA_SIZE               C_DATA_SIZE                                              32
C_DCACHE_ADDR_TAG         Number of D-Cache Address Tag Bits                       0
C_DCACHE_BYTE_SIZE        Size of D-Cache in Bytes                                 8192
C_DCACHE_LINE_LEN         Data Cache Line Length                                   4
C_DCACHE_USE_FSL          Enable Xilinx Cache Links for D-Cache                    1
C_DEBUG_ENABLED           Enable MicroBlaze Debug Module Interface                 0
C_DIV_ZERO_EXCEPTION      Enable Integer Divide-by-zero Exception                  0
C_DOPB_BUS_EXCEPTION      Enable Data-side OPB Exception                           0
C_DYNAMIC_BUS_SIZING      C_DYNAMIC_BUS_SIZING                                     1
C_D_LMB                   Use data-side Local Memory Bus                           1
C_D_OPB                   Use data-side On-Chip Peripheral Bus                     1
C_EDGE_IS_POSITIVE        Sense Interrupt on Rising vs. Falling Edge               1
C_FPU_EXCEPTION           Enable Floating Point Unit Exceptions                    0
C_FSL_DATA_SIZE           FSL Link Data Width                                      32
C_FSL_LINKS               Number of FSL Links                                      1
C_NUMBER_OF_RD_ADDR_BRK   Number of Read Address Watchpoints                       0
C_NUMBER_OF_WR_ADDR_BRK   Number of Write Address Watchpoints                      0
C_OPCODE_0x0_ILLEGAL                                                               0
C_PVR                     Specifies Processor Version Register                     0
C_PVR_USER1               Specify USER1 Bits in Processor Version Register         0x00
C_PVR_USER2               Specify USER2 Bits in Processor Version Register         0x00000000
C_RESET_MSR               Specify Reset Value for Select MSR Bits                  0x00000000
C_SCO                     C_SCO                                                    0
C_UNALIGNED_EXCEPTIONS    Enable Unaligned Data Exception                          0
C_USE_BARREL              Enable Barrel Shifter                                    0
C_USE_DCACHE              Enable Data Cache                                        0
C_USE_DIV                 Enable Integer Divider                                   0
C_USE_FPU                 Enable Floating Point Unit                               0
C_USE_HW_MUL              Enable Integer Multiplier                                1
C_USE_ICACHE              Enable Instruction Cache                                 0
C_USE_MSR_INSTR           Enable Additional Machine Status Register Instructions  1
C_USE_PCMP_INSTR          Enable Pattern Comparator                                1

MEMORY MAP

D = Data addressable, I = Instruction addressable. All ranges are C_BASEADDR:C_HIGHADDR.

D I   BASE        HIGH        MODULE
D     0x00000000  0x0000FFFF  dlmb_cntlr
  I   0x00000000  0x0000FFFF  ilmb_cntlr
D I   0x40600000  0x4060FFFF  RS232
D I   0x40610000  0x4061FFFF  opb_bram_if_cntlr_1

Post Synthesis Device Utilization
Device utilization information is not available for this IP. Run platgen to generate synthesis information.

Busses

dlmb (Local Memory Bus (LMB) 1.0): 'The LMB is a fast, local bus for connecting MicroBlaze I and D ports to peripherals and BRAM'

PORT LIST

The ports listed here are only those connected in the MHS file. Refer to the IP documentation for complete information about module ports.

# NAME DIR [LSB:MSB] SIGNAL

1 SYS_Rst  I 1 sys_rst_s
2 LMB_Clk  I 1 sys_clk_s

Bus Connections

TYPE NAME BIF

MASTER microblaze_0 DLMB

SLAVE dlmb_cntlr SLMB

General

IP Core lmb_v10

Version 1.00.a

Parameters
These are parameters set for this module. Refer to the IP documentation for complete information about module parameters. Parameters marked with yellow indicate parameters set by the user. Parameters marked with blue indicate parameters set by the system.

Name              Description                 Value
C_EXT_RESET_HIGH  Active High External Reset  1
C_LMB_AWIDTH      LMB Address Bus Width       32
C_LMB_DWIDTH      LMB Data Bus Width          32
C_LMB_NUM_SLAVES  Number of Bus Slaves        1

Post Synthesis Device Utilization
Device utilization information is not available for this IP. Run platgen to generate synthesis information.

fsl_v20_0 (Fast Simplex Link (FSL) Bus): Fast Simplex Link (FSL) is a fast uni-directional point-to-point communication channel

PORT LIST

The ports listed here are only those connected in the MHS file. Refer to the IP documentation for complete information about module ports.

# NAME DIR [LSB:MSB] SIGNAL

1 FSL_Clk  I 1 sys_clk_s
2 SYS_Rst  I 1 sys_rst_s

Bus Connections

TYPE NAME BIF

MASTER microblaze_0 MFSL0

SLAVE neuron_0 SFSL

General

IP Core fsl_v20

Version 2.10.a

Parameters
These are parameters set for this module. Refer to the IP documentation for complete information about module parameters. Parameters marked with yellow indicate parameters set by the user. Parameters marked with blue indicate parameters set by the system.

Name              Description                          Value
C_ASYNC_CLKS      FIFO in FSL operates Asynchronously  0
C_EXT_RESET_HIGH  External Reset Active High           1
C_FSL_DEPTH       FSL FIFO Depth                       16
C_FSL_DWIDTH      FSL Data Bus Width                   32
C_IMPL_STYLE      Use BRAMs to Implement FIFO          0
C_USE_CONTROL     Propagate Control Bit                1

Post Synthesis Device Utilization
Device utilization information is not available for this IP. Run platgen to generate synthesis information.

fsl_v20_1 (Fast Simplex Link (FSL) Bus): Fast Simplex Link (FSL) is a fast uni-directional point-to-point communication channel

PORT LIST

The ports listed here are only those connected in the MHS file. Refer to the IP documentation for complete information about module ports.

# NAME DIR [LSB:MSB] SIGNAL

1 SYS_Rst  I 1 sys_rst_s
2 FSL_Clk  I 1 sys_clk_s

Bus Connections

TYPE NAME BIF

MASTER neuron_0 MFSL

SLAVE microblaze_0 SFSL0

General

IP Core fsl_v20

Version 2.10.a

Parameters
These are parameters set for this module. Refer to the IP documentation for complete information about module parameters. Parameters marked with yellow indicate parameters set by the user. Parameters marked with blue indicate parameters set by the system.

Name              Description                          Value
C_ASYNC_CLKS      FIFO in FSL operates Asynchronously  0
C_EXT_RESET_HIGH  External Reset Active High           1
C_FSL_DEPTH       FSL FIFO Depth                       16
C_FSL_DWIDTH      FSL Data Bus Width                   32
C_IMPL_STYLE      Use BRAMs to Implement FIFO          0
C_USE_CONTROL     Propagate Control Bit                1

Post Synthesis Device Utilization
Device utilization information is not available for this IP. Run platgen to generate synthesis information.

ilmb (Local Memory Bus (LMB) 1.0): 'The LMB is a fast, local bus for connecting MicroBlaze I and D ports to peripherals and BRAM'

PORT LIST

The ports listed here are only those connected in the MHS file. Refer to the IP documentation for complete information about module ports.

# NAME DIR [LSB:MSB] SIGNAL

1 SYS_Rst  I 1 sys_rst_s
2 LMB_Clk  I 1 sys_clk_s

Bus Connections

TYPE NAME BIF

MASTER microblaze_0 ILMB

SLAVE ilmb_cntlr SLMB

General

IP Core lmb_v10

Version 1.00.a

Parameters
These are parameters set for this module. Refer to the IP documentation for complete information about module parameters. Parameters marked with yellow indicate parameters set by the user. Parameters marked with blue indicate parameters set by the system.

Name              Description                 Value
C_EXT_RESET_HIGH  Active High External Reset  1
C_LMB_AWIDTH      LMB Address Bus Width       32
C_LMB_DWIDTH      LMB Data Bus Width          32
C_LMB_NUM_SLAVES  Number of Bus Slaves        1

Post Synthesis Device Utilization
Device utilization information is not available for this IP. Run platgen to generate synthesis information.


mb_opb (On-chip Peripheral Bus (OPB) 2.0): On-Chip Peripheral Bus V2.0 with OPB Arbiter (OPB_V20)

PORT LIST

The ports listed here are only those connected in the MHS file. Refer to the IP documentation for complete information about module ports.

# NAME DIR [LSB:MSB] SIGNAL

1 SYS_Rst  I 1 sys_rst_s
2 OPB_Clk  I 1 sys_clk_s

Bus Connections

TYPE NAME BIF

MASTER microblaze_0 DOPB

MASTER microblaze_0 IOPB

SLAVE RS232 SOPB

SLAVE opb_bram_if_cntlr_1 SOPB

General

IP Core opb_v20

Version 1.10.c

Parameters
These are parameters set for this module. Refer to the IP documentation for complete information about module parameters. Parameters marked with yellow indicate parameters set by the user. Parameters marked with blue indicate parameters set by the system.

Name              Description                                            Value
C_BASEADDR        Base Address                                           0xFFFFFFFF
C_HIGHADDR        High Address                                           0x00000000
C_DEV_BLK_ID      Device Block ID                                        0
C_DEV_MIR_ENABLE  Enable Module Identification Register (MIR)            0
C_DYNAM_PRIORITY  Use Dynamic Instead of Fixed Priority Bus Arbitration  0
C_EXT_RESET_HIGH  External Reset High                                    1
C_NUM_MASTERS     Number of Bus Masters                                  2
C_NUM_SLAVES      Number of Bus Slaves                                   2
C_OPB_AWIDTH      OPB Address Bus Width                                  32
C_OPB_DWIDTH      OPB Data Bus Width                                     32
C_PARK            Support Bus Parking                                    0
C_PROC_INTRFCE    Enable Access To OPB Arbiter Registers                 0
C_REG_GRANTS      Use Registered Instead of Combinational Grant Outputs  1
C_USE_LUT_OR      Use Only LUTs for OR Structure                         1

Post Synthesis Device Utilization
Device utilization information is not available for this IP. Run platgen to generate synthesis information.


Memory

lmb_bram (Block RAM (BRAM) Block): The BRAM Block is a configurable memory module that attaches to a variety of BRAM Interface Controllers.

PORT LIST

The ports listed here are only those connected in the MHS file. Refer to the IP documentation for complete information about module ports.

# NAME DIR [LSB:MSB] SIGNAL

Bus Interfaces

MASTERSHIP NAME STD BUS P2P

TARGET PORTA XIL ilmb_port ilmb_cntlr

TARGET PORTB XIL dlmb_port dlmb_cntlr

General

IP Core bram_block

Version 1.00.a

Driver API

Parameters
These are parameters set for this module. Refer to the IP documentation for complete information about module parameters. Parameters marked with yellow indicate parameters set by the user. Parameters marked with blue indicate parameters set by the system.

Name Value

C_FAMILY Device Family spartan3

C_MEMSIZE Size of BRAM(s) in Bytes 0x10000

C_NUM_WE Number of Byte Write Enables 4

C_PORT_AWIDTH Address Width of Port A and B 32

C_PORT_DWIDTH Data Width of Port A and B 32
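As a quick arithmetic check on these values (derived from the table above, not additional tool output):

$$\mathrm{C\_MEMSIZE} = \mathtt{0x10000} = 65536\ \text{bytes} = \frac{65536}{4} = 16384\ \text{32-bit words},$$

and the four byte-write enables (C_NUM_WE = 4) correspond to the four byte lanes of the 32-bit data ports.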

Post Synthesis Device Utilization

Device utilization information is not available for this IP. Run platgen to generate synthesis information.


opb_bram_if_cntlr_1_bram Block RAM (BRAM) Block

The BRAM Block is a configurable memory module that attaches to a variety of BRAM Interface Controllers.

PORT LIST

The ports listed here are only those connected in the MHS file. Refer to the IP documentation for complete information about module ports.

# NAME DIR [LSB:MSB] SIGNAL

Bus Interfaces

MASTERSHIP NAME STD BUS P2P

TARGET PORTA XIL opb_bram_if_cntlr_1_port opb_bram_if_cntlr_1

General

IP Core bram_block

Version 1.00.a

Driver API

Parameters

These are parameters set for this module. Refer to the IP documentation for complete information about module parameters. Parameters marked with yellow indicate parameters set by the user. Parameters marked with blue indicate parameters set by the system.

Name Value

C_FAMILY Device Family spartan3

C_MEMSIZE Size of BRAM(s) in Bytes 0x10000

C_NUM_WE Number of Byte Write Enables 4

C_PORT_AWIDTH Address Width of Port A and B 32

C_PORT_DWIDTH Data Width of Port A and B 32

Post Synthesis Device Utilization

Device utilization information is not available for this IP. Run platgen to generate synthesis information.


Memory Controllers

dlmb_cntlr LMB BRAM Controller

Local Memory Bus (LMB) Block RAM (BRAM) Interface Controller; connects to an LMB bus.

PORT LIST

The ports listed here are only those connected in the MHS file. Refer to the IP documentation for complete information about module ports.

# NAME DIR [LSB:MSB] SIGNAL

Bus Interfaces

MASTERSHIP NAME STD BUS P2P

INITIATOR BRAM_PORT XIL dlmb_port lmb_bram

SLAVE SLMB LMB dlmb microblaze_0

General

IP Core lmb_bram_if_cntlr

Version 2.00.a

Driver API

Parameters

These are parameters set for this module. Refer to the IP documentation for complete information about module parameters. Parameters marked with yellow indicate parameters set by the user. Parameters marked with blue indicate parameters set by the system.

Name Value

C_BASEADDR LMB BRAM Base Address 0x00000000

C_HIGHADDR LMB BRAM High Address 0x0000FFFF

C_LMB_AWIDTH LMB Address Bus Width 32

C_LMB_DWIDTH LMB Data Bus Width 32

C_MASK LMB Address Decode Mask 0x00200000
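The three addressing parameters above define the controller's decode window. The following is a minimal sketch of the masked compare we understand C_MASK to select, per the lmb_bram_if_cntlr documentation: only the address bits set in C_MASK are compared against C_BASEADDR. The function dlmb_decodes is a hypothetical illustration, not part of any EDK driver API.

/* Hypothetical model of the LMB address decode (illustration only):
 * an access hits the controller when the address bits selected by
 * C_MASK match the corresponding bits of C_BASEADDR. */
#define DLMB_BASEADDR 0x00000000u   /* C_BASEADDR from the table */
#define DLMB_MASK     0x00200000u   /* C_MASK from the table     */

static int dlmb_decodes(unsigned int addr)
{
    return ((addr ^ DLMB_BASEADDR) & DLMB_MASK) == 0;
}

With this mask, only address bit 21 participates in the decode; every address in the configured 0x00000000-0x0000FFFF window has that bit clear and therefore hits.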

Post Synthesis Device Utilization

Device utilization information is not available for this IP. Run platgen to generate synthesis information.


ilmb_cntlr LMB BRAM Controller

Local Memory Bus (LMB) Block RAM (BRAM) Interface Controller; connects to an LMB bus.

PORT LIST

The ports listed here are only those connected in the MHS file. Refer to the IP documentation for complete information about module ports.

# NAME DIR [LSB:MSB] SIGNAL

Bus Interfaces

MASTERSHIP NAME STD BUS P2P

INITIATOR BRAM_PORT XIL ilmb_port lmb_bram

SLAVE SLMB LMB ilmb microblaze_0

General

IP Core lmb_bram_if_cntlr

Version 2.00.a

Driver API

Parameters

These are parameters set for this module. Refer to the IP documentation for complete information about module parameters. Parameters marked with yellow indicate parameters set by the user. Parameters marked with blue indicate parameters set by the system.

Name Value

C_BASEADDR LMB BRAM Base Address 0x00000000

C_HIGHADDR LMB BRAM High Address 0x0000FFFF

C_LMB_AWIDTH LMB Address Bus Width 32

C_LMB_DWIDTH LMB Data Bus Width 32

C_MASK LMB Address Decode Mask 0x00200000

Post Synthesis Device Utilization

Device utilization information is not available for this IP. Run platgen to generate synthesis information.


opb_bram_if_cntlr_1 OPB BRAM Controller

Attaches BRAM to the OPB.

PORT LIST

The ports listed here are only those connected in the MHS file. Refer to the IP documentation for complete information about module ports.

# NAME DIR [LSB:MSB] SIGNAL

Bus Interfaces

MASTERSHIP NAME STD BUS P2P

INITIATOR PORTA XIL opb_bram_if_cntlr_1_port opb_bram_if_cntlr_1_bram

SLAVE SOPB OPB mb_opb NA

General

IP Core opb_bram_if_cntlr

Version 1.00.a

Driver API

Parameters

These are parameters set for this module. Refer to the IP documentation for complete information about module parameters. Parameters marked with yellow indicate parameters set by the user. Parameters marked with blue indicate parameters set by the system.

Name Value

c_baseaddr 0x40610000

c_highaddr 0x4061FFFF

c_include_burst_support 0

c_opb_awidth 32

c_opb_clk_period_ps 20000

c_opb_dwidth 32

Post Synthesis Device Utilization

Device utilization information is not available for this IP. Run platgen to generate synthesis information.
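Since the controller maps the attached BRAM into the OPB address space at 0x40610000-0x4061FFFF, software on microblaze_0 can treat it as ordinary memory. The following is a minimal sketch assuming the standalone BSP's xio.h accessors; opb_bram_smoke_test is a hypothetical helper written for illustration.

#include "xio.h"

#define OPB_BRAM_BASE 0x40610000   /* c_baseaddr from the table above */

/* Write one word into the external OPB BRAM and read it back. */
void opb_bram_smoke_test(void)
{
    Xuint32 readback;

    XIo_Out32(OPB_BRAM_BASE, 0xCAFEF00D);
    readback = XIo_In32(OPB_BRAM_BASE);
    (void)readback;   /* a real test would compare against the written value */
}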


Peripherals

RS232 OPB UART (Lite)

Generic UART (Universal Asynchronous Receiver/Transmitter) for the OPB bus.

PORT LIST

The ports listed here are only those connected in the MHS file. Refer to the IP documentation for complete information about module ports.

# NAME DIR [LSB:MSB] SIGNAL

1 RX I 1 fpga_0_RS232_RX

2 TX O 1 fpga_0_RS232_TX

3 Interrupt O 1 RS232_Interrupt

Bus Interfaces

MASTERSHIP NAME STD BUS P2P

SLAVE SOPB OPB mb_opb NA

General

IP Core opb_uartlite

Version 1.00.b

Driver API

Parameters

These are parameters set for this module. Refer to the IP documentation for complete information about module parameters. Parameters marked with yellow indicate parameters set by the user. Parameters marked with blue indicate parameters set by the system.

Name Value

C_BASEADDR Base Address 0x40600000

C_HIGHADDR High Address 0x4060FFFF

C_BAUDRATE UART Lite Baud Rate 9600

C_CLK_FREQ OPB Clock Frequency 50000000

C_DATA_BITS Number of Data Bits in a Serial Frame 8

C_ODD_PARITY Parity Type 0


C_OPB_AWIDTH OPB Address Bus Width 32

C_OPB_DWIDTH OPB Data Bus Width 32

C_USE_PARITY Use Parity 0

Post Synthesis Device Utilization

Device utilization information is not available for this IP. Run platgen to generate synthesis information.
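Given the base address and 9600-8-N-1 framing configured above, the simplest way to exercise RS232 from microblaze_0 is the UART Lite low-level driver. The sketch below assumes the BSP's xuartlite_l.h; echo_once is a hypothetical helper, and both calls poll until a character moves.

#include "xuartlite_l.h"

#define RS232_BASEADDR 0x40600000   /* C_BASEADDR from the table above */

/* Receive one character over RS232 and echo it back (blocking). */
void echo_once(void)
{
    Xuint8 c = XUartLite_RecvByte(RS232_BASEADDR);
    XUartLite_SendByte(RS232_BASEADDR, c);
}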

neuron_0

PORT LIST

The ports listed here are only those connected in the MHS file.

# NAME DIR [LSB:MSB] SIGNAL

Bus Interfaces

MASTERSHIP NAME STD BUS P2P

MASTER MFSL FSL fsl_v20_1 microblaze_0

SLAVE SFSL FSL fsl_v20_0 microblaze_0

General

IP Core neuron

Version 1.00.a

Driver API

Post Synthesis Device Utilization

Device utilization information is not available for this IP. Run platgen to generate synthesis information.
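The bus-interface table shows the data path to the custom core: microblaze_0 feeds the neuron's SFSL over fsl_v20_0 and collects the neuron's MFSL output over fsl_v20_1. A minimal software sketch using the blocking putfsl/getfsl macros from mb_interface.h follows; it assumes fsl_v20_0 and fsl_v20_1 are MicroBlaze FSL channels 0 and 1 (inferred from the instance names), and neuron_eval and the word encoding are illustrative, project-specific choices.

#include "mb_interface.h"

/* Push one input word to neuron_0 and wait for its result.
 * FSL channel numbers are assumed from the fsl_v20_0/fsl_v20_1
 * instance names; the format of input_word is project-specific. */
unsigned int neuron_eval(unsigned int input_word)
{
    unsigned int result;

    putfsl(input_word, 0);   /* blocking write on FSL channel 0 */
    getfsl(result, 1);       /* blocking read on FSL channel 1  */
    return result;
}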


IP

dcm_0 Digital Clock Manager (DCM)

The digital clock manager module is a wrapper around the DCM primitive which allows it to be used in the EDK tool suite.

PORT LIST

The ports listed here are only those connected in the MHS file. Refer to the IP documentation for complete information about module ports.

# NAME DIR [LSB:MSB] SIGNAL

1 CLKIN I 1 dcm_clk_s

2 CLKFB I 1 sys_clk_s

3 RST I 1 net_gnd

4 CLK0 O 1 sys_clk_s

5 LOCKED O 1 dcm_0_lock

General

IP Core dcm_module

Version 1.00.c

Driver API

Parameters

These are parameters set for this module. Refer to the IP documentation for complete information about module parameters. Parameters marked with yellow indicate parameters set by the user. Parameters marked with blue indicate parameters set by the system.

Name Value

C_FAMILY Device Family spartan3

C_CLK0_BUF Insert a BUFG for CLK0 TRUE

C_CLK180_BUF Insert a BUFG for CLK180 FALSE

C_CLK270_BUF Insert a BUFG for CLK270 FALSE

C_CLK2X180_BUF Insert a BUFG for CLK2X180 FALSE

C_CLK2X_BUF Insert a BUFG for CLK2X FALSE

C_CLK90_BUF Insert a BUFG for CLK90 FALSE

C_CLKIN_BUF Insert a BUFG for CLKIN FALSE

C_CLKIN_DIVIDE_BY_2 CLKIN Divide By 2 FALSE

C_CLKIN_PERIOD Input Clock Period 20.000000

C_CLKOUT_PHASE_SHIFT Controls Use of Phase Shift NONE

C_CLK_FEEDBACK Clock Feedback Input 1X

C_DESKEW_ADJUST Amount of Delay in the Feedback Path SYSTEM_SYNCHRONOUS

file:///D|/TUC/embedded/Proj/report/ds_MainNF.html (19 of 20)18/01/2008 02:01:06

Page 39: Implementation of a 3-layer feedforward backpropagation ...read.pudn.com/...3_NeuralNetwork_3-layer_feedforward_backpropag… · Implementation of a 3-layer feedforward backpropagation

EDK PROJECT REPORT

C_CLKDV_BUF Insert a BUFG for CLKDV FALSE

C_CLKDV_DIVIDE CLKDV Divisor 2.0

C_CLKFB_BUF Insert a BUFG for CLKFB FALSE

C_CLKFX180_BUF Insert a BUFG for CLKFX180 FALSE

C_CLKFX_BUF Insert a BUFG for CLKFX FALSE

C_CLKFX_DIVIDE Divisor for the CLKFX Output 1

C_CLKFX_MULTIPLY Multiply Value of the CLKFX Output 4

C_DFS_FREQUENCY_MODE Digital Frequency Synthesizer Clock Frequency Mode LOW

C_DLL_FREQUENCY_MODE Delay Locked Loop Frequency Mode LOW

C_DSS_MODE DSS Mode NONE

C_DUTY_CYCLE_CORRECTION Duty Cycle Correction TRUE

C_EXT_RESET_HIGH Reset Polarity 1

C_PHASE_SHIFT Phase Shift 0

C_STARTUP_WAIT Configuration Startup Wait FALSE

Post Synthesis Device Utilization

Device utilization information is not available for this IP. Run platgen to generate synthesis information.
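As a worked check on the clocking values above (plain arithmetic on the parameters, not additional tool output): C_CLKIN_PERIOD = 20 ns gives a 50 MHz input clock, which CLK0 passes through deskewed under the 1X feedback setting. Had the synthesized outputs been buffered (C_CLKFX_BUF and C_CLKDV_BUF are FALSE here), they would run at:

$$f_{\mathrm{CLKFX}} = f_{\mathrm{CLKIN}} \cdot \frac{\mathrm{C\_CLKFX\_MULTIPLY}}{\mathrm{C\_CLKFX\_DIVIDE}} = 50\,\mathrm{MHz} \cdot \frac{4}{1} = 200\,\mathrm{MHz}$$

$$f_{\mathrm{CLKDV}} = \frac{f_{\mathrm{CLKIN}}}{\mathrm{C\_CLKDV\_DIVIDE}} = \frac{50\,\mathrm{MHz}}{2.0} = 25\,\mathrm{MHz}$$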

Timing Information

Post Synthesis Clock Limits

No clocks could be identified in the design. Run platgen to generate synthesis information.
