nerosolution.docx
Date post: 02-Oct-2015
Upload: dipak-kale
Experiment No.:

TITLE: To study NeuroSolutions 6.0.

Software Required:

Operating System: Windows XP / Vista / 7
Memory: 512 MB RAM (2 GB recommended)
Hard Drive: 500 MB free space
Video: 800x600 (1024x768 recommended)

AIM: To study NeuroSolutions 6.0.

THEORY:

What is NeuroSolutions?

NeuroSolutions is the premier neural network simulation environment. NeuroSolutions has one document type, the breadboard. Simulations are constructed and run on breadboards. With NeuroSolutions, designing a neural network is very similar to prototyping an electronic circuit. With an electronic circuit, components such as resistors, capacitors and inductors are first arranged on a breadboard. NeuroSolutions instead uses neural components such as axons, synapses and probes. The components are then connected together to form a circuit. An electronic circuit passes electrons between its components; the NeuroSolutions circuit (i.e., the neural network) passes activity between its components, and is termed a data-flow machine. Finally, the circuit is tested by inputting data and probing the system's response at various points. An electronic circuit would use an instrument such as an oscilloscope for this task.

Networks are constructed on a breadboard by selecting components from the palettes, stamping them onto the breadboard, and then interconnecting them to form a network topology. Once the topology is established and its components have been configured, a simulation can be run.

New breadboards are created by selecting New from the File menu. This creates a blank breadboard titled "Breadboard1.nsb", which can later be saved. Saving a breadboard saves the topology, the configuration of each component and (optionally) their weights. A breadboard may therefore be saved at any point during training and restored later. The saving of weights is a parameter setting for each component that contains adaptive weights; it can be set for all components on the breadboard or just selected ones. An example of a functional breadboard is illustrated in the figure below.
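The data-flow idea behind the breadboard can be sketched in a few lines of Python. The class and method names below are illustrative, not the actual NeuroSolutions API: each component forwards activity to whatever is connected downstream, and a probe simply records the activity passing through it.

```python
import numpy as np

class Component:
    """A minimal data-flow node: receives activity, forwards it downstream."""
    def __init__(self):
        self.downstream = []

    def connect_to(self, other):
        self.downstream.append(other)

    def fire(self, activity):
        out = self.transfer(activity)
        for comp in self.downstream:
            comp.fire(out)
        return out

    def transfer(self, activity):
        return activity  # identity by default

class Probe(Component):
    """Records whatever activity flows through its attachment point."""
    def __init__(self):
        super().__init__()
        self.trace = []

    def transfer(self, activity):
        self.trace.append(np.array(activity, dtype=float))
        return activity

# Build a tiny "circuit": source -> hidden -> probe
src, hidden, probe = Component(), Component(), Probe()
src.connect_to(hidden)
hidden.connect_to(probe)
src.fire(np.array([0.5, -1.0]))
print(probe.trace[0])  # the activity that reached the probe
```

Running the simulation is then just injecting data at the source and reading out what each probe recorded, which mirrors how an oscilloscope is attached to an electronic circuit.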

Fig 1 : Breadboard

What is a neural network?

A neural network is an adaptable system that can learn relationships through repeated presentation of data, and is capable of generalizing to new, previously unseen data. Some networks are supervised, in that a human must determine what the network should learn from the data. Others are unsupervised, in that the way they organize information is hard-coded into their architecture.

What do you use a neural network for?

Neural networks are used for both regression and classification. In regression, the outputs represent some desired, continuously valued transformation of the input patterns. In classification, the objective is to assign the input patterns to one of several categories or classes, usually represented by outputs restricted to lie in the range 0 to 1, so that they represent the probability of class membership.

Why are neural networks so powerful?

For regression, it can be shown that neural networks can learn any desired input-output mapping if they have a sufficient number of processing elements in the hidden layer(s). For classification, neural networks can learn the Bayesian posterior probability of correct classification.

How does NeuroSolutions implement neural networks?

NeuroSolutions adheres to the so-called local additive model. Under this model, each component can activate and learn using only its own weights and activations, and the activations of its neighbors. This lends itself very well to object-oriented modeling, since each component can be a separate object that sends and receives messages. This in turn allows for a graphical user interface (GUI) with icon-based construction of networks.
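The classification setup described above, with outputs restricted to (0, 1), can be sketched as a one-hidden-layer forward pass. The weights here are random and purely illustrative; only the structure (nonlinear hidden layer, sigmoid-squashed output) is the point.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# One hidden layer mapping 3 inputs to 1 output in (0, 1).
# Weight values are random here purely for illustration.
W1, b1 = rng.normal(size=(4, 3)), np.zeros(4)
W2, b2 = rng.normal(size=(1, 4)), np.zeros(1)

def forward(x):
    h = np.tanh(W1 @ x + b1)     # hidden layer of nonlinear PEs
    return sigmoid(W2 @ h + b2)  # output restricted to (0, 1)

p = forward(np.array([0.2, -0.4, 1.0]))
print(p)  # a single value in (0, 1), interpretable as class-membership probability
```

With enough hidden PEs this same structure can, as the text notes, approximate any desired input-output mapping; training (not shown) is what fits the weights to data.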

Fig 2 : Main window of NeuroSolutions

Neural Components

Each neural component encapsulates the functionality of a particular piece of a neural network. A working neural network simulation requires the interconnection of many different components. As mentioned above, the NeuralBuilder utility automates the construction of many popular neural networks. There may be times when you want to create a network from scratch, or to add components to a network created by the NeuralBuilder. This is done by selecting components from palettes and stamping them onto the breadboard.

1. Axon component :

This is a layer of PEs (processing elements) with an identity transfer function. Primary usage: it can act as a placeholder for the File component at the input layer, or as a linear output layer.
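A minimal sketch of what "a layer of PEs with an identity transfer function" means, in Python (the class is hypothetical, not the real NeuroSolutions implementation):

```python
import numpy as np

class Axon:
    """A layer of n_pes processing elements with the identity transfer
    function: output equals input. Illustrative sketch only."""
    def __init__(self, n_pes):
        self.n_pes = n_pes

    def transfer(self, x):
        x = np.asarray(x, dtype=float)
        assert x.shape == (self.n_pes,), "dimension mismatch"
        return x  # identity: placeholder input layer or linear output layer

layer = Axon(3)
print(layer.transfer([1.0, 2.0, 3.0]))  # [1. 2. 3.]
```

The dimension check mirrors how NeuroSolutions rejects connections between components whose dimensions disagree.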

Fig 3 : Axon component

2. Connectors :

Constructing a network topology is equivalent to assigning the order in which data flows through the network components. Data flow connections are made using male and female connectors, whose icons are illustrated in the figure below.

Fig 4 : Male Connector and Female Connector

Connections are formed by dragging a MaleConnector over a FemaleConnector and releasing the mouse button (dropping). The cursor turns into a move cursor when a male connector is dragged over an available female connector; otherwise the forbidden sign appears. The icon for a male connector joined to a female connector is shown in the figure below.

Fig 5 : Connection

There is also a shortcut for making a connection between two components. First, select the source component (by single-clicking the left mouse button), then single-click the destination component with the right mouse button to bring up the Component Menu, and select the "Connect to" menu item. A connection may be broken by simply dragging the MaleConnector to an empty spot on the breadboard and performing a Cut operation. If a connection is valid, a set of lines is drawn to indicate that data will flow between the components. Data flows from the male connector to the female connector. A valid connection between two Axons is shown in the figure below.

Fig 6 : Valid connection between two axons

Notice that a new FemaleConnector has appeared on the right Axon and a new MaleConnector has been created on the left Axon. This indicates that axons have a summing junction at their input and a splitting node at their output. Since axons contain a vector of processing elements, the summing junctions and splitting nodes are multivariate. Axons can accept input from an arbitrary number of simulation components, whose activities are summed together, and an axon's output can feed the inputs of multiple components. An invalid connection between two components, for whatever reason, will look like the image shown in the figure below.
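The summing junction described above can be sketched directly: activity arriving from several upstream components is accumulated, and the identity transfer is applied to the sum. The class below is a hypothetical sketch, not the NeuroSolutions implementation.

```python
import numpy as np

class SummingAxon:
    """Axon with a multivariate summing junction at its input: activity
    from several upstream components is accumulated before the identity
    transfer is applied. Illustrative sketch only."""
    def __init__(self, n_pes):
        self.accum = np.zeros(n_pes)

    def receive(self, activity):
        self.accum += np.asarray(activity, dtype=float)

    def fire(self):
        out = self.accum.copy()  # identity transfer applied to the sum
        self.accum[:] = 0.0      # reset the junction for the next step
        return out

axon = SummingAxon(2)
axon.receive([1.0, 0.5])  # from the first upstream component
axon.receive([0.2, 0.2])  # from the second upstream component
print(axon.fire())        # [1.2 0.7]
```

The splitting node at the output needs no arithmetic at all: the same output vector is simply handed to every downstream connection.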

Fig 7 : Invalid connection between two axons

The reason for an invalid connection appears in an alert panel. Most often the mismatch is due to incompatible component dimensions; the user must alter the dimensions of one of the components to correct the situation.

3. Stacking :

Connectors are normally used to specify the network topology by defining the data flow. There are many situations where components should be connected to a network without altering its topology, for example probing, noise injection and attaching learning dynamics. Stacking allows this form of connection. The figure below illustrates the use of stacking to probe the activity flowing through an Axon. Notice that stacking does not use the male and female connectors.

Fig 8 : A probe stacked on top of an Axon

Cabling :

Cabling is a graphical option for interconnecting components that have connectors. A cable is started by dropping the MaleConnector at any empty location on the breadboard, then holding down the Shift key while dragging this connector again. This second drag operation creates a new segment of the connection (cable). With each successive move of the MaleConnector, a line (cable segment) is drawn to show the connection path. The process may be repeated indefinitely. Single-clicking on the MaleConnector highlights all breakpoints along the cable. A breakpoint may then be moved by dragging and dropping; if a breakpoint is Cut from the breadboard, it is removed from the cable. Double-clicking on a breakpoint inserts an additional breakpoint next to it in the cable. Cabling is particularly useful when forming recurrent connections. An example of cabling between two axons is shown in the figure below.

Fig 9 : Example of a cable between two axons

NeuroSolutions verifies that all connections make sense as they are formed. This was already evident in the visual indication of incompatibility between components. In cabling, if the output of an element is brought back to its input, an alert panel will be displayed complaining that an infinite loop was detected. It is up to the user to modify the cabling (e.g., making sure that the recurrent connection is delayed by at least one time step).

4. Probes :

Probes are one family of components that speak the Access protocol. Each probe provides a unique way of visualizing the data provided by access points. Consider an access point presenting a fully connected matrix of weights: you could view this data instantaneously as a matrix of numbers, or over time as weight tracks. What is important here is that NeuroSolutions provides an extensive set of visualization tools that can be attached to any data within the network. The DataStorage component collects multi-channel data into a circular buffer, which is then presented as the Buffered Activity access point. Temporal probes, such as the MegaScope, can only stack on top of a DataStorage component (directly or indirectly). In the configuration illustrated in the figure below, the MegaScope is used to display the Axon's activity over time; used in this manner, the MegaScope/DataStorage combination functions as an oscilloscope. The DataStorage component may also be used in conjunction with the DataStorageTransmitter, allowing data from different locations in the topology to be displayed on a single probe. This is also illustrated in the figure below.
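The circular buffer at the heart of the DataStorage component can be sketched with a bounded deque. Buffer length and method names are illustrative, not the real component's parameters.

```python
from collections import deque
import numpy as np

class DataStorage:
    """Collects multi-channel samples into a circular buffer: once full,
    each new sample evicts the oldest one. Illustrative sketch only."""
    def __init__(self, n_samples):
        self.buffer = deque(maxlen=n_samples)

    def collect(self, sample):
        self.buffer.append(np.asarray(sample, dtype=float))

    def buffered_activity(self):
        # shape (time, channels): what a MegaScope-style probe would plot
        return np.array(self.buffer)

store = DataStorage(n_samples=3)
for t in range(5):
    store.collect([t, -t])        # two channels per time step
print(store.buffered_activity())  # only the last 3 samples are kept
```

This is why temporal probes must stack on a DataStorage component: the buffer is what turns instantaneous activity into a window of history that can be drawn over time.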

Fig 10 : Probing the output and input of a network using the same scope

5. Data Input/Output :

Probes attach to access points to examine data within the network. Network data can also be altered through access points, which provides an interface for using input files, output files and other I/O sources. Illustrated in the figure below is a FunctionGenerator stacked on top of the left Axon; it injects samples of a user-defined function into the network's data flow.

Fig 11 : FunctionGenerator as the input to a network

To read data from the file system, the File component must be used. This component accepts straight ASCII, column-formatted ASCII, binary, and bitmap files. Several files of any type may be opened at the same time and input sequentially to the network. The component also provides segmentation and normalization of the data contained in the files. The figure below shows a File component attached to the left Axon. Any network data can be captured and saved to a binary or ASCII file using the DataWriter probe. The figure below also shows a DataWriter attached to an output Axon to capture its activity as it flows through the network.
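What reading and normalizing column-formatted ASCII data amounts to can be sketched with NumPy. The sample data and the min-max scaling choice are illustrative; the real File component offers its own segmentation and normalization options.

```python
import numpy as np
from io import StringIO

# Column-formatted ASCII: one sample per row, one channel per column.
raw = StringIO("0.0 10.0\n0.5 20.0\n1.0 30.0\n")
data = np.loadtxt(raw)  # shape: (samples, channels)

# Min-max normalization per channel, so every channel lies in [0, 1].
lo, hi = data.min(axis=0), data.max(axis=0)
normalized = (data - lo) / (hi - lo)
print(normalized[:, 1])  # [0.  0.5 1. ]
```

A DataWriter-style capture would simply be the inverse step, e.g. `np.savetxt("output.asc", normalized)` to dump network activity back to an ASCII file.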

Fig 12 : A File component to read data in and a DataWriter probe to write data out

6. Transmitters and Receivers :

Both connectors and stacking provide local communication between components. There are situations where a global communication channel is necessary, either to send/receive data or for control. NeuroSolutions provides a family of components, called Transmitters, to implement global communications. These components use access points to globally transmit data, or to send global messages based on local decisions. Several components can receive data or control messages that alter their normal operation. This allows very sophisticated tasks to be implemented, such as adaptive learning rates, nonuniform relaxation, and error-based stop criteria. Here are some of the most common components:

Kohonen family :

1. DiamondKohonen :

The DiamondKohonen implements a 2D self-organizing feature map (SOFM) with a diamond-shaped neighborhood. The dimensions of the map are dictated by the rows and columns of the axon that the DiamondKohonen feeds. The neighborhood size is selected from the component's inspector.

Fig 13 : DiamondKohonen

Neighborhood Figure (Size=2):

2. LineKohonen DLL Implementation :

The LineKohonen component implements a 1D self-organizing feature map (SOFM) with a linear neighborhood. The dimensions of the map are dictated by the vector length (outRows*outCols) of the axon that the LineKohonen feeds. The neighborhood size is defined by the user within the component's inspector.

Fig 14 : LineKohonen DLL Implementation

Neighborhood Figure (Size=2):

3. SquareKohonen DLL Implementation :

The SquareKohonen component implements a 2D self-organizing feature map (SOFM) with a square neighborhood. The dimensions of the map are dictated by the dimensions of the axon that the SquareKohonen feeds (outRows and outCols). The neighborhood size is defined by the user within the component's inspector.

Fig 15 : SquareKohonen DLL Implementation

Neighborhood Figure (Size=2):
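The three Kohonen components above differ only in which PEs count as "neighbors" of the winning unit. That difference can be sketched as three distance tests on the map grid (grid size and coordinates are illustrative):

```python
def in_neighborhood(pos, winner, size, shape):
    """Is grid position `pos` inside the size-`size` neighborhood of
    `winner`? Sketch of the three neighborhood shapes, not the real API."""
    dr, dc = abs(pos[0] - winner[0]), abs(pos[1] - winner[1])
    if shape == "diamond":   # DiamondKohonen: city-block (L1) distance
        return dr + dc <= size
    if shape == "square":    # SquareKohonen: Chebyshev (max) distance
        return max(dr, dc) <= size
    if shape == "line":      # LineKohonen: 1D distance along the vector
        return dr == 0 and dc <= size
    raise ValueError(shape)

# Count the PEs inside a size-2 neighborhood on a 5x5 map, winner at center.
winner = (2, 2)
grid = [(r, c) for r in range(5) for c in range(5)]
diamond = sum(in_neighborhood(p, winner, 2, "diamond") for p in grid)
square = sum(in_neighborhood(p, winner, 2, "square") for p in grid)
print(diamond, square)  # 13 25
```

The counts match the neighborhood figures: a size-2 diamond covers 13 PEs, while a size-2 square covers the full 5x5 patch of 25 PEs.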

KOHONEN FEATURE MAP :

Kohonen's SOMs are a type of unsupervised learning. The goal is to discover some underlying structure of the data. However, the kind of structure we are looking for is very different from that of, say, PCA or vector quantization.

Kohonen's SOM is called a topology-preserving map because there is a topological structure imposed on the nodes in the network. A topological map is simply a mapping that preserves neighborhood relations.

In the nets we have studied so far, we have ignored the geometrical arrangement of output nodes. Each node in a given layer has been identical, in that each is connected with all of the nodes in the upper and/or lower layer. We are now going to take into consideration the physical arrangement of these nodes: nodes that are "close" together are going to interact differently than nodes that are "far" apart.

The Kohonen Feature Map tries to place the cluster centers so as to minimize the overall distance between records and their cluster centers. Euclidean distance is used to determine the distance between a record and the centers; the separation of clusters is not taken into account. The center vectors are arranged in a map with a certain number of columns and rows. These vectors are interconnected so that, when a record is assigned to a cluster, not only the winning center vector that is closest to a training record is adjusted, but also the vectors in its neighborhood. However, the further away the other centers are, the less they are adjusted.
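The training rule just described (Euclidean winner selection, neighbors adjusted less the further they sit from the winner on the map) can be sketched for a 1-D map. The learning rate, Gaussian neighborhood influence, annealing schedule and iteration counts below are all illustrative choices, not NeuroSolutions defaults.

```python
import numpy as np

rng = np.random.default_rng(1)

# A 1-D map of 10 center vectors learning from 2-D records in [0, 1]^2.
centers = rng.uniform(size=(10, 2))
data = rng.uniform(size=(200, 2))

def som_step(x, centers, lr, width):
    dists = np.linalg.norm(centers - x, axis=1)  # Euclidean distances
    winner = np.argmin(dists)                    # closest center wins
    # Distance on the MAP (index space), not in the data space:
    grid_dist = np.abs(np.arange(len(centers)) - winner)
    influence = np.exp(-(grid_dist ** 2) / (2 * width ** 2))
    # Winner moves most; neighbors move less the further away they are.
    centers += lr * influence[:, None] * (x - centers)

for epoch in range(20):
    width = 3.0 * (0.5 ** epoch)  # anneal the neighbourhood width
    for x in data:
        som_step(x, centers, lr=0.1, width=max(width, 0.5))

# Topology preservation: adjacent map indices end up near each other in data space.
hops = np.linalg.norm(np.diff(centers, axis=0), axis=1)
print(hops.mean())
```

Annealing the neighbourhood width, as in Fig 19, lets the map first unfold globally (wide neighborhoods) and then fine-tune each center locally (narrow neighborhoods).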

Fig 16 : Main window of Kohonen Feature Maps

Fig 17 : Activity of input Axon

Fig 18 : Self-organizing map

Fig 19 : Annealing the neighbourhood width

Fig 20 : Varying the number of PEs

Fig 21 : 1-D Kohonen Feature Maps

Fig 22 : A dimensional mismatch

Fig 23 : Square Kohonen

Fig 24 : 2-D problem

Fig 25 : Summary