
Artificial Neural Networks for process control

Puneet Kr Singh
M.Tech (FT), 1st Yr

P K Singh, F O E, D E I http://pksingh.webstarts.com/student_community.html

What is a Neural Network?

• Biologically motivated approach to machine learning

Modern digital computers outperform humans in the domain of numeric computation and related symbol manipulation.

However, humans can effortlessly solve complex perceptual problems, like recognizing a man in a crowd from a mere glimpse of his face, at a speed and accuracy that dwarf the world's fastest computers.


ELECTRON MICROGRAPH OF A REAL NEURON


NN as a Model of a Brain-Like Computer

An artificial neural network (ANN) is a massively parallel distributed processor that has a natural propensity for storing experiential knowledge and making it available for use. This means that:

• Knowledge is acquired by the network through a learning (training) process;

• The strength of the interconnections between neurons is implemented by means of the synaptic weights used to store the knowledge.

The learning process is a procedure for adapting the weights with a learning algorithm in order to capture the knowledge. More mathematically, the aim of the learning process is to map a given relation between the inputs and the output(s) of the network.

Brain

The human brain is still not well understood, and indeed its behavior is very complex! There are about 10 billion neurons in the human cortex and 60 trillion synaptic connections. The brain is a highly complex, nonlinear, and parallel computer (information-processing system).


A Neuron

Inputs: x1, ..., xn
Weighted sum: z = w0 + w1 x1 + ... + wn xn
Output: f(x1, ..., xn) = φ(w0 + w1 x1 + ... + wn xn)

where f is the function to be learned, x1, ..., xn are the inputs, φ is the activation function, and z is the weighted sum.

Artificial Neuron: Classical Activation Functions

Linear activation:             φ(z) = z
Logistic activation:           φ(z) = 1 / (1 + e^(-z))
Threshold activation:          φ(z) = sign(z) = 1 if z ≥ 0, -1 if z < 0
Hyperbolic tangent activation: φ(z) = tanh(z) = (e^(2z) - 1) / (e^(2z) + 1)

[Plots of the four activation functions: the linear ramp, the logistic sigmoid between 0 and 1, the threshold step between -1 and 1, and the hyperbolic tangent between -1 and 1.]
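As a minimal illustration of the neuron model and the four classical activation functions above, here is a sketch in Python with NumPy (the function names are ours, not from the slides):

```python
import numpy as np

def weighted_sum(w, x):
    """z = w0 + w1*x1 + ... + wn*xn, where w[0] is the bias weight."""
    return w[0] + np.dot(w[1:], x)

def linear(z):            # phi(z) = z
    return z

def logistic(z):          # phi(z) = 1 / (1 + e^(-z))
    return 1.0 / (1.0 + np.exp(-z))

def threshold(z):         # phi(z) = sign(z), taking sign(0) = 1
    return 1.0 if z >= 0 else -1.0

def tanh_activation(z):   # phi(z) = (e^(2z) - 1) / (e^(2z) + 1)
    return np.tanh(z)

def neuron(w, x, phi=logistic):
    """f(x1, ..., xn) = phi(w0 + w1*x1 + ... + wn*xn)."""
    return phi(weighted_sum(w, x))
```

For example, neuron(np.array([0.0, 1.0, 1.0]), np.array([0.5, -0.5]), threshold) computes z = 0 and returns 1.0.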

Neural Network

A neural network learns by adjusting its weights so as to correctly classify the training data and hence, after the testing phase, to classify unknown data.

A neural network needs a long time for training.

A neural network has a high tolerance to noisy and incomplete data.


Learning

Learning is the procedure of estimating the parameters of the neurons (setting up the weights) so that the whole network can perform a specific task. There are two types of learning:

Supervised learning and unsupervised learning.

Supervised learning incorporates an external teacher, so that each output unit is told what its desired response to the input signals ought to be.

Unsupervised learning uses no external teacher and is based only upon local information. It is also referred to as self-organization, in the sense that it self-organizes the data presented to the network and detects their emergent collective properties.


Threshold Neuron (Perceptron)

• Output of a threshold neuron is binary, while inputs may be either binary or continuous

• If inputs are binary, a threshold neuron implements a Boolean function

• The Boolean alphabet {1, -1} is usually used in neural network theory instead of {0, 1}.

• Correspondence with the classical Boolean alphabet {0, 1} is established as follows:

y ∈ {0, 1}  ↔  x = 1 - 2y ∈ {1, -1}   (0 ↔ 1, 1 ↔ -1)
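In code, this correspondence is a one-line mapping each way (helper names ours):

```python
def boolean_to_bipolar(y):    # {0, 1} -> {1, -1}:  x = 1 - 2y
    return 1 - 2 * y

def bipolar_to_boolean(x):    # {1, -1} -> {0, 1}:  y = (1 - x) / 2
    return (1 - x) // 2
```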


Threshold Boolean Functions: Geometrical Interpretation

"OR" (disjunction) is an example of a threshold (linearly separable) Boolean function: the "-1s" are separated from the "1" by a line.

XOR is an example of a non-threshold (not linearly separable) Boolean function: it is impossible to separate the "1s" from the "-1s" by any single line.

[Both functions are plotted on the square with corners (-1,-1), (1,-1), (-1,1), (1,1).]

OR:
 x1  x2 | x1 OR x2
  1   1 |    1
  1  -1 |   -1
 -1   1 |   -1
 -1  -1 |   -1

XOR:
 x1  x2 | x1 XOR x2
  1   1 |    1
  1  -1 |   -1
 -1   1 |   -1
 -1  -1 |    1


Threshold Neuron: Learning

A main property of a neuron, and of a neural network, is the ability to learn from the environment and to improve performance through learning.

A neuron (a neural network) learns about its environment through an iterative process of adjustments applied to its synaptic weights.

Ideally, a network (a single neuron) becomes more knowledgeable about its environment after each iteration of the learning process.


Threshold Neuron: Learning

Let T be the desired output of a neuron (of a network) for a certain input vector, and let Y be the actual output of the neuron.

If T = Y, there is nothing to learn.

If T ≠ Y, then the neuron has to learn, in order to ensure that, after adjustment of the weights, its actual output coincides with the desired output.


Error-Correction Learning

If T ≠ Y, then δ = T - Y is the error. A goal of learning is to adjust the weights in such a way that for the new actual output Ỹ we have Ỹ = T; that is, the updated actual output must coincide with the desired output.

The error-correction learning rule determines how the weights must be adjusted to ensure that the updated actual output coincides with the desired output. For the weighting vector W = (w0, w1, ..., wn) and the input vector X = (1, x1, ..., xn):

w̃i = wi + α δ xi,   i = 0, 1, ..., n   (with x0 = 1)

α is a learning rate (it should be equal to 1 for the threshold neuron, when the function to be learned is Boolean).
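A minimal Python sketch of this error-correction rule for a single threshold neuron, assuming α = 1 and the {1, -1} alphabet; the OR training set matches the table on the geometrical-interpretation slide, and all names are ours:

```python
import numpy as np

def sign(z):
    return 1 if z >= 0 else -1

def train_threshold_neuron(samples, n_inputs, alpha=1.0, epochs=100):
    """Error-correction learning: w_i <- w_i + alpha*(T - Y)*x_i, with x_0 = 1."""
    w = np.zeros(n_inputs + 1)                # w[0] is the bias weight
    for _ in range(epochs):
        converged = True
        for x, T in samples:
            xe = np.concatenate(([1.0], x))   # prepend x_0 = 1
            Y = sign(np.dot(w, xe))
            if Y != T:
                w += alpha * (T - Y) * xe
                converged = False
        if converged:
            break
    return w

# Learn OR in the {1, -1} alphabet (output 1 only when both inputs are 1):
samples = [((1, 1), 1), ((1, -1), -1), ((-1, 1), -1), ((-1, -1), -1)]
w = train_threshold_neuron(samples, n_inputs=2)
```

Since OR is linearly separable, the loop converges to a weight vector that classifies all four samples correctly.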


A Simplest Network

[Diagram: inputs x1 and x2 each feed Neuron 1 and Neuron 2; the outputs of Neurons 1 and 2 feed Neuron 3.]

Solving the XOR problem using the simplest network

XOR is not a threshold function, but it can be realized as a composition of threshold functions:

x1 ⊕ x2 = f3( f1(x1, x2), f2(x1, x2) )

where f1 and f2 are computed by Neurons 1 and 2 from the inputs x1 and x2, and f3 is computed by Neuron 3 from their outputs.

[Diagram: x1 and x2 feed N1 and N2; the outputs of N1 and N2 feed N3. Weight vectors: W̃1 = (1, -3, 3), W̃2 = (3, 3, -1), W̃3 = (-1, 3, 3).]

Solving the XOR problem using the simplest network

 #   x1  x2 | Neuron 1        | Neuron 2        | Neuron 3        | XOR = x1 ⊕ x2
            | W̃ = (1, -3, 3)  | W̃ = (3, 3, -1)  | W̃ = (-1, 3, 3)  |
            |  z    sign(z)   |  z    sign(z)   |  z    sign(z)   |
 1)   1   1 |  1      1       |  5      1       |  5      1       |  1
 2)   1  -1 | -5     -1       |  7      1       | -1     -1       | -1
 3)  -1   1 |  7      1       | -1     -1       | -1     -1       | -1
 4)  -1  -1 |  1      1       |  1      1       |  5      1       |  1

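The table can be checked mechanically. A short Python sketch, with the three weight vectors taken from the table above:

```python
import numpy as np

def sign(z):
    return 1 if z >= 0 else -1

# Weights (w0, w1, w2) for the three neurons, from the table above.
W1 = np.array([1, -3, 3])
W2 = np.array([3, 3, -1])
W3 = np.array([-1, 3, 3])

def xor_net(x1, x2):
    y1 = sign(W1 @ np.array([1, x1, x2]))    # Neuron 1
    y2 = sign(W2 @ np.array([1, x1, x2]))    # Neuron 2
    return sign(W3 @ np.array([1, y1, y2]))  # Neuron 3

for x1, x2 in [(1, 1), (1, -1), (-1, 1), (-1, -1)]:
    print(x1, x2, '->', xor_net(x1, x2))
# Prints 1, -1, -1, 1: XOR in the {1, -1} alphabet, matching the table.
```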

Neural Networks Components – biological plausibility

Neurone / node
Synapse / weight

Feed-forward networks
• Unidirectional flow of information
• Good at extracting patterns, generalisation and prediction
• Distributed representation of data
• Parallel processing of data
• Training: backpropagation
• Not exact models, but good at demonstrating principles

Recurrent networks
• Multidirectional flow of information
• Memory / sense of time
• Complex temporal dynamics (e.g. CPGs)
• Various training methods (Hebbian, evolution)
• Often better biological models than FFNs


BACK PROPAGATION

Backpropagation learns by iteratively processing a set of training data (samples).

For each sample, the weights are modified to minimize the error between the network's classification and the actual classification.


Steps in Back propagation Algorithm

STEP ONE: initialize the weights and biases.

The weights in the network are initialized to random numbers from the interval [-1,1].

Each unit has a bias associated with it.

The biases are similarly initialized to random numbers from the interval [-1,1].

STEP TWO: feed the training sample.


Steps in Back propagation Algorithm (cont.)

STEP THREE: Propagate the inputs forward; we compute the net input and output of each unit in the hidden and output layers.

STEP FOUR: back propagate the error.

STEP FIVE: update weights and biases to reflect the propagated errors.

STEP SIX: check the terminating conditions.


Back propagation Formula

[Diagram: the input vector xi enters the input nodes; weights wij connect input nodes i to hidden nodes j; hidden nodes connect to output nodes k, which produce the output vector.]

Error at output node k:
  Err_k = O_k (1 - O_k)(T_k - O_k)

Error at hidden node j:
  Err_j = O_j (1 - O_j) Σ_k Err_k w_jk

Net input and output of node j:
  I_j = Σ_i w_ij O_i + θ_j
  O_j = 1 / (1 + e^(-I_j))

Weight and bias updates (l is the learning rate):
  w_ij = w_ij + (l) Err_j O_i
  θ_j = θ_j + (l) Err_j
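A sketch of one training step implementing exactly these formulas for a single-hidden-layer network with logistic units (the variable names and the learning-rate value are our assumptions):

```python
import numpy as np

def logistic(z):
    return 1.0 / (1.0 + np.exp(-z))

def backprop_step(x, target, W_h, theta_h, W_o, theta_o, l=0.9):
    """One-sample update. W_h[i, j]: input i -> hidden j; W_o[j, k]: hidden j -> output k.
    The weight and bias arrays are updated in place."""
    # Forward pass: I_j = sum_i w_ij * O_i + theta_j,  O_j = 1 / (1 + e^(-I_j))
    O_h = logistic(x @ W_h + theta_h)
    O_o = logistic(O_h @ W_o + theta_o)
    # Output-layer error: Err_k = O_k (1 - O_k)(T_k - O_k)
    err_o = O_o * (1 - O_o) * (target - O_o)
    # Hidden-layer error: Err_j = O_j (1 - O_j) sum_k Err_k * w_jk
    err_h = O_h * (1 - O_h) * (W_o @ err_o)
    # Updates: w_ij += l * Err_j * O_i ; theta_j += l * Err_j
    W_o += l * np.outer(O_h, err_o)
    theta_o += l * err_o
    W_h += l * np.outer(x, err_h)
    theta_h += l * err_h
    return O_o
```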

Example of Back propagation

Initialize weights:

Inputs = 3, hidden neurons = 2, outputs = 1
Random numbers from -1.0 to 1.0

Initial input and weights:

x1  x2  x3 | w14  w15  w24  w25  w34  w35  w46  w56
 1   0   1 | 0.2 -0.3  0.4  0.1 -0.5  0.2 -0.3 -0.2

Example (cont.)

A bias is added to the hidden and output nodes; the biases are likewise initialized to random values from -1.0 to 1.0.

Bias (random):

 θ4    θ5    θ6
-0.4   0.2   0.1
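With these initial weights and biases, one forward pass and the output-node error can be computed directly; the target value T = 1 is assumed here for illustration, since the excerpt does not state it:

```python
import numpy as np

x = np.array([1.0, 0.0, 1.0])          # x1, x2, x3
W_h = np.array([[0.2, -0.3],           # w14, w15
                [0.4,  0.1],           # w24, w25
                [-0.5, 0.2]])          # w34, w35
theta_h = np.array([-0.4, 0.2])        # theta4, theta5
W_o = np.array([[-0.3], [-0.2]])       # w46, w56
theta_o = np.array([0.1])              # theta6

I_h = x @ W_h + theta_h                # I4 = -0.7, I5 = 0.1
O_h = 1 / (1 + np.exp(-I_h))           # O4 ~ 0.332, O5 ~ 0.525
I_o = O_h @ W_o + theta_o              # I6 ~ -0.105
O_o = 1 / (1 + np.exp(-I_o))           # O6 ~ 0.474

T = 1.0                                # assumed target for this sample
err_o = O_o * (1 - O_o) * (T - O_o)    # Err6 ~ 0.131
```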

Example: Voice Recognition

Task: learn to discriminate between two different voices saying "Hello".

Data sources:
• Steve Simpson
• David Raubenheimer

Format: frequency distribution (60 bins); analogy: the cochlea.


Network architecture

• Feed-forward network
• 60 inputs (one for each frequency bin)
• 6 hidden
• 2 outputs (0-1 for "Steve", 1-0 for "David")

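A sketch of this 60-6-2 feed-forward architecture in Python; the layer sizes come from the slide, while the random weights stand in for values that would be learned by backpropagation on the labeled "Hello" spectra:

```python
import numpy as np

rng = np.random.default_rng(0)

# Layer sizes from the slide: 60 inputs, 6 hidden, 2 outputs.
W_h = rng.uniform(-1, 1, size=(60, 6))   # input -> hidden weights (untrained)
b_h = rng.uniform(-1, 1, size=6)
W_o = rng.uniform(-1, 1, size=(6, 2))    # hidden -> output weights (untrained)
b_o = rng.uniform(-1, 1, size=2)

def logistic(z):
    return 1.0 / (1.0 + np.exp(-z))

def classify(spectrum):
    """spectrum: 60 frequency-bin magnitudes -> two output scores.
    Target codings from the slide: (0, 1) for 'Steve', (1, 0) for 'David'."""
    hidden = logistic(spectrum @ W_h + b_h)
    return logistic(hidden @ W_o + b_o)

print(classify(rng.uniform(0, 1, size=60)))   # arbitrary spectrum, untrained scores
```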

Presenting the data

[Figure: example "Hello" spectra for Steve and David presented to the network.]


Presenting the data (untrained network)

Steve: outputs 0.43 and 0.26
David: outputs 0.73 and 0.55


Calculate error

Steve: |0.43 - 0| = 0.43 and |0.26 - 1| = 0.74
David: |0.73 - 1| = 0.27 and |0.55 - 0| = 0.55


Backprop error and adjust weights

Steve: |0.43 - 0| + |0.26 - 1| = 0.43 + 0.74 = 1.17 total error
David: |0.73 - 1| + |0.55 - 0| = 0.27 + 0.55 = 0.82 total error

Presenting the data (trained network)

Steve: outputs 0.01 and 0.99
David: outputs 0.99 and 0.01


Results – Voice Recognition

Performance of the trained network:

• Discrimination accuracy between known "Hello"s: 100%
• Discrimination accuracy between new "Hello"s: 100%


Neural Network as Function Approximation


Stabilizing Controller

This scheme has been applied to the control of robot arm trajectory, where a proportional controller was used as the stabilizing feedback controller.

We can see that the total input that enters the plant is the sum of the feedback control signal and the feed-forward control signal, which is calculated from the inverse dynamics model (neural network).

That model uses the desired trajectory as the input and the feedback control as an error signal. As the NN training advances, the feedback signal will converge to zero.

The neural network controller will learn to take over from the feedback controller. The advantage of this architecture is that we can start with a stable system, even though the neural network has not been adequately trained.

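The scheme lends itself to a compact sketch; everything below (the gain value, the function names, the inverse-model interface) is an illustrative assumption, not the authors' implementation:

```python
# Plant input = stabilizing feedback + neural-network feed-forward (inverse model).
def feedback_control(y_desired, y_measured, Kp=2.0):
    """Proportional stabilizing controller; the gain Kp is an assumed value."""
    return Kp * (y_desired - y_measured)

def plant_input(y_desired, y_measured, nn_inverse_model):
    u_fb = feedback_control(y_desired, y_measured)   # also the NN's error signal
    u_ff = nn_inverse_model(y_desired)               # feed-forward from inverse dynamics
    # As training advances, u_fb converges to zero and the NN takes over.
    return u_fb + u_ff
```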

Stabilizing Controller

[Block diagram: the stabilizing feedback controller and the neural-network feed-forward controller together produce the plant input.]

Image Recognition: Decision Rule and Classifier

Is it possible to formulate (and formalize!) the decision rule, using which we can classify or recognize our objects based on the selected features?

Can you propose a rule by which we can definitely decide whether it is a tiger or a rabbit?


Image Recognition: Decision Rule and Classifier

Once we know our decision rule, it is not difficult to develop a classifier that will perform classification/recognition using the selected features and the decision rule.

However, if the decision rule cannot be formulated and formalized, we should use a classifier that can develop the rule through a learning process.

In most recognition/classification problems, the formalization of the decision rule is very complicated, or impossible at all.

A neural network is a tool that can accumulate knowledge through the learning process.

After the learning process, a neural network is able to approximate a function that is supposed to be our decision rule.


Why neural network?

f(x1, ..., xn) – the unknown multi-factor decision rule

↓ Learning process using a representative learning set

(w0, w1, ..., wn) – a set of weighting vectors is the result of the learning process

f̂(x1, ..., xn) = P(w0 + w1 x1 + ... + wn xn) – a partially defined function, which is an approximation of the decision rule function

Mathematical Interpretation of Classification in Decision Making

1. Quantization of the pattern space into p decision classes:

[Figure: the pattern space is partitioned into decision regions m1, m2, m3, ..., mp; a mapping f assigns each input pattern xi a response yi.]

2. Mathematical model of quantization: "Learning by Examples"

[Figure: input patterns x = (x1, x2, ..., xn) are mapped to responses y = (y1, y2, ..., yn).]

Application of Artificial Neural Networks in a Fault Detection Study of a Batch Esterification Process

The complexity of most chemical plants tends to create problems for monitoring and supervision systems.

Prompt fault detection and diagnosis is the best way to handle and tackle this problem.

There are different methods tackling the problem from different angles. One of the popular methods is the artificial neural network, a powerful tool in fault detection systems.

Here, the production of ethyl acetate by a reaction of acetic acid and ethanol in a batch reactor is studied.

A neural network covering normal and faulty events is trained on the data collected from the experiment.

The relationship between normal and faulty events is captured by the trained network topology.

The ability of a neural network to detect process faults rests on its ability to learn from examples while requiring little knowledge of the system structure.


CONCLUSION

Fault diagnosis for a pilot-plant batch esterification process is investigated in this work using a feed-forward neural model implemented as a multilayer perceptron. The effects of catalyst concentration and catalyst volume are studied and classified successfully using the neural process model. The results showed that the neural network is able to detect and isolate the two studied faults with good pattern classification.

Temperature control in fermenters: application of neural nets and feedback control in breweries

The main objective of on-line quality control in fermentation is to perform the production processes as reproducibly as possible.

Since temperature is the main control parameter in the fermentation process of beer breweries, it is of primary interest to keep it close to the predefined set point. Here, we report on a model-supported temperature controller for large production-scale beer fermenters.

The dynamic response of the temperature in the tank to temperature changes in the cooling elements has been modeled by means of a difference equation.

The heat production within the tank is taken into account by means of a model for the substrate degradation.

Any optimization requires a model to predict the consequences of actions. Instead of using a conventional mathematical model of the fermentation kinetics, an artificial neural network approach has been used.

The set-point profiles for the temperature control have been dynamically optimized in order to minimize the production cost while meeting the constraints posed by the product quality requirements.
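To make the difference-equation idea concrete, here is a minimal sketch; the first-order model, its coefficients, and the constant heat term are illustrative assumptions, not values from the reported system:

```python
# Assumed first-order model: T[k+1] = a*T[k] + b*Tc[k] + q, where T is the tank
# temperature, Tc the cooling-element temperature, and q the heat produced by
# substrate degradation (predicted by a neural network in the reported system).
a, b, q = 0.95, 0.05, 0.02

def tank_step(T, Tc):
    return a * T + b * Tc + q

T, setpoint = 14.0, 10.0                     # start warm, aim for the set point
for k in range(48):
    Tc = setpoint - 20.0 * (T - setpoint)    # crude proportional cooling action
    T = tank_step(T, Tc)
print(round(T, 2))                           # settles near the 10 degC set point
```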


Applications of Artificial Neural Networks

Artificial intellect with neural networks:

• Intelligent Control
• Technical Diagnostics
• Intelligent Data Analysis and Signal Processing
• Advanced Robotics
• Machine Vision
• Image & Pattern Recognition
• Intelligent Security Systems
• Intelligent Medical Devices
• Intelligent Expert Systems

Applications: Classification

Business
• Credit rating and risk assessment
• Insurance risk evaluation
• Fraud detection
• Insider dealing detection
• Marketing analysis
• Signature verification
• Inventory control

Security
• Face recognition
• Speaker verification
• Fingerprint analysis

Engineering
• Machinery defect diagnosis
• Signal processing
• Character recognition
• Process supervision
• Process fault analysis
• Speech recognition
• Machine vision
• Radar signal classification

Medicine
• General diagnosis
• Detection of heart defects

Science
• Recognising genes
• Botanical classification
• Bacteria identification


Applications: Modeling

Business
• Prediction of share and commodity prices
• Prediction of economic indicators
• Insider dealing detection
• Marketing analysis
• Signature verification
• Inventory control

Engineering
• Transducer linearisation
• Colour discrimination
• Robot control and navigation
• Process control
• Aircraft landing control
• Car active suspension control
• Printed circuit auto-routing
• Integrated circuit layout
• Image compression

Science
• Prediction of the performance of drugs from the molecular structure
• Weather prediction
• Sunspot prediction

Medicine
• Medical imaging and image processing


Applications: Forecasting

• Future sales
• Production requirements
• Market performance
• Economic indicators
• Energy requirements
• Time-based variables


Applications: Novelty Detection

• Fault monitoring
• Performance monitoring
• Fraud detection
• Detecting rare features
• Different cases


Thank you

For any suggestions:

http://pksingh.webstarts.com/student_community.html


