Non-Linear Modelling and Chaotic Neural Networks
Evolutionary and Neural Computing GroupCardiff University
SBRN 2000
Overview
• The Freeman model
• The Gamma Test
• Non-Linear Modelling
• Delayed Feedback Control
• Synchronisation
The Freeman Model
• Freeman [1991] studied the olfactory bulb of rabbits
• In the rest state, the dynamics of this neural cluster are chaotic
• When presented with a familiar scent, the neural system rapidly simplifies its behaviour
• The dynamics then become more orderly, more nearly periodic than when in the rest state
Questions...
• How can we construct chaotic neural networks?
• How can we control such networks so that they stabilise onto an unstable periodic orbit (characteristic of the applied stimulus) when a stimulus is presented?
• We are looking for biologically plausible mechanisms
The Gamma Test
www.cs.cf.ac.uk/wingamma
Principal Contributors
Antonia J Jones Ana Oliveria
Nenad Končar Steve Margetts
Aðalbjörn Stefánsson
Peter Durrant
Alban Tsui Dafydd Evans
An Introduction to the Gamma Test
• Assume a relationship of the form

    y = f(x₁, ..., x_m) + r

where:
• f is a smooth function (bounded derivatives)
• y is a measured variable, possibly dependent on the measured variables x₁, ..., x_m
• r is a random noise component, which we may as well assume has mean zero
Question: what is the noise variance Var(r)?
• The Gamma test estimates this directly from the observed data (despite the fact that the underlying smooth non-linear function is unknown)
• It runs in O(M log M) time, where M is the number of data points
• We can deal with vector y at little extra computational cost
The Details

For each input point x(i), let N[i,p] be the list of the p near-neighbours to x(i). Define

    δ(p) = (1/M) Σ_{i=1}^{M} (1/|N[i,p]|) Σ_{j ∈ N[i,p]} |x(i) − x(j)|²

    γ(p) = (1/(2M)) Σ_{i=1}^{M} (1/|N[i,p]|) Σ_{j ∈ N[i,p]} |y(i) − y(j)|²

Under reasonable conditions, one can show that with probability one

    γ = Var(r) + A δ + o(δ)   as M → ∞
The Algorithm

For p = 1 to p_max do
    Compute the near-neighbour list N[i,p] of each input point x(i), 1 ≤ i ≤ M
Endfor
For p = 1 to p_max do
    Compute δ(p) and γ(p)
Endfor
Perform a least-squares fit on (δ(p), γ(p)), 1 ≤ p ≤ p_max, to get, say, γ = Aδ + Γ
Return Γ
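The algorithm above can be sketched in a few lines. This is an illustrative version of ours, not the WinGamma code: it uses a brute-force O(M²) neighbour search instead of the O(M log M) kd-tree, and the single p-th nearest neighbour of each point rather than the full neighbour list.

```python
import numpy as np

def gamma_test(x, y, p_max=10):
    """Estimate Var(r) in y = f(x) + r directly from data.

    Sketch: O(M^2) neighbour search, p-th nearest neighbour only.
    """
    x = np.asarray(x, float).reshape(len(x), -1)
    y = np.asarray(y, float)
    M = len(x)
    # squared distances between all pairs of input points
    d2 = np.sum((x[:, None, :] - x[None, :, :]) ** 2, axis=2)
    np.fill_diagonal(d2, np.inf)
    nn = np.argsort(d2, axis=1)[:, :p_max]     # p nearest neighbours of each x(i)
    delta = np.array([d2[np.arange(M), nn[:, p]].mean() for p in range(p_max)])
    gamma = np.array([0.5 * ((y[nn[:, p]] - y) ** 2).mean() for p in range(p_max)])
    # least-squares line gamma = A*delta + Gamma; the intercept estimates Var(r)
    A, Gamma = np.polyfit(delta, gamma, 1)
    return Gamma, A

rng = np.random.default_rng(0)
x = rng.uniform(0, 1, 500)
y = np.sin(4 * np.pi * x) + rng.normal(0, 0.1, 500)   # true Var(r) = 0.01
var_est, slope = gamma_test(x, y)
print(var_est)                                 # should be close to 0.01
```

As δ(p) → 0 the structure of f drops out of γ(p), which is why the intercept of the fitted line isolates the noise variance.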
An Example
y = sin(4πx)
[Plot: the noise-free function for 0 ≤ x ≤ 1, −1.5 ≤ y ≤ 1.5]
1000 sampled data points with noise variance Var(r) = 0.01: y = sin(4πx) + r
[Plot: the noisy samples for 0 ≤ x ≤ 1, −1.5 ≤ y ≤ 1.5]
Probabilistic asymptotic convergence of Γ to Var(r)
[Plot: the Gamma statistic against the number of points M (0 to 1000), settling down as M grows]
Using the Gamma Test for Non-Linear Modelling
• Embedding dimension
• Irregular embeddings
• Modelling a particular chaotic system
Question: what use is the Gamma Test?
• We can calculate the embedding dimension – the number of past values required to calculate the next point
• We can compute irregular embeddings – the best combination of past values for a given embedding dimension
Choosing an Embedding Dimension
• Time-series ...x(t-3), x(t-2), x(t-1), x(t)...
• Task is to predict x(t) given some number of previous values
• Take x(t) as output, and x(t-d),...,x(t-1) as inputs, then run the Gamma Test
• Increase d until the noise estimate reaches a local minimum
• This value of d is an estimate for the embedding dimension
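The input/output construction behind this procedure can be sketched as follows (the helper name delay_embed is ours); the Gamma test would then be run on (X, y) for increasing d:

```python
import numpy as np

def delay_embed(series, d):
    """Build inputs x(t-d), ..., x(t-1) and output x(t) for every valid t."""
    X = np.column_stack([series[i:len(series) - d + i] for i in range(d)])
    y = series[d:]
    return X, y

x = np.arange(10.0)
X, y = delay_embed(x, 3)
print(X[0], y[0])   # [0. 1. 2.] 3.0
```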
An Example: The Mackey-Glass Series
• Time-delayed differential equation

    dx/dt = 0.2 x(t−τ) / (1 + x(t−τ)¹⁰) − 0.1 x(t)

  where τ = 30 and x(0) = 2
• Dataset created by integrating from t = 0 to t = 8000 and taking points at t = 10, 20, 30, ..., 8000
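A minimal sketch of how such a dataset might be generated; the Euler scheme, step size dt = 0.1 and constant initial history are assumptions of ours, since the talk does not state the integrator:

```python
import numpy as np

def mackey_glass(n_steps, tau=30.0, dt=0.1, x0=2.0):
    """Euler integration of dx/dt = 0.2*x(t-tau)/(1 + x(t-tau)**10) - 0.1*x(t),
    with a constant history x(t) = x0 for t <= 0."""
    lag = round(tau / dt)
    x = np.full(lag + n_steps + 1, x0)
    for n in range(lag, lag + n_steps):
        x_tau = x[n - lag]
        x[n + 1] = x[n] + dt * (0.2 * x_tau / (1 + x_tau ** 10) - 0.1 * x[n])
    return x[lag:]                      # x at t = 0, dt, 2*dt, ...

series = mackey_glass(80000)            # integrate from t = 0 to t = 8000
samples = series[::100][1:]             # keep t = 10, 20, ..., 8000
print(len(samples))                     # 800 data points
```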
The Mackey-Glass Time Series
Finding the Embedding Dimension
[Plot: Gamma against the number of lags (1 to 10); Gamma falls from about 0.05 at one lag to under 0.005 at higher lags]
Dimension   Gamma      SE
1           0.05264    0.002261
2           0.025271   0.001312
3           0.006531   0.000597
4           0.001199   0.000248
5           0.000851   0.000336
6           0.000869   0.000316
7           0.00064    0.00051
8           0.000236   0.000439
9           8.67E-06   0.000409
10          0.000105   0.000508
Dimension 6 gives a suitably small gamma
Finding Irregular Embeddings
• Given a data set with m inputs, we can select which combination of inputs produces the best model, even if there is no noise – this gives us an irregular embedding
• Omitting a relevant input produces pseudo-noise
Pseudo-noise of aConical function
-20
0
20x
0
1
2
3
y
0
10
20z
-20
0
20x
Gamma Test Analysis
• Given the conical function, pseudo-noise is apparent if we leave out either x or y from the model of z
• Var(r) is the estimate for the pseudo-noise variance (M = 500)
Var(r) estimate   Inputs   Mask
0.44217           x, y     11
4.76              x        10
52.569            y        01
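The effect can be reproduced with a crude one-neighbour version of the Gamma statistic (our simplification, so the numbers will not match the table exactly, but the ordering of the three masks is the same):

```python
import numpy as np

def first_nn_gamma(X, z):
    """Half the mean squared output difference to each point's nearest
    input neighbour: a crude one-neighbour Gamma statistic."""
    d2 = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=2)
    np.fill_diagonal(d2, np.inf)
    nn = d2.argmin(axis=1)
    return 0.5 * ((z[nn] - z) ** 2).mean()

rng = np.random.default_rng(1)
x = rng.uniform(-20, 20, 500)
y = rng.uniform(0, 3, 500)
z = np.sqrt(x ** 2 + y ** 2)            # noise-free conical surface, M = 500

results = {}
for mask, X in [("11", np.column_stack([x, y])),
                ("10", x[:, None]),
                ("01", y[:, None])]:
    results[mask] = first_nn_gamma(X, z)
    print(mask, results[mask])          # pseudo-noise grows as inputs are omitted
```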
An Example: The Mackey-Glass Time Series

Var(r) estimate   Embedding mask
0.00033           111100
0.00044           101101
0.00048           111101
0.00056           111110
0.00070           101111
0.00075           101110
Gamma Scatter Plot for Embedding 111100
[Scatter plot: gamma (0 to 0.14) against delta (0 to 0.35)]
Model Construction
• Neural network (4-8-8-1) using input mask 111100
• Trained using the BFGS algorithm on 800 samples, down to the MSE predicted by the Gamma Test (0.00032)
• MSE on 100 unseen samples: 0.00040
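A sketch of this stage, with some stand-ins of ours: scikit-learn's MLPRegressor with its L-BFGS solver in place of BFGS, a re-generated Euler-integrated Mackey-Glass series as data, and a 700/94 train/test split:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Regenerate a Mackey-Glass series (Euler, dt = 0.1, constant history).
tau, dt, lag = 30, 0.1, 300
x = np.full(lag + 80001, 1.2)
for n in range(lag, lag + 80000):
    x[n + 1] = x[n] + dt * (0.2 * x[n - lag] / (1 + x[n - lag] ** 10) - 0.1 * x[n])
s = x[lag::100][1:]                      # 800 points at t = 10, 20, ..., 8000

# Mask 111100: predict s(t) from s(t-6), s(t-5), s(t-4), s(t-3).
X = np.column_stack([s[:-6], s[1:-5], s[2:-4], s[3:-3]])
y = s[6:]

net = MLPRegressor(hidden_layer_sizes=(8, 8), solver="lbfgs",
                   max_iter=2000, random_state=0)
net.fit(X[:700], y[:700])
mse = np.mean((net.predict(X[700:]) - y[700:]) ** 2)
print(mse)                               # small: accurate one-step prediction
```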
Iterating the Network Model
[Diagram: the network's output x(t) is fed back through time-delay lines (delays 3, 4, 5 and 6) to supply the inputs for the next iteration]
Phase-Space Comparison
[Two attractor plots, side by side: the original time series and the neural network model]
Control via Delayed Feedback
[Diagram: the iterated network of the previous slide, with delayed feedback k(x(t−6−τ) − x(t−6)) added to the delay-6 line and switched on by a stimulus; here k = 5 and τ = 0.414144]
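Delayed feedback control of this kind (Pyragas control) can be illustrated on any chaotic map. The sketch below is a toy of ours: the logistic map with r = 3.9, feedback k(x(t−1) − x(t)) with k = −0.6, and a period-1 orbit, standing in for the network with its delay-6 feedback. The key property is that the feedback term vanishes on the stabilised orbit, so the orbit is a genuine unstable periodic orbit of the uncontrolled system.

```python
# Toy delayed feedback control on the logistic map.
r, k = 3.9, -0.6
f = lambda x: r * x * (1 - x)
x_star = 1 - 1 / r                  # unstable fixed point (period-1 orbit)

# With control u(t) = k * (x(t-1) - x(t)) the orbit is stabilised:
x_prev = x = x_star + 0.01
for _ in range(60):
    x_prev, x = x, f(x) + k * (x_prev - x)
controlled_dev = abs(x - x_star)
print(controlled_dev)               # tiny: locked onto the periodic orbit

# Without control the same small deviation is amplified by the chaos:
y, uncontrolled_dev = x_star + 0.01, 0.0
for _ in range(20):
    y = f(y)
    uncontrolled_dev = max(uncontrolled_dev, abs(y - x_star))
print(uncontrolled_dev)             # order one: the orbit escapes
```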
Controlling the Neural Network
With no stimulus the stabilised orbit depends on the initial conditions.
Varying the Stimulus
The same stimulus gives the same periodic behaviour.
A Generic Model for a Chaotic Neural Network
[Diagram: a feedforward neural network with a hidden layer, iterated by feeding its output x(n) back through delay lines 1, 2, 3, ..., d. A time-delayed control feedback module adds k(x(n−i−τᵢ) − x(n−i)) to each delayed line i, so with no external stimulus the controlled neural inputs are x(n−i) + k(x(n−i−τᵢ) − x(n−i)). An external stimulus supplies the switch signal for the control feedback; observation points (including a key observation point) are taken on the delay lines]
Synchronisation Method
Results of Synchronisation
The graph of the maximum Lyapunov exponent of the difference (with time delay) against k, averaged over 10 sets of initial conditions.
Two Mackey-Glass neural networks synchronised with k = 1.1.
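As a toy illustration of synchronisation by coupling: the logistic map and the drive-response coupling y → (1−k)f(y) + kf(x) with k = 0.7 are stand-ins of ours for the two coupled Mackey-Glass networks above, where k = 1.1.

```python
# Two chaotic logistic maps: x drives y through a coupling of strength k.
f = lambda u: 4 * u * (1 - u)
k = 0.7
x, y = 0.3, 0.9                      # different initial conditions
for _ in range(200):
    x, y = f(x), (1 - k) * f(y) + k * f(x)
print(abs(x - y))                    # nearly zero: the orbits have synchronised
```

The difference contracts on average by (1−k)·e^λ per step, where λ = ln 2 is the Lyapunov exponent of the map, so any k > 0.5 synchronises this pair.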
Conclusions
• Given a chaotic time series, we can use the Gamma Test to determine an appropriate embedding dimension and then a suitable irregular embedding
• We then train a feedforward network, using the irregular embedding to determine the number of inputs, so that the output gives an accurate one-step prediction
• By iterating the network with the appropriate time delays we can accurately reproduce the original dynamics
The significance of time delayed feedback
• Finally, by adding a time-delayed feedback (activated in the presence of a stimulus), we can stabilise the iterative network onto an unstable periodic orbit
• The particular orbit stabilised depends on the applied stimulus
• The entire artificial neural system accurately reproduces the phenomenon described by Freeman
Synchronisation
• Results shown by Skarda and Freeman [Skarda 1987] support the hypothesis that neural dynamics are heavily dependent on chaotic activity
• Nowadays it is believed that synchronisation plays a crucial role in information processing in living organisms, and could lead to important applications in speech and image processing [Ogorzallek 1993]
• We have shown that time delayed feedback also offers a biologically plausible mechanism for neural synchronisation
SBRN2000 Group Picture