Page 1: Feedback Particle Filter and its Applications to Neuroscience

Feedback Particle Filter and its Applications to Neuroscience

3rd IFAC Workshop on Distributed Estimation and Control in Networked Systems

Santa Barbara, Sep 14-15, 2012

Prashant G. Mehta

Department of Mechanical Science and Engineering and the Coordinated Science Laboratory

University of Illinois at Urbana-Champaign

Research supported by NSF and AFOSR


Page 7: Feedback Particle Filter and its Applications to Neuroscience

Background

Bayesian Inference/Filtering
Mathematics of prediction: Bayes' rule

Signal (hidden): X, X ∼ P(X) (prior, known)

Observation: Y (known)

Observation model: P(Y|X) (known)

Problem: What is X?

Solution

Bayes' rule:  P(X|Y) ∝ P(Y|X) P(X)
              (posterior ∝ likelihood × prior)

This talk is about implementing Bayes' rule in dynamic, nonlinear, non-Gaussian settings!
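As a concrete illustration of the rule above, here is a minimal numerical sketch in Python; the prior and likelihood values are illustrative assumptions, not from the talk:

```python
import numpy as np

# Minimal sketch (illustrative numbers, not from the talk): Bayes' rule for a
# hidden variable X taking three values, updated by one binary observation Y.
prior = np.array([0.5, 0.3, 0.2])           # P(X), known
likelihood_y1 = np.array([0.9, 0.5, 0.1])   # P(Y = 1 | X), known observation model

# Observe Y = 1: posterior ∝ likelihood × prior, then normalize.
posterior = likelihood_y1 * prior
posterior /= posterior.sum()
print(posterior)   # ≈ [0.726 0.242 0.032]: mass shifts toward the X that best explains Y
```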


Page 11: Feedback Particle Filter and its Applications to Neuroscience

Background

Applications
Engineering applications

Filtering is important to:

Air moving target indicator (AMTI) systems, space situational awareness

Remote sensing and surveillance: air traffic management, weather surveillance, geophysical surveys

Autonomous navigation & robotics: simultaneous localization and map building (SLAM)


Page 13: Feedback Particle Filter and its Applications to Neuroscience

Background

Applications in Biology
Bayesian model of sensory signal processing

Page 14: Feedback Particle Filter and its Applications to Neuroscience

Part I

Theory: Nonlinear Filtering


Page 20: Feedback Particle Filter and its Applications to Neuroscience

Nonlinear Filtering

Nonlinear Filtering
Mathematical Problem

Signal model: dX_t = a(X_t) dt + dB_t,  X_0 ∼ p*_0(·)

Observation model: dZ_t = h(X_t) dt + dW_t

Problem: What is X_t, given the observations up to time t (denoted Z_t)?

Answer in terms of the posterior: P(X_t | Z_t) =: p*(x, t).

The posterior is an information state:

P(X_t ∈ A | Z_t) = ∫_A p*(x, t) dx,      E(X_t | Z_t) = ∫_R x p*(x, t) dx

Page 21: Feedback Particle Filter and its Applications to Neuroscience

Nonlinear Filtering

Pretty Formulae in Mathematics
More often than not, these are simply stated

Euler's identity: e^{iπ} = −1

Euler's formula (polyhedra): v − e + f = 2

Pythagorean theorem: x² + y² = z²

Kenneth Chang, "What Makes an Equation Beautiful," The New York Times, October 24, 2004


Page 29: Feedback Particle Filter and its Applications to Neuroscience

Nonlinear Filtering

Kalman filter
Solution in linear Gaussian settings

dX_t = α X_t dt + dB_t        (1)
dZ_t = γ X_t dt + dW_t        (2)

Kalman filter: p* = N(X̂_t, Σ_t)

dX̂_t = α X̂_t dt + K (dZ_t − γ X̂_t dt)    [second term: update]

[Block diagram: Kalman filter feedback loop]

Observation:   dZ_t = γ X_t dt + dW_t
Prediction:    dẐ_t = γ X̂_t dt
Innov. error:  dI_t = dZ_t − dẐ_t = dZ_t − γ X̂_t dt
Control:       dU_t = K dI_t
Gain:          Kalman gain

R. E. Kalman, Trans. ASME, Ser. D: J. Basic Eng., 1961


Page 32: Feedback Particle Filter and its Applications to Neuroscience

Nonlinear Filtering

Kalman filter

dX̂_t = α X̂_t dt + K (dZ_t − γ X̂_t dt)
        [prediction]   [update]

[Block diagram: Kalman filter feedback loop]

This illustrates the key features of feedback control:

1 Use error to obtain control (dU_t = K dI_t)

2 Negative gain feedback serves to reduce error (K = (γ/σ²_W) Σ_t, where γ/σ²_W is the SNR)

Simple enough to be included in the first undergraduate course on control
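The update equations on this slide translate directly into a simulation. Below is a minimal Euler-Maruyama sketch of the scalar linear-Gaussian model (1)-(2) and its Kalman-Bucy filter; the parameter values are illustrative assumptions, and the gain and variance follow K = (γ/σ²_W) Σ_t together with the Riccati equation quoted later on Page 55:

```python
import numpy as np

# Minimal sketch (illustrative parameters; Euler-Maruyama discretization) of the
# scalar linear-Gaussian model (1)-(2) and its Kalman-Bucy filter.
rng = np.random.default_rng(0)
alpha, gamma = -0.5, 1.0         # signal and observation coefficients
sigma_B, sigma_W = 1.0, 0.3      # process / observation noise intensities
dt, n_steps = 1e-3, 5000

X = 1.0                          # hidden state
x_hat, Sigma = 0.0, 1.0          # filter mean and variance
for _ in range(n_steps):
    dZ = gamma * X * dt + sigma_W * np.sqrt(dt) * rng.standard_normal()
    X += alpha * X * dt + sigma_B * np.sqrt(dt) * rng.standard_normal()

    K = gamma * Sigma / sigma_W**2                                # gain = SNR × variance
    x_hat += alpha * x_hat * dt + K * (dZ - gamma * x_hat * dt)   # prediction + update
    Sigma += (2 * alpha * Sigma + sigma_B**2
              - gamma**2 * Sigma**2 / sigma_W**2) * dt            # Riccati equation

print(f"true X = {X:.3f}, estimate = {x_hat:.3f}, variance = {Sigma:.4f}")
```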


Page 35: Feedback Particle Filter and its Applications to Neuroscience

Nonlinear Filtering

Filtering Problem
Nonlinear Model: Kushner-Stratonovich PDE

Signal & observations:
dX_t = a(X_t) dt + σ_B dB_t        (1)
dZ_t = h(X_t) dt + σ_W dW_t        (2)

The posterior distribution p* is a solution of a stochastic PDE:

dp* = L†(p*) dt + (1/σ²_W) (h − h̄)(dZ_t − h̄ dt) p*

where  h̄ = E[h(X_t) | Z_t] = ∫ h(x) p*(x, t) dx

L†(p*) = −∂(p* a(x))/∂x + (σ²_B/2) ∂²p*/∂x²

No closed-form solution in general. Closure problem.

R. L. Stratonovich, SIAM Theory Probab. Appl., 1960; H. J. Kushner, SIAM J. Control, 1964
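Although no closed-form solution exists in general, the PDE can be integrated numerically in one dimension. Below is a minimal finite-difference sketch (the drift a, observation function h, grid, and step sizes are illustrative assumptions) that propagates p* with the forward operator L† and applies the multiplicative update term at each step:

```python
import numpy as np

# Minimal sketch (illustrative model and discretization): explicit finite-difference
# integration of the 1-D Kushner-Stratonovich SPDE, driven by a synthetic observation path.
rng = np.random.default_rng(1)
a = lambda x: -x                # signal drift
h = lambda x: x                 # observation function
sigma_B, sigma_W = 0.5, 0.4
x = np.linspace(-4, 4, 401)
dx = x[1] - x[0]
dt, n_steps = 1e-4, 2000

p = np.exp(-x**2 / 2)
p /= p.sum() * dx               # normalized Gaussian prior
X = 1.0                         # hidden state generating synthetic observations
for _ in range(n_steps):
    dZ = h(X) * dt + sigma_W * np.sqrt(dt) * rng.standard_normal()
    X += a(X) * dt + sigma_B * np.sqrt(dt) * rng.standard_normal()

    # L†(p) = −∂(a p)/∂x + (σ_B²/2) ∂²p/∂x², by central differences
    Ldag = (-np.gradient(a(x) * p, dx)
            + 0.5 * sigma_B**2 * np.gradient(np.gradient(p, dx), dx))
    h_bar = (h(x) * p).sum() * dx
    p = p + Ldag * dt + (h(x) - h_bar) * (dZ - h_bar * dt) * p / sigma_W**2
    p = np.clip(p, 0.0, None)
    p /= p.sum() * dx           # re-normalize to keep a valid density

print("posterior mean ≈", (x * p).sum() * dx, " true state =", X)
```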


Page 41: Feedback Particle Filter and its Applications to Neuroscience

Nonlinear Filtering

Particle Filter
An algorithm to solve the nonlinear filtering problem

Approximate the posterior in terms of particles:

p*(x, t) ≈ (1/N) Σ_{i=1}^{N} δ_{X^i_t}(x)

Algorithm outline

1 Initialization at time 0: X^i_0 ∼ p*_0(·)

2 At each discrete time step:
  Importance sampling (Bayes update step)
  Resampling (for variance reduction)
  (e.g. dZ_t = X_t dt + small noise)

Innovation error, feedback? And most importantly, is this pretty?
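For contrast with the feedback approach developed next, here is a minimal bootstrap particle filter sketch for a scalar version of the model above; the time discretization, likelihood weighting, and multinomial resampling are standard choices assumed here, not taken from the talk:

```python
import numpy as np

# Minimal sketch of a bootstrap particle filter for dX = a(X)dt + σ_B dB,
# dZ = h(X)dt + σ_W dW (illustrative model and parameters).
rng = np.random.default_rng(2)
a = lambda x: -x
h = lambda x: x
sigma_B, sigma_W, dt = 0.5, 0.4, 1e-2
N, n_steps = 500, 500

X = 1.0                                   # hidden state
particles = rng.standard_normal(N)        # X^i_0 ~ p*_0 = N(0, 1)
for _ in range(n_steps):
    dZ = h(X) * dt + sigma_W * np.sqrt(dt) * rng.standard_normal()
    X += a(X) * dt + sigma_B * np.sqrt(dt) * rng.standard_normal()

    # Prediction: propagate particles through the signal model
    particles += a(particles) * dt + sigma_B * np.sqrt(dt) * rng.standard_normal(N)

    # Importance sampling (Bayes update): weight by likelihood of the increment dZ
    w = np.exp(-(dZ - h(particles) * dt) ** 2 / (2 * sigma_W**2 * dt))
    w /= w.sum()

    # Resampling (for variance reduction): multinomial
    particles = particles[rng.choice(N, size=N, p=w)]

print("posterior mean ≈", particles.mean(), " true state =", X)
```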

Page 42: Feedback Particle Filter and its Applications to Neuroscience

Control-Oriented Approach to Particle Filtering

Research goal: Bringing pretty back!

[Figure: mean-square error (MSE) vs. N (number of particles), log-log scale, comparing the bootstrap particle filter (BPF) and the feedback particle filter (FPF)]


Page 45: Feedback Particle Filter and its Applications to Neuroscience

Control-Oriented Approach to Particle Filtering

Feedback Particle Filter

Signal & observations:
dX_t = a(X_t) dt + σ_B dB_t        (1)
dZ_t = h(X_t) dt + σ_W dW_t        (2)

Controlled system (N particles):

dX^i_t = a(X^i_t) dt + σ_B dB^i_t + dU^i_t,   i = 1, ..., N        (3)
                                   [mean-field control]

{B^i_t}_{i=1}^N are independent standard Wiener processes.

Objective: Choose the control U^i_t, as a function of the history {Z_s, X^i_s : 0 ≤ s ≤ t}, such that the two posteriors coincide:

∫_{x∈A} p*(x, t) dx = P{X_t ∈ A | Z_t}

∫_{x∈A} p(x, t) dx = P{X^i_t ∈ A | Z_t}

Motivation: Work of Huang, Caines and Malhame on mean-field games (IEEE TAC 2007)


Page 47: Feedback Particle Filter and its Applications to Neuroscience

Control-Oriented Approach to Particle Filtering

FPF Solution
Linear model

Controlled system: for i = 1, ..., N:

dX^i_t = α X^i_t dt + σ_B dB^i_t + K [ dZ_t − γ (X^i_t + μ_t)/2 dt ]        (3)
         [prediction]              [update (via mean-field control)]

[Block diagram: feedback particle filter loop]


Page 52: Feedback Particle Filter and its Applications to Neuroscience

Control-Oriented Approach to Particle Filtering

FPF Update Steps
Linear model

                   Feedback particle filter               Kalman filter
Observation:       dZ_t = γ X_t dt + σ_W dW_t             dZ_t = γ X_t dt + σ_W dW_t
Prediction:        dẐ^i_t = (γ/2)(X^i_t + μ_t) dt         dẐ_t = γ X̂_t dt
Innovation error:  dI^i_t = dZ_t − dẐ^i_t                 dI_t = dZ_t − dẐ_t
                          = dZ_t − γ (X^i_t + μ_t)/2 dt        = dZ_t − γ X̂_t dt
Control:           dU^i_t = K dI^i_t                      dU_t = K dI_t
Gain:              K is the Kalman gain                   K is the Kalman gain


Page 55: Feedback Particle Filter and its Applications to Neuroscience

Control-Oriented Approach to Particle Filtering

Linear Feedback Particle Filter
Mean-field model is the Kalman filter

Feedback particle filter:

dX^i_t = α X^i_t dt + σ_B dB^i_t + K [ dZ_t − (γ/2) ( X^i_t + (1/N) Σ_{j=1}^N X^j_t ) dt ]        (3)

X^i_0 ∼ p*(x, 0) = N(μ(0), Σ(0))

Mean-field model: the Kalman filter! Let p denote the conditional distribution of X^i_t given Z_t. Then p = N(μ_t, Σ_t), where

dμ_t = α μ_t dt + (γ Σ_t / σ²_W) (dZ_t − γ μ_t dt)

dΣ_t = ( 2α Σ_t + σ²_B − γ² Σ²_t / σ²_W ) dt

As N → ∞, the empirical distribution of the particles approximates the posterior p*
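The linear FPF (3) is easy to simulate. Here is a minimal sketch (illustrative parameters; the gain uses the empirical particle variance in place of Σ_t, a common mean-field approximation):

```python
import numpy as np

# Minimal sketch (illustrative parameters) of the linear feedback particle
# filter (3). The gain K = γ Σ / σ_W² uses the empirical particle variance.
rng = np.random.default_rng(3)
alpha, gamma = -0.5, 1.0
sigma_B, sigma_W = 1.0, 0.3
dt, n_steps, N = 1e-3, 5000, 200

X = 1.0
particles = rng.standard_normal(N)        # X^i_0 ~ N(0, 1)
for _ in range(n_steps):
    dZ = gamma * X * dt + sigma_W * np.sqrt(dt) * rng.standard_normal()
    X += alpha * X * dt + sigma_B * np.sqrt(dt) * rng.standard_normal()

    mu = particles.mean()
    K = gamma * particles.var() / sigma_W**2              # empirical Kalman gain
    dI = dZ - gamma * (particles + mu) / 2 * dt           # per-particle innovation
    particles += (alpha * particles * dt
                  + sigma_B * np.sqrt(dt) * rng.standard_normal(N)
                  + K * dI)                               # mean-field control

print("particle mean ≈", particles.mean(), " true state =", X)
```

Note there is no importance sampling or resampling step: every particle carries equal weight and is moved by feedback instead.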

Page 56: Feedback Particle Filter and its Applications to Neuroscience

Control-Oriented Approach to Particle Filtering

Variance Reduction
Filtering for a simple linear model

Mean-square error:  (1/T) ∫₀ᵀ ( (Σ^(N)_t − Σ_t) / Σ_t )² dt

[Figure: MSE vs. N (number of particles), log-log scale, comparing the bootstrap particle filter (BPF) and the feedback particle filter (FPF)]


Page 60: Feedback Particle Filter and its Applications to Neuroscience

Feedback Particle Filter

Methodology: Variational Formulation
How do we derive the feedback particle filter?

Time-stepping procedure:

Signal, observation process:
dX_t = a(X_t) dt + σ_B dB_t
Z_{t_n} = h(X_{t_n}) + W_{t_n}

Feedback particle filter:
Filter:  dX^i_t = a(X^i_t) dt + σ_B dB^i_t
Control: X^i_{t_n} = X^i_{t_n⁻} + u(X^i_{t_n⁻})   [control]

Conditional distributions:
p*_n(·): conditional pdf of X_t given Z_t
p_n(·; u): conditional pdf of X^i_t given Z_t

Variational problem:  min_u D( p_n(u) ‖ p*_n )

As Δt → 0:
The optimal control, u = u°, yields the feedback particle filter.
The nonlinear filter is the gradient flow, and u° is the optimal transport.


Page 62: Feedback Particle Filter and its Applications to Neuroscience

Feedback Particle Filter

Feedback Particle Filter
Filtering in nonlinear non-Gaussian settings

Signal model: dX_t = a(X_t) dt + dB_t,  X_0 ∼ p*_0(·)
Observation model: dZ_t = h(X_t) dt + dW_t

FPF:  dX^i_t = a(X^i_t) dt + dB^i_t + K(X^i_t) ∘ dI^i_t
                                      [update]

Innovations:  dI^i_t := dZ_t − (1/2)( h(X^i_t) + h̄ ) dt,  with conditional mean h̄ = ⟨p, h⟩.


Page 67: Feedback Particle Filter and its Applications to Neuroscience

Feedback Particle Filter

Update Step
How does the feedback particle filter implement Bayes' rule?

                   Feedback particle filter                        Linear case
Observation:       dZ_t = h(X_t) dt + dW_t                         dZ_t = γ X_t dt + dW_t
Prediction:        dẐ^i_t = (1/2)( h(X^i_t) + h̄ ) dt               dẐ^i_t = γ (X^i_t + μ_t)/2 dt
                   with h̄ = (1/N) Σ_{i=1}^N h(X^i_t)
Innov. error:      dI^i_t = dZ_t − dẐ^i_t                          dI^i_t = dZ_t − dẐ^i_t
                          = dZ_t − (1/2)( h(X^i_t) + h̄ ) dt              = dZ_t − γ (X^i_t + μ_t)/2 dt
Control:           dU^i_t = K(X^i_t) ∘ dI^i_t                      dU^i_t = K(X^i_t) ∘ dI^i_t
Gain:              K solves a linear BVP                           K is the Kalman gain


Page 76: Feedback Particle Filter and its Applications to Neuroscience

Feedback Particle Filter

Boundary Value Problem
Euler-Lagrange equation for the variational problem

A multi-dimensional boundary value problem,

∇ · (K p) = −(h − h̄) p,

is solved at each time-step.

[Figures: gain function K in the linear case and in a nonlinear case]
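In one dimension the BVP can be integrated directly: imposing K p → 0 in the tails and integrating once gives K(x) = −(1/p(x)) ∫_{−∞}^{x} (h(y) − h̄) p(y) dy. Here is a minimal grid-based sketch of that formula (the density, observation function, and parameter values in the sanity check are illustrative assumptions):

```python
import numpy as np

# Minimal sketch (1-D state, grid quadrature): gain from the boundary value
# problem d/dx (K p) = −(h − h̄) p. Integrating once, with K p → 0 as x → −∞:
#   K(x) = −(1/p(x)) ∫_{−∞}^{x} (h(y) − h̄) p(y) dy
def fpf_gain(x, p, h):
    dx = x[1] - x[0]
    p = p / (p.sum() * dx)                  # normalize the density
    hx = h(x)
    h_bar = (hx * p).sum() * dx             # conditional mean of h
    cumint = np.cumsum((hx - h_bar) * p) * dx
    return -cumint / np.maximum(p, 1e-12)   # guard against near-zero tails

# Sanity check against the linear-Gaussian case: p = N(0, Σ) and h(x) = γx
# should give the constant Kalman gain K = γΣ (with σ_W = 1 here).
x = np.linspace(-6, 6, 2001)
Sigma, gamma = 0.7, 1.3
p = np.exp(-x**2 / (2 * Sigma))
K = fpf_gain(x, p, lambda x: gamma * x)
print("K at x = 0:", K[len(x) // 2], " expected:", gamma * Sigma)
```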


Page 78: Feedback Particle Filter and its Applications to Neuroscience

Feedback Particle Filter

Consistency
Feedback particle filter is exact

p*: conditional pdf of X_t given Z_t,

dp* = L†(p*) dt + (h − h̄)(σ²_W)⁻¹ (dZ_t − h̄ dt) p*

p: conditional pdf of X^i_t given Z_t,

dp = L†(p) dt − ∂/∂x (K p) dZ_t − ∂/∂x (u p) dt + (σ²_W/2) ∂²/∂x² (p K²) dt

Consistency Theorem

Consider the two evolution equations for p and p*. Provided the FPF is initialized with p(x, 0) = p*(x, 0), then

p(x, t) = p*(x, t) for all t ≥ 0


Page 84: Feedback Particle Filter and its Applications to Neuroscience

Feedback Particle Filter

Kalman Filter

[Block diagram: Kalman filter loop]

Innovation error:  dI_t = dZ_t − h(X̂_t) dt
Gain function:     K = Kalman gain

Feedback Particle Filter

[Block diagram: feedback particle filter loop]

Innovation error:  dI^i_t = dZ_t − (1/2)( h(X^i_t) + h̄_t ) dt
Gain function:     K is the solution of a linear BVP.

Page 85: Feedback Particle Filter and its Applications to Neuroscience

Part II

Neural Rhythms, Bayesian Inference


Page 88: Feedback Particle Filter and its Applications to Neuroscience

Oscillators in Biology

Normal Form Reduction
Derivation of oscillator model

C dV/dt = −g_T · m²_∞(V) · h · (V − E_T) − g_h · r · (V − E_h) − ...

dh/dt = ( h_∞(V) − h ) / τ_h(V)

dr/dt = ( r_∞(V) − r ) / τ_r(V)

Normal form reduction →   dθ_i(t) = ω_i dt + u_i(t) · Φ(θ_i(t)) dt

J. Guckenheimer, J. Math. Biol., 1975; J. Moehlis et al., Neural Computation, 2004

Page 89: Feedback Particle Filter and its Applications to Neuroscience

Oscillators in Biology

Collective Dynamics of a Large Number of Oscillators
Synchrony, neural rhythms

Page 90: Feedback Particle Filter and its Applications to Neuroscience

Oscillators in Biology

Functional Role of Neural Rhythms
Is synchronization useful? Does it have a functional role?

Books/review papers:
Buzsaki, Destexhe, Ermentrout, Izhikevich, Kopell, Traub and Whittington (2009), Llinas and Ribary (2001), Pareti and Palma (2004), Sejnowski and Paulsen (2006), Singer (1993)...

Computations: Computing with intrinsic network states
Destexhe and Contreras (2006); Izhikevich (2006); Zhang and Ballard (2001).

Synaptic plasticity: Neurons that fire together wire together

And several other hypotheses:
Communication and information flow (Laughlin and Sejnowski); binding by synchrony (Singer); memory formation (Jutras and Fries); probabilistic decision making (Wang); stimulus competition and attention selection (Kopell); sleep/wakefulness/disease (Steriade)


Page 94: Feedback Particle Filter and its Applications to Neuroscience

Oscillators in Biology

Prediction
Brain as a reality emulator

"[Prediction] is the primary function of the neocortex, and the foundation of intelligence. If we want to understand how your brain works, and how to build intelligent machines, we must understand the nature of these predictions and how the cortex makes them."

"The capacity to predict the outcome of future events – critical to successful movement – is, most likely, the ultimate and most common of all brain functions."


Page 96: Feedback Particle Filter and its Applications to Neuroscience

Oscillators in Biology

Filtering in Brain?
Bayesian model of sensory signal processing

Theory:
Lee and Mumford, Hierarchical Bayesian inference framework (2003)
Rao; Rao and Ballard; Rao and Sejnowski. Predictive coding framework (2002)
Dayan, Hinton, Neal and Zemel. The Helmholtz machine (1995)
Ma, Beck, Latham and Pouget. Probabilistic population codes (2006)
Kording and Wolpert. Bayesian decision theory (2006)

And others: see
Doya, Ishii, Pouget and Rao, Bayesian Brain, MIT Press (2007)
Rao, Olshausen & Lewicki, Probabilistic Models of the Brain, MIT Press (2002)


Page 98: Feedback Particle Filter and its Applications to Neuroscience

Oscillators in Biology

Filtering in Brain?
Bayesian model of sensory signal processing

Experiments (see reviews):
Gold & Shadlen, The neural basis of decision making, Ann. Rev. of Neurosci. (2007)
R. T. Knight, Neural networks debunk phrenology, Science (2007)

Such theories naturally feed into computer vision and, more generally, into how to make computers "intelligent"


Page 100: Feedback Particle Filter and its Applications to Neuroscience

Oscillators in Biology

Bayesian Inference in Neuroscience
Lee and Mumford's hierarchical Bayesian inference framework

[Diagram: a cascade of stages, each applying Bayes' rule: ... Bayes' rule → Bayes' rule → Bayes' rule ...]

[Diagram: the same cascade with each stage implemented as a particle filter: ... Part. Filter → Part. Filter → Part. Filter ...]

Similar ideas also appear in:

1 Dayan, Hinton, Neal and Zemel. The Helmholtz machine (1995)

2 Lewicki and Sejnowski. Bayesian unsupervised learning (1995)

3 Rao and Ballard; Rao and Sejnowski. Predictive coding framework (1999; 2002)

Page 101: Feedback Particle Filter and its Applications to Neuroscience

Part III

Application: Filtering with Rhythms


Page 108: Feedback Particle Filter and its Applications to Neuroscience

Gait Cycle
Biological Rhythm

[Figure: phases of the gait cycle]


Page 110: Feedback Particle Filter and its Applications to Neuroscience

Application: Ankle-foot Orthoses
Estimation of gait cycle using sensor measurements

Ankle-foot orthoses (AFOs): for lower-limb neuromuscular impairments.

Provides dorsiflexor (toe lift) and plantarflexor (toe push) torque assistance

Sensors: heel, toe, and ankle joint

AFO system components: power supply, valves, actuator, sensors. Compressed CO2 drives the actuator; solenoid valves control the flow of CO2 to the actuator.

Acknowledgement: Professor Liz Hsiao-Wecksler for sharing the AFO device picture and sensor data.


Page 118: Feedback Particle Filter and its Applications to Neuroscience

Gait Cycle
Signal model

[Figure: gait cycle, stance phase and swing phase]

Model (noisy oscillator):

dθ_t = ω₀ dt + noise        (ω₀: natural frequency)


Page 123: Feedback Particle Filter and its Applications to Neuroscience

Problem: Estimate Gait Cycle θ_t
Sensor model

Observation model: dZ_t = h(θ_t) dt + noise

Problem: What is θ_t given noisy observations?


Page 129: Feedback Particle Filter and its Applications to Neuroscience

Solution: Particle Filter
Algorithm to approximate the posterior distribution

"Large number of oscillators"

Posterior distribution:
P(φ₁ < θ_t < φ₂ | sensor readings) = fraction of the θ^i_t in the interval (φ₁, φ₂)

Circuit:
dθ^i_t = ω_i dt + noise_i + dU^i_t,   i = 1, ..., N
(ω_i: natural frequency of the i-th oscillator;  dU^i_t: mean-field control)

Feedback particle filter: design the control law U^i_t


Filtering for Oscillators

Signal & Observations dθt = ω dt + dBt mod 2π

dZt = h(θt)dt + dWt

− π 0 π

Particle evolution,

dθit = ωi dt + dB i

t + K(θit )◦ [dZt −

1

2(h(θ

it ) + h)dt] mod 2π, i = 1, ...,N.

where ωi is sampled from a distribution.

Feedback Particle Filter

-

+

38
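A minimal sketch of this oscillator filter follows. Assumptions (not from the talk): h(θ) = cos θ, illustrative parameters, a plain Euler step (the Stratonovich correction is omitted), and a Galerkin approximation of the gain K in the first Fourier modes, one standard way to solve the BVP on the circle:

```python
import numpy as np

# Minimal sketch (illustrative assumptions) of the oscillator feedback particle
# filter above, for phase estimation with h(θ) = cos θ.
rng = np.random.default_rng(4)
h = np.cos
omega0, sigma_B, sigma_W = 2 * np.pi, 0.3, 0.2      # ~1 Hz rhythm
dt, n_steps, N = 1e-3, 3000, 300

theta = 0.0                                          # true phase
parts = rng.uniform(0, 2 * np.pi, N)                 # particle phases
omegas = omega0 * (1 + 0.05 * rng.standard_normal(N))   # heterogeneous frequencies

def galerkin_gain(parts, hvals, h_bar):
    # Weak form of ∇·(Kp) = −(h − h̄)p with basis {cos θ, sin θ}:
    # K(θ) = −κ₁ sin θ + κ₂ cos θ, with A κ = b estimated by particle averages.
    dpsi = np.stack([-np.sin(parts), np.cos(parts)])   # basis derivatives
    psi = np.stack([np.cos(parts), np.sin(parts)])
    A = dpsi @ dpsi.T / len(parts)
    b = psi @ (hvals - h_bar) / len(parts)
    k1, k2 = np.linalg.solve(A, b)
    return -k1 * np.sin(parts) + k2 * np.cos(parts)

for _ in range(n_steps):
    dZ = h(theta) * dt + sigma_W * np.sqrt(dt) * rng.standard_normal()
    theta = (theta + omega0 * dt
             + sigma_B * np.sqrt(dt) * rng.standard_normal()) % (2 * np.pi)

    hvals = h(parts)
    h_bar = hvals.mean()
    K = galerkin_gain(parts, hvals, h_bar)
    dI = dZ - 0.5 * (hvals + h_bar) * dt               # per-particle innovation
    parts = (parts + omegas * dt
             + sigma_B * np.sqrt(dt) * rng.standard_normal(N)
             + K * dI) % (2 * np.pi)                   # Euler step, wrapped to circle

# Circular mean of the particles as the phase estimate
est = np.angle(np.exp(1j * parts).mean()) % (2 * np.pi)
print(f"true phase = {theta:.2f}, estimate = {est:.2f}")
```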

Page 133: Feedback Particle Filter and its Applications to Neuroscience

Simulation Results
Solution of the estimation-of-gait-cycle problem

[Movie: simulation of the gait cycle estimate; not included in the transcript]


Page 137: Feedback Particle Filter and its Applications to Neuroscience

Filtering of Biological Rhythms with Brain Rhythms
Connection to Lee and Mumford's hierarchical Bayesian inference framework

[Diagram: hierarchical cascade of particle filters with prior and noisy input: ... Part. Filter → Part. Filter → Part. Filter ...]

[Diagram: the gait application in the same framework: noisy measurements of the rhythmic movement enter "Mumford's box with neurons," which under normal form reduction becomes "Mumford's box with oscillators," producing the estimate (with a prior)]

Page 138: Feedback Particle Filter and its Applications to Neuroscience

Acknowledgement

Adam Tilton, Tao Yang, Huibing Yin, Liz Hsiao-Wecksler, Sean Meyn

1 T. Yang, P. G. Mehta, and S. P. Meyn. Feedback particle filter with mean-field coupling. In Procs. of IEEE Conf. on Decision and Control, December 2011.

2 T. Yang, P. G. Mehta, and S. P. Meyn. A mean-field control-oriented approach to particle filtering. In Procs. of American Control Conference, June 2011.

3 A. Tilton, E. Hsiao-Wecksler, and P. G. Mehta. Filtering with rhythms: Application to estimation of gait cycle. In Procs. of American Control Conference, 2012.

4 T. Yang, G. Huang, and P. G. Mehta. Joint probabilistic data association-feedback particle filter with applications to multiple target tracking. In Procs. of American Control Conference, 2012.

5 A. Tilton, T. Yang, H. Yin, and P. G. Mehta. Feedback particle filter-based multiple target tracking using bearing-only measurements. In Procs. of Information Fusion, 2012.

6 T. Yang, R. Laugesen, P. G. Mehta, and S. P. Meyn. Multivariable feedback particle filter. To appear in IEEE Conf. on Decision and Control, 2012.

7 T. Yang, P. G. Mehta, and S. P. Meyn. Feedback particle filter. Conditionally accepted to IEEE Transactions on Automatic Control.

