Transcript
Page 1

A theory of neural dimensionality, dynamics and measurement

Surya Ganguli
Dept. of Applied Physics and, by courtesy, Neurobiology and Electrical Engineering
Stanford University

In collaboration with the Shenoy Lab: Peiran Gao, Eric Trautmann

Page 2

An exponential Moore's Law for the number of recorded neurons

Multielectrode recordings allow us to record from 10² to 10³ neurons (Stevenson & Kording, 2011). Mammalian circuits controlling complex behaviors contain 10⁶ to 10⁹ neurons.

Are we in an anti-Goldilocks moment? (At the current doubling rate, roughly 122 years to gain 5 more orders of magnitude.) Too many neurons for data analysis to be easy, yet not enough neurons to really understand circuit computation?

Page 3

An example dataset: the single-neuron view

Trial-averaged firing rates from 3 neurons while a monkey reaches to targets at 7 directions, two lengths and two speeds (red/green). There are about 100 more neurons like these. How are such datasets analyzed?

Churchland and Shenoy, J. Neurophysiol. 2007

Page 4

Dynamical portraits of circuit computation via dimensionality reduction

Ahrens et al., 2012: zebrafish, whole brain
Machens et al., 2010: monkey, PFC
Mazor & Laurent, 2005: locust, antennal lobe
Yu et al., 2009: monkey, motor/premotor cortex

Page 5

Fundamental conceptual questions

While we now record from many neurons (O(100)), brain circuits controlling behavior have many more unrecorded neurons (O(1 billion) in primate motor cortex). Yet in a wide variety of neuronal recordings, the measured neuronal dimensionality is far less than the number of neurons.

What is the origin of this underlying simplicity? How should we interpret this empirical observation?
What (if anything) can we learn about large dynamical networks at such an overwhelming level of undersampling?
How would the dimensionality change if we recorded more neurons?
How would the dynamical portraits change if we recorded more neurons? Can we trust them with such small numbers of neurons?

Page 6

Example dataset: extracellular recordings from PMd and M1

Task timeline (adapted from Yu et al., 2007): center hold, target appears, go cue, reach, acquire (epoch durations of roughly 400 ms, 400-1000 ms, 250 ms and 300 ms).

Dataset 1 (Monkey H): 8 directions, 8 task conditions; multi-electrode array, 109 single units (M1/PMd; Yu et al., 2007).

Dataset 2 (Monkey A): 7 directions, 2 speeds and 2 distances, 28 task conditions; single electrode, 64 preparatory recordings (M1/PMd; Churchland et al., 2007).

Page 7

The need for a theory of dimensionality and dynamics

In primate motor cortex there are O(1 billion) neurons controlling O(650) skeletal muscles. In these experiments, O(100) neurons were recorded.
The PCA dimensionality (~70% variance explained) across all 8 reaches is 7.
The PCA dimensionality (~70% variance explained) for one reach is 3.3.
Where do these numbers come from, and how large could they possibly be?
We need a new mathematical definition of neural task complexity that can: 1) upper bound dimensionality, and 2) tell us how many neurons we need to record.

Page 8

Overview (a flow diagram on the slide links these ideas):

• New definition of neural task complexity
• Neural dimensionality. Theorem: dimensionality ≤ task complexity
• Motor cortical data is as high dimensional as possible given task complexity
• Future experiments: recording more neurons without an increase in task complexity ≠ richer datasets
• Neural measurement: conditions for accurate recovery of dynamic portraits
• Random projection theory: # of neurons required ~ log(neural task complexity)
• Past results: existing dynamic portraits are likely to be accurate despite recording few neurons

Page 9

Neural Dimensionality and Task Complexity: Intuition

Yu et al., 2007

Trial-averaged firing rates depend on task parameters {t, θ} and trace out trajectories in the space of neural activity patterns {r1, r2, r3, ...}; the trial lasts a time T, firing rates vary on a correlation timescale τ, and tuning varies over reach angle on a scale Δ.

Maximum dimensionality:
  fix angle, vary time: ~ T / τ
  fix time, vary angle: ~ 2π / Δ
  vary both angle and time: ~ (T / τ) × (2π / Δ)

[Figure: single-neuron firing rates over a trial of duration T with correlation time τ, and the corresponding trajectories plotted against neurons 2 and 3.]
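As a purely illustrative arithmetic check of this scaling (the numbers below are hypothetical, not from the talk):

```latex
% Hypothetical values: trial duration T = 600 ms, neural correlation time tau = 150 ms,
% angular correlation scale Delta = pi/4.
\[
  \frac{T}{\tau} = 4, \qquad \frac{2\pi}{\Delta} = 8, \qquad
  \text{max dim} \sim \frac{T}{\tau} \times \frac{2\pi}{\Delta} = 32 .
\]
```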

Page 10

Neural Dimensionality and Task Complexity: Theory

Task parameters: p1, p2, ..., pK (time, speed, angle, distance, etc.)
Over ranges: L1, L2, ..., LK
With neural correlation lengths: λ1, λ2, ..., λK

Define task complexity: (formula shown on the slide)

Our theory provides:
1) a way to quantitatively extract the neural correlation length parameters λ1, λ2, ..., λK,
2) and the proportionality constant c, such that we can prove

A theorem: neural dimensionality* ≤ min(task complexity, # of recorded neurons)

*participation ratio of the PCA eigenspectrum (~70% variance explained)
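To make the starred definition above concrete, here is a minimal sketch (mine, not from the talk) of computing both dimensionality measures used in this deck from a trial-averaged data matrix: the participation ratio of the PCA eigenvalue spectrum, and the number of principal components needed to explain ~70% of the variance. The matrix `rates` and its sizes are hypothetical placeholders.

```python
import numpy as np

def pca_eigenvalues(rates):
    """Eigenvalues of the neuron-by-neuron covariance of a (neurons x samples) matrix."""
    centered = rates - rates.mean(axis=1, keepdims=True)
    cov = centered @ centered.T / centered.shape[1]
    return np.linalg.eigvalsh(cov)[::-1]          # descending order

def participation_ratio(eigs):
    """Participation ratio of an eigenvalue spectrum: (sum(l))^2 / sum(l^2)."""
    return eigs.sum() ** 2 / (eigs ** 2).sum()

def n_components_for_variance(eigs, frac=0.7):
    """Number of principal components needed to explain `frac` of the total variance."""
    cum = np.cumsum(eigs) / eigs.sum()
    return int(np.searchsorted(cum, frac) + 1)

# Hypothetical example: 100 neurons, 280 condition-time samples of smooth, low-dimensional activity
rng = np.random.default_rng(0)
latent = np.cumsum(rng.standard_normal((5, 280)), axis=1)       # 5 slow latent signals
rates = rng.standard_normal((100, 5)) @ latent + 0.1 * rng.standard_normal((100, 280))

eigs = pca_eigenvalues(rates)
print("participation ratio:", participation_ratio(eigs))
print("# PCs for ~70% variance:", n_components_for_variance(eigs))
```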

Page 11

Neural Dimensionality in Motor Cortex

Yu et al., 2007 (109 recorded neurons)

                     Dimensionality    Neural task complexity
Single reach              3.3                   4.2
Multiple reaches          7                     10

Implication: neural dimensionality is not small, but almost as large as possible given task and smoothness constraints.

Prediction #1: vary task complexity by varying T; dimensionality should vary linearly with T.
Prediction #2: vary the # of neurons in the dataset; dimensionality should be unchanged.

Page 12

Neural Dimensionality in Motor Cortex

Implication: task complexity, not # of neurons, is the main limit on neural dimensionality.

Prediction #1: vary task complexity by varying T; dimensionality should vary linearly with T.
Prediction #2: vary the # of neurons in the dataset; dimensionality should be unchanged.

[Figure: two panels of dimensionality vs. task complexity (T/τ), with data curves for N = 109, 70 and 30 recorded neurons.]

Page 13

Generalizable Lessons so Far

Theoretical upper bounds on dimensionality motivate a conceptual revision in the way we think about neural complexity: motor cortical data is as high dimensional as possible given smoothness constraints and task complexity (it behaves like a factorized stationary random walk).

The complexity (dimensionality) of many neural datasets is limited by task complexity, not the number of neurons; recording more neurons without a concomitant increase in task complexity will not necessarily lead to richer datasets.

From now on, the dimensionality of neural data should be compared to its upper bound, not the number of neurons.
  If equal: the richness of your dataset is constrained by task design.
  If less than: you have discovered intrinsic dynamical constraints in the circuit above and beyond those imposed by the task.

But beyond dimensionality alone, what about the shape of neural trajectories?

Page 14

Measuring the Dynamic Portrait under Subsampling

When are portraits from relatively few neurons equal to those from all neurons? When patterns of neural activity are distributed across neurons, we can accurately recover dynamic portraits despite subsampling.

[Figure: trajectories plotted against neuron 1 and neuron 2, illustrating a case of good recovery and a case of poor recovery.]

Page 15

The act of neuronal measurement as a random projection

If the neural manifold is randomly oriented, then an experiment we can do (measure a random subset of M neurons) is equivalent to an experiment we cannot yet do (measure M random linear combinations, i.e. random projections, of all neurons).

[Figure: a sparse measurement vector (random subset) next to a dense vector of small weights (random projection).]

Page 16

A larger context: random projections

x = As is a random projection from an N-dimensional space down to an M-dimensional space. Data / interesting signals live on a K-dimensional submanifold of the N-dimensional space. When will the geometry of this manifold be preserved under a random projection?

Distortion: D_ab = ( ||A s_a - A s_b||² - ||s_a - s_b||² ) / ||s_a - s_b||²
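As a numerical illustration of this definition (a sketch, not from the talk), the snippet below draws points from a randomly oriented K-dimensional subspace of an N-dimensional space, measures them in two ways (a rescaled random subset of M coordinates, and a dense Gaussian random projection to M dimensions), and evaluates the worst-case pairwise distortion D_ab defined above for each. For a randomly oriented manifold the two behave similarly, which is the equivalence claimed on the previous slide. All sizes are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(1)
N, M, K, P = 1000, 100, 5, 50          # ambient dim, measured dim, manifold dim, # of points

# Points on a randomly oriented K-dim subspace of R^N
basis, _ = np.linalg.qr(rng.standard_normal((N, K)))     # random orthonormal K-frame
S = rng.standard_normal((P, K)) @ basis.T                # P points, each in R^N

def max_distortion(A, S):
    """Worst-case pairwise distortion (|A s_a - A s_b|^2 - |s_a - s_b|^2) / |s_a - s_b|^2."""
    proj = S @ A.T
    worst = 0.0
    for a in range(len(S)):
        for b in range(a + 1, len(S)):
            d_orig = np.sum((S[a] - S[b]) ** 2)
            d_proj = np.sum((proj[a] - proj[b]) ** 2)
            worst = max(worst, abs(d_proj - d_orig) / d_orig)
    return worst

# (1) Random subset of M coordinates, rescaled by sqrt(N/M) to preserve norms on average
subset = rng.choice(N, size=M, replace=False)
A_subset = np.zeros((M, N))
A_subset[np.arange(M), subset] = np.sqrt(N / M)

# (2) Dense Gaussian random projection with matching scaling
A_gauss = rng.standard_normal((M, N)) / np.sqrt(M)

print("max distortion, random subset:    ", max_distortion(A_subset, S))
print("max distortion, random projection:", max_distortion(A_gauss, S))
```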

Page 17

A larger context: random projections

Manifold of K-sparse signals = union of (N choose K) K-dimensional hyperplanes.

[Diagram: a K-dim manifold in N-dim space, randomly projected to an M-dim space.]

As long as M > O( (1/ε²) · K log(N/K) ), then max_ab |D_ab| = O(ε) with high probability over the random choice of projection A (Baraniuk et al., 2008).

Deterministic result: for any projection A with small distortion, one can reconstruct a sparse signal from its projection, i.e. compute its pre-image (Candès & Tao).

Page 18

A larger context: random projections

Point cloud = a set of P points in N-dimensional space.

[Diagram: a (K-dim) manifold in N-dim space, randomly projected to an M-dim space.]

As long as M > O( (1/ε²) · log P ), then max_ab |D_ab| = O(ε) with high probability over the random choice of projection A (Johnson-Lindenstrauss Lemma).

Compressed computation: with so few measurements one cannot recover the high-dimensional points, but any algorithm that depends only on pairwise distances can be applied in the low-dimensional space.
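A quick numerical check of the log P scaling quoted above (a sketch with an arbitrarily chosen constant in front of log P, not from the talk): project point clouds of increasing size P with M proportional to (1/ε²)·log P and confirm that the worst-case pairwise distortion stays roughly at the target ε.

```python
import numpy as np

rng = np.random.default_rng(2)
N, eps = 5000, 0.2

for P in (50, 200, 800):
    M = int(8 * np.log(P) / eps**2)                 # M ~ (1/eps^2) log P, constant chosen ad hoc
    X = rng.standard_normal((P, N))                 # a cloud of P random points in R^N
    A = rng.standard_normal((M, N)) / np.sqrt(M)    # Gaussian random projection
    Y = X @ A.T
    # worst-case pairwise distortion of squared distances
    worst = 0.0
    for a in range(P):
        d_orig = np.sum((X[a] - X[a + 1:]) ** 2, axis=1)
        d_proj = np.sum((Y[a] - Y[a + 1:]) ** 2, axis=1)
        if d_orig.size:
            worst = max(worst, np.max(np.abs(d_proj - d_orig) / d_orig))
    print(f"P={P:4d}  M={M:4d}  max |D_ab| = {worst:.3f}")
```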

Page 19

A larger context: random projections

Arbitrary K-dimensional manifold in N-dimensional space.

[Diagram: a K-dim manifold in N-dim space, randomly projected to an M-dim space.]

As long as M > O( (1/ε²) · K log[C · Vol] ), where C is related to curvature, then max_ab |D_ab| = O(ε) with high probability over the random choice of projection A (Johnson-Lindenstrauss Lemma).

Recovery is difficult (nonconvex), but compressed computation still applies.

Page 20

A consequence of neuronal measurement as a random projection

By adapting random projection theory:

    # of neurons needed = (1 / distortion²) × ( c1 · log(task complexity) + c2 )

where distortion = the fractional error in pairwise distances in the space of observed neurons compared to those in the space of all the neurons.

To keep the same desired level of distortion, the # of neurons need only scale logarithmically with task complexity (good news!).

[Figure: schematic of a random subset vs. a random projection measurement, as on Page 15.]

Page 21

To maintain accuracy of the recovered portraits, the # of neurons required ~ log(task complexity).

[Figure: distortion contours of motor cortical data, plotted as # of recorded neurons (25 to 50) vs. task complexity (10¹ to 10²); distortion = fractional error in pairwise distances.]

Page 22

Single-trial Data

III. Single-trial analysis: trial averaging is unlikely to be possible for natural and complex behaviors. Under what conditions can we recover the dynamic portrait and its dimensionality from single-trial datasets?

Page 23

Single-trial Data

Data = ? × Low-D activities (K by P)

P = # of recorded activity patterns, from trials of different behaviors.

Page 24

Single-trial Data

Data = ? × Embedding (N by K) × Low-D activities (K by P)

N = # of behaviorally relevant neurons; P = # of recorded activity patterns.

Page 25

Single-trial Data

Data = Subsampling (M by N) × Embedding (N by K) × Low-D activities (K by P) + ?

M = # of recorded neurons; N = # of behaviorally relevant neurons; P = # of recorded activity patterns.

Page 26

Single-trial Data

Data = Subsampling (M by N) × Embedding (N by K) × Low-D activities (K by P) + Observation noise (M by P)

Before recovering the low-dimensional activities, can we even infer the dimensionality K?

Page 27

Single-trial Data

Data (M by P) = Subsampling (M by N) × Embedding (N by K) × Low-D activities (K by P) + Noise (M by P)

N (# of relevant neurons) > P (# of recorded patterns) > M (# of recorded neurons) > K

Sufficient condition to infer K correctly (written as an inequality on the slide): the signal power must exceed the noise power.

Page 28

Single-trial Data (same model and ordering N > P > M > K as Page 27)

Sufficient condition to infer K correctly: the signal-power vs. noise-power inequality, where the relevant signal power is that of the minimal feature.

Page 29

Single-trial Data (same model and ordering N > P > M > K as Page 27)

Sufficient condition to infer K correctly: the signal power of the minimal feature vs. the noise power, with a trial gain term added on the slide.

Page 30

Single-trial Data (same model and ordering N > P > M > K as Page 27)

Sufficient condition to infer K correctly: the signal power of the minimal feature vs. the noise power, with trial gain and recording gain terms, the average firing rate of neurons, and a deflation by embedding/sampling marked on the slide.

Random matrix theory: Marchenko & Pastur, 1967.

Page 31

Single-trial Data (same model and ordering N > P > M > K as Page 27)

Sufficient condition to infer K correctly: the worst-case, input-referred signal power must exceed the noise floor.

High-dimensional statistics: Benaych-Georges & Nadakuditi, 2012; Gavish & Donoho, 2013.

Page 32

Single-trial Data (same model and ordering N > P > M > K as Page 27)

Sufficient condition to infer K correctly: the worst-case, input-referred signal power must exceed the noise power (as on Page 31).

Page 33

Single-trial Data (same model and ordering N > P > M > K as Page 27)

Sufficient condition to infer K correctly: note that the mean firing rate of neurons may scale with brain size (N).

Hippocampal data: Buzsaki & Mizuseki, 2014. Energy considerations: Howarth et al., 2012.

Page 34

Single-trial Data: testing the sufficient condition to infer K

• Low-D activities drawn from a 10-D sphere with radius 1
• N = 5000, σ² = 0.5
• Infer K as the # of data principal components above threshold (Gavish & Donoho, 2013)
• Outperforms simple cross-validated factor analysis
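Below is a minimal sketch of the inference rule itself, not a reproduction of the slide's simulation: it generates an observation matrix equal to a rank-K signal (latent states on a 10-dimensional unit sphere, as on the slide) plus i.i.d. Gaussian noise with σ² = 0.5, and infers K by counting singular values above a noise threshold. For simplicity the threshold used here is the Marchenko-Pastur bulk edge with a small safety margin, standing in for the optimal threshold of Gavish & Donoho cited above; the values of M, P and the signal amplitude are assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)
M, P, K = 200, 1000, 10        # recorded neurons, recorded patterns, true latent dimension (assumed)
sigma = np.sqrt(0.5)           # noise standard deviation (sigma^2 = 0.5, as on the slide)
signal_amp = 5.0               # overall signal strength (assumed; must exceed the detection threshold)

# Rank-K signal: a random embedding (M x K) applied to latent states on the unit K-sphere (K x P)
embedding = rng.standard_normal((M, K)) / np.sqrt(M)
latents = rng.standard_normal((K, P))
latents /= np.linalg.norm(latents, axis=0, keepdims=True)
data = signal_amp * embedding @ latents + sigma * rng.standard_normal((M, P))

# Infer K: count singular values above the Marchenko-Pastur bulk edge for i.i.d. noise,
# sigma * (sqrt(M) + sqrt(P)), padded by a 5% margin against edge fluctuations.
svals = np.linalg.svd(data, compute_uv=False)
threshold = 1.05 * sigma * (np.sqrt(M) + np.sqrt(P))
K_hat = int(np.sum(svals > threshold))
print("true K =", K, " inferred K =", K_hat)
```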

Page 35

Discovering structure in subsampled neural dynamics

• Consider a high-dimensional neural circuit with N neurons.
• We can only record M of them, for a finite amount of time T.
• What can we correctly infer about the circuit dynamics when M << N and T is not too large?
• In general: nothing!
• However, we might assume an underlying simplicity, for example low-dimensional dynamics of dimension K.
• For what regimes of M, N, T and K can we correctly recover dynamical properties of the circuit?

Model: a linear neural network (diagram labels: N-dimensional state, neuronal property, rank-K connectivity, input, random sampling, M-dimensional observation).

Page 36

Data are often modeled using latent linear dynamical systems (nonlinear and stochastic transforms of y may be used to model spikes directly).

When are the eigenvalues of the fitted (SSID) latent dynamics matrix A close to those of the slowest modes of the generative model? A gap is required in the eigenvalue spectrum of the observation Y's covariance matrix.

[Figure: covariance eigenvalue spectra. With a gap: accurate recovery (M = 200, T = 2000). With no gap: poor recovery (M = 50, T = 200). Generative model: N = 1000, K = 4, τ = 0.4, τ_slow = 9.5.]
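The following is a minimal simulation sketch of the gap criterion above, not the fitting procedure itself (no SSID step). It uses the slide's parameters N = 1000, K = 4, τ = 0.4, τ_slow = 9.5, but the network construction (orthogonal modes, unit white-noise drive, a 0.1 time step) and the two (M, T) regimes are my own simplifications: build linear dynamics with K slow and N − K fast modes, record M neurons for T steps, and inspect the top of the observed covariance spectrum for a gap after the K-th eigenvalue.

```python
import numpy as np

rng = np.random.default_rng(4)

def subsampled_spectrum(N=1000, K=4, M=200, T=2000, dt=0.1, tau=0.4, tau_slow=9.5):
    """Simulate a linear network with K slow and N-K fast modes, record M neurons for T steps,
    and return the eigenvalues of the observed covariance matrix (largest first)."""
    decay = np.full(N, np.exp(-dt / tau))
    decay[:K] = np.exp(-dt / tau_slow)                # K slow modes, N-K fast modes
    Q, _ = np.linalg.qr(rng.standard_normal((N, N)))  # random orthogonal mode-to-neuron map

    z = np.zeros((N, T))                              # dynamics simulated in the eigenbasis
    state = np.zeros(N)
    for t in range(T):
        state = decay * state + rng.standard_normal(N)
        z[:, t] = state

    observed = Q[rng.choice(N, size=M, replace=False)] @ z     # M recorded neurons
    cov = np.cov(observed)
    return np.linalg.eigvalsh(cov)[::-1]

# Compare the two regimes from the slide; look for a gap after the K-th eigenvalue.
for M, T in [(200, 2000), (50, 200)]:
    eigs = subsampled_spectrum(M=M, T=T)
    print(f"M={M:4d}, T={T:5d}  top eigenvalues: {np.round(eigs[:8], 1)}")
```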

Page 37

To understand the spectrum of the covariance matrix, we factorize the data:

Data = Subsampling (M by N) × Embedding (N by N) × Dynamics (N by T), with the dynamics split into K slow modes and N - K fast modes
(M = # of recorded neurons, N = # of behaviorally relevant neurons, T = # of time points recorded)

X = sampling × (X_slow + X_fast), where X_slow is the signal and X_fast plays the role of noise.

This is a similar setup to factor analysis, but the noise is correlated across time.

Page 38

X_slow + X_fast: the data can be thought of as a low-rank perturbation of a random noise matrix (Benaych-Georges & Nadakuditi, 2012).

The eigenvalue spectrum of correlated noise deviates from the Marchenko-Pastur law (Marchenko & Pastur, 1967; Bai et al., 2008; Yao, 2014).

[Figure: theoretical eigenvalue spectrum of an N = 1000, T = 2000 noise matrix, with the noise floor b(N, T, τ) marked.]
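To illustrate the deviation described above, here is a small simulation sketch (mine, not the slide's calculation): it compares the largest sample-covariance eigenvalue of temporally white noise against that of AR(1) noise with correlation time τ = 0.4 (time step 0.1, an assumed discretization), using the slide's N = 1000 and T = 2000. The white-noise spectrum is capped near the Marchenko-Pastur edge, while the temporal correlations push the effective noise floor higher.

```python
import numpy as np

rng = np.random.default_rng(5)
N, T, tau, dt = 1000, 2000, 0.4, 0.1

# White noise: sample covariance eigenvalues fill the Marchenko-Pastur bulk
white = rng.standard_normal((N, T))
eig_white = np.linalg.eigvalsh(white @ white.T / T)

# Temporally correlated (AR(1)) noise with correlation time tau and unit marginal variance:
# the bulk spreads out, so the effective noise floor sits above the white-noise edge
phi = np.exp(-dt / tau)
corr = np.zeros((N, T))
corr[:, 0] = rng.standard_normal(N)
for t in range(1, T):
    corr[:, t] = phi * corr[:, t - 1] + np.sqrt(1 - phi**2) * rng.standard_normal(N)
eig_corr = np.linalg.eigvalsh(corr @ corr.T / T)

mp_edge = (1 + np.sqrt(N / T)) ** 2          # Marchenko-Pastur upper edge for unit-variance white noise
print("MP edge (white noise):          ", round(mp_edge, 2))
print("largest eigenvalue, white noise:", round(eig_white[-1], 2))
print("largest eigenvalue, AR(1) noise:", round(eig_corr[-1], 2))
```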

Page 39

X_slow + X_fast: the data can be thought of as a low-rank perturbation of a random noise matrix (Benaych-Georges & Nadakuditi, 2012).

For a gap in the covariance matrix's eigenvalue spectrum: slow-mode signal power > input-referred noise floor.

[Figure: simulated vs. theoretical (inverse) signal power transfer function, slow-mode power scaled by subsampling, with the input-referred and covariance noise floors marked; N = 1000, M = 200, T = 2000, K = 1, τ = 0.4, τ_slow in (0.4, 19.5).]

Page 40

For a gap in the covariance matrix's eigenvalue spectrum: slow-mode signal power > input-referred noise floor.

• The rank K can be inferred if all of the slow-mode signal power lies above the input-referred noise floor.
• K = # of data eigenvalues above the covariance noise floor.

[Figure: example spectrum for N = 1000, K = 4, τ = 0.4, τ_slow = 9.5.]

Page 41

Conclusions

Neural dimensionality: compare the measured dimensionality to our upper bounds.
  If equal: dimensionality is constrained by task complexity.
  If less: there is an additional intrinsic dynamical constraint in the data beyond task complexity.

Neural measurements (optimistic messages): to recover dynamic portraits from more complex experiments, there is no need for many more neurons. Subsampling can recover accurate dynamic portraits when neural activities are highly distributed.

[Figure: dimensionality vs. task complexity (T/τ) for N = 109, 70 and 30 recorded neurons, as on Page 12.]

Page 42

What does it take to get random neural manifolds?

A sufficient condition: every neuron has complex tuning for every task parameter.

This is the bane of existence for those thinking along the lines of single-unit neurophysiology: we cannot easily understand and classify single neurons. But it is the saving grace of our ability to understand the brain: with random trajectories, we can record from a relatively small number of neurons and infer the correct state-space description of neural data. Understanding what individual neurons do becomes the wrong question; we should focus instead on the collective.

Old paradigm: single units. New paradigm: collective behavior.

Page 43

Speculations: functional reasons for random motor cortical neural manifolds?

Degeneracy between motor cortex and muscles. An analogy between the neuron and the neuroscientist.

Additional slides: How many neurons do we need if the data are not random? How many neurons do we need for single-trial analysis via factor analysis?

Page 44

Acknowledgements

Ganguli Lab: Peiran Gao, Subhaneil Lahiri, Jascha Sohl-Dickstein, Niru Maheswaranathan, Madhu Advani, Ben Poole, Jay Sarkar, Kiah Hardcastle, Andre Esteva

Shenoy Lab: Krishna Shenoy, Eric Trautmann, Byron Yu, Gopal Santhanam, Stephen Ryu

Funding: Bio-X Neuroventures, Burroughs Wellcome, Genentech Foundation, James S. McDonnell Foundation, National Science Foundation, Office of Naval Research, Simons Foundation, Sloan Foundation, Swartz Foundation

