
Information Capacity and Transmission are Maximized in Balanced Cortical Networks with Neural Avalanches

Shew et al., 2011

Entropy

• A measure of ‘disorder’, ‘surprise’, or ‘self-information’.

• The intuition underlying Shannon’s definition of entropy:

– A high probability of occurrence means less surprise (‘Duh! What’s new!’).

– The occurrence of more independent events means more surprise (‘Ah! What a coincidence!’).

– −log2 p(x) fits the bill. The average surprise is H = −Σ_x p(x) log2 p(x).

• Not all entropy in a neural response (yellow in the cartoon) is attributable to variety in the stimulus (blue in the cartoon). ‘Mutual Information’ is the response entropy with the noise entropy subtracted.

• Low signal entropy, low noise entropy: low Mutual Information.

• High signal entropy, considerable noise entropy: the most commonly encountered case.

• High signal entropy, low noise entropy: high Mutual Information (the ideal).

Illustration courtesy Prof. N M Grzywacz

An Information-Theoretic Illustration: Gedeon et al. (2001)

• “If we model the coding problem as a correspondence between the elements of an input set X and an output set Y , these three tasks are: finding the spaces X and Y and the correspondence between them.”

• Information-theoretic quantities:

– Entropy (H): H = −E_x log p(x). A measure of ‘surprise’ or ‘self-information’.

– Mutual Information (I): I(X;Y) = E_{x,y} log [ p(x,y) / (p(x) p(y)) ]. A measure of the ‘degree of dependence’ of two random variables.

– Relative Entropy (KL): KL(p||q) = E_p log [ p(x) / q(x) ]. Measures ‘how different two probability measures are’.
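As a concrete illustration, a minimal numpy sketch that computes all three quantities from a small discrete joint distribution (the probability table and the reference distribution q are invented for the example):

```python
import numpy as np

# Invented joint distribution p(x, y); rows index x, columns index y.
p_xy = np.array([[0.30, 0.10],
                 [0.05, 0.55]])  # entries must sum to 1

p_x = p_xy.sum(axis=1)   # marginal p(x)
p_y = p_xy.sum(axis=0)   # marginal p(y)

# Entropy H(X) = -E_x log2 p(x): the average 'surprise' -log2 p(x)
H_x = -np.sum(p_x * np.log2(p_x))

# Mutual information I(X;Y) = E_{x,y} log2 [ p(x,y) / (p(x) p(y)) ]
I_xy = np.sum(p_xy * np.log2(p_xy / np.outer(p_x, p_y)))

# Relative entropy KL(p||q) = E_p log2 [ p(x) / q(x) ]
q_x = np.array([0.5, 0.5])       # an invented reference distribution
KL = np.sum(p_x * np.log2(p_x / q_x))

print(f"H(X) = {H_x:.3f} bits, I(X;Y) = {I_xy:.3f} bits, KL = {KL:.3f} bits")
```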

• Optimal Quantizer formulation to recover neural coding schemes:

– A quantizer is a stochastic map q(y_N|y) of the neural representation Y into a coarser representation within a smaller event space Y_N.

– Treating X → Y → Y_N as a Markov chain, an approximation of the neural coding scheme is q(y_N|x) = Σ_y q(y_N|y) p(y|x).

– The optimal quantizer is found by minimizing a distortion function, the expected KL divergence over all pairs (y, y_N):

q* = argmin_q E_{y,y_N} KL( p(x|y_N) || p(x|y) )

Slide from Vision Journal Club presentation, 13th May 2010
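The construction above can be sketched numerically: given an invented encoder p(y|x) and a candidate quantizer q(y_N|y), the code forms the induced coding scheme q(y_N|x) and evaluates the expected KL distortion. A real solver would then search over q (e.g. by annealing); all sizes and distributions here are illustrative, not from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Invented sizes: |X| stimuli, |Y| responses, |Y_N| quantized classes.
nx, ny, nyn = 4, 8, 3

# An invented encoder p(y|x) and a uniform stimulus prior p(x).
p_y_given_x = rng.dirichlet(np.ones(ny), size=nx)   # shape (nx, ny)
p_x = np.full(nx, 1.0 / nx)

# A candidate stochastic quantizer q(y_N|y); each row sums to 1.
q = rng.dirichlet(np.ones(nyn), size=ny)            # shape (ny, nyn)

# Markov chain X -> Y -> Y_N: the induced coding scheme
# q(y_N|x) = sum_y q(y_N|y) p(y|x)
q_yn_given_x = p_y_given_x @ q                      # shape (nx, nyn)

# Conditionals needed for the distortion.
p_xy = p_y_given_x * p_x[:, None]                   # p(x, y)
p_y = p_xy.sum(axis=0)
p_x_given_y = (p_xy / p_y).T                        # shape (ny, nx)

p_x_yn = q_yn_given_x * p_x[:, None]                # p(x, y_N)
p_yn = p_x_yn.sum(axis=0)
p_x_given_yn = (p_x_yn / p_yn).T                    # shape (nyn, nx)

# Expected KL distortion D = E_{y,y_N} KL( p(x|y_N) || p(x|y) )
p_y_yn = p_y[:, None] * q                           # p(y, y_N)
D = 0.0
for iy in range(ny):
    for iyn in range(nyn):
        kl = np.sum(p_x_given_yn[iyn]
                    * np.log2(p_x_given_yn[iyn] / p_x_given_y[iy]))
        D += p_y_yn[iy, iyn] * kl
print(f"Expected KL distortion of this quantizer: {D:.3f} bits")
```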


‘Glossary’

• Sparse code: A format for representing stimulus information that increases periods of total quiescence (of the encoding population of neurons) and concentrates information into briefer periods of common activity.

• High-order correlation: ‘For example, given recordings from three neurons, we can ask if the frequency of triplet firing (pattern 111) is predicted by the frequency of pairs of neurons firing (the patterns ‘011’, ‘110’ and ‘101’)’

• Fine-scale cortical network: Fine-scale networks (50 to 100 µm) display specific, non-random connectivity. Neurons with similar response properties are grouped into functional columns that span several hundred micrometers, and long-range horizontal connections link neurons together over several millimeters.

Slide from Vision Journal Club presentation 14th November 2010

Intuition behind E/I balance

[Diagram: as excitation increases, the network moves from low neural activity, through a ‘just right’ balance that is maximally informative about the stimulus, to increased correlation.]


An aside: Local Field Potentials

LFP: A neurophysiological signal that is obtained by low-pass filtering extracellular recordings. It represents the mean field potential generated by the slow components of synaptic and neural events in the vicinity of the recording electrode.

From Quian Quiroga and Panzeri, 2009

Slide from Vision Journal Club presentation 12th August 2010

Multielectrode Recordings and Pharmacology

• Recording: 8×8 multielectrode array with 200 µm inter-electrode spacing.

• Stimulation: biphasic electric shocks (10 different amplitudes applied pseudorandomly).

• Local field potentials: 50 Hz low-pass cutoff.

• In vitro preparations: cortex–ventral tegmental area organotypic co-cultures.

• In vivo preparations: monkey premotor cortex (arm representation), rat barrel cortex.

• Pharmacology (for in vitro):

– DNQX + AP-5: AMPA/kainate and NMDA receptor antagonists (reduced excitation).

– PTX: GABA-A receptor antagonist (reduced inhibition).

Event recording and representation

• Events:

– Large negative LFP fluctuations (exceeding −3 SD), at least τ ms (the time threshold) apart.

– Each event is identified by a time stamp t_i, an amplitude a_i, and an electrode e_i.

• The size s of a population event is defined as the sum of all a_i that occur during the event.

• In default networks with unaltered E/I, this size approximately follows the power-law distribution P(s) ∝ s^(−3/2), the avalanche signature:

– As expected from sparsity considerations, smaller population events are more numerous.

– ‘Neural avalanche’ is a name perhaps drawn from the typical ‘sliding’ nature of the distribution.
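As a rough illustration of this event definition, here is a hypothetical single-channel detector in Python. The sampling rate, τ, and the −3 SD threshold are parameters; the paper’s actual detection pipeline may differ:

```python
import numpy as np

def extract_channel_events(lfp, fs, tau_ms=20.0, n_sd=3.0):
    """Detect large negative LFP deflections on one channel.

    An event is a dip below -n_sd standard deviations; successive events
    must be at least tau_ms apart. Returns (timestamps_s, amplitudes).
    Hypothetical helper; tau_ms and fs values are illustrative.
    """
    thresh = -n_sd * np.std(lfp)
    below = (lfp < thresh).astype(int)
    onsets = np.flatnonzero(np.diff(below) == 1) + 1  # downward crossings
    min_gap = int(tau_ms * fs / 1000.0)
    kept, last = [], -min_gap
    for i in onsets:
        if i - last >= min_gap:       # enforce the time threshold tau
            kept.append(i)
            last = i
    kept = np.asarray(kept, dtype=int)
    return kept / fs, np.abs(lfp[kept])

# usage: times, amps = extract_channel_events(lfp_trace, fs=1000.0)
# Each event is then a tuple (t_i, a_i, e_i) with electrode index e_i;
# a population event's size s sums all a_i occurring during the event.
```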

A graded measure of neural avalanches

• κ assesses the closeness of a population event size distribution to the avalanche signature distribution. Away from the critical E/I in either direction, avalanches disappear:

– κ << 1: hypoexcitable network.

– κ >> 1: hyperexcitable network.
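The κ statistic can be sketched from its published definition: compare the empirical cumulative distribution of event sizes with the reference power-law CDF (exponent −3/2) at m logarithmically spaced sizes. This is our reading of the definition, not the authors’ code:

```python
import numpy as np

def kappa(sizes, m=10):
    """Graded avalanche measure (after Shew et al.).

    kappa ~ 1 -> avalanche regime; kappa < 1 -> hypoexcitable
    (excess small events); kappa > 1 -> hyperexcitable (excess
    large events). Sketch from the published definition.
    """
    sizes = np.asarray(sizes, dtype=float)
    s_min, s_max = sizes.min(), sizes.max()
    betas = np.logspace(np.log10(s_min), np.log10(s_max), m)
    # empirical CDF at the m comparison sizes
    F_emp = np.array([(sizes <= b).mean() for b in betas])
    # reference CDF for P(s) ~ s^(-3/2) truncated to [s_min, s_max]
    F_ref = (s_min**-0.5 - betas**-0.5) / (s_min**-0.5 - s_max**-0.5)
    return 1.0 + np.mean(F_ref - F_emp)
```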

Response entropy peaks at the optimal κ, i.e. under conditions of neural avalanche

Plots with coarse binning (lowered spatial resolution) address the concern that the peak may be an artifact of undersampling the original high-dimensional pattern space.
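A minimal sketch of this control, assuming the responses arrive as a binary array with one pattern per row: neighboring sites are OR-ed together to lower the spatial resolution, and the plug-in pattern entropy is recomputed (helper names are ours):

```python
import numpy as np

def pattern_entropy(patterns):
    """Plug-in entropy (bits) of binary patterns, one pattern per row."""
    _, counts = np.unique(patterns, axis=0, return_counts=True)
    p = counts / counts.sum()
    return -np.sum(p * np.log2(p))

def coarsen(patterns, block=4):
    """OR together groups of `block` adjacent sites (lowered spatial
    resolution); the site count must be divisible by `block`. This
    shrinks the pattern space and eases undersampling."""
    n, d = patterns.shape
    return patterns.reshape(n, d // block, block).any(axis=2).astype(int)

# If the entropy peak survives coarse binning, it is unlikely to be an
# undersampling artifact of the original high-dimensional space.
```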

Mutual Information also peaks at the optimal κ, i.e. under conditions of neural avalanche

Accounting for the Mutual Information peak: what changes with the E/I ratio?

• N: the number of distinct patterns observed during a recording.

– Used for quadratic extrapolation of the entropy (to correct finite-sampling bias).

– Imposes an upper bound on the entropy: H ≤ log2 N.

• L: the likelihood that a given site participates in a pattern.

– L_q is the fraction of patterns in which site q participated.

– Imposes an upper bound on the entropy: H ≤ Σ_q H_b(L_q), the sum of per-site binary entropies (see the sketch below).

• ∆H: the difference in entropy between measured and shuffled data (i.e. a measure of interaction contributions).
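Both bounds, and the shuffle-based ∆H, can be computed directly from the observed binary patterns. A sketch, assuming one observed pattern per row (helper and variable names are ours, not the authors’):

```python
import numpy as np

def pattern_entropy(patterns):
    """Plug-in entropy (bits) of binary patterns, one pattern per row."""
    _, counts = np.unique(patterns, axis=0, return_counts=True)
    p = counts / counts.sum()
    return -np.sum(p * np.log2(p))

def entropy_bounds(patterns):
    """The two upper bounds on pattern entropy from the slide."""
    # N distinct observed patterns -> H <= log2(N): entropy is maximal
    # when all observed patterns are equiprobable.
    bound_N = np.log2(len(np.unique(patterns, axis=0)))
    # Participation likelihood L_q of each site -> by subadditivity,
    # H <= sum over sites of the binary entropy Hb(L_q).
    L = np.clip(patterns.mean(axis=0), 1e-12, 1 - 1e-12)
    bound_L = np.sum(-(L * np.log2(L) + (1 - L) * np.log2(1 - L)))
    return bound_N, bound_L

def delta_H(patterns, seed=0):
    """Interaction contribution: entropy of site-shuffled (independent)
    data minus the measured entropy."""
    rng = np.random.default_rng(seed)
    shuffled = patterns.copy()
    for q in range(shuffled.shape[1]):
        rng.shuffle(shuffled[:, q])   # destroy correlations between sites
    return pattern_entropy(shuffled) - pattern_entropy(patterns)
```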

An explanation of E/I imbalance effects: low entropy at both extremes

Hypoexcitable extreme (COMPETING EFFECTS: the low-activity effect wins and lowers the entropy):

• Participation likelihoods are low (tending to lower entropy).

• Unit interactions are low (tending to increase entropy).

Hyperexcitable extreme (CO-OPERATING EFFECTS: both effects serve to lower the entropy):

• Participation likelihoods are high (tending to lower entropy).

• Unit interactions are higher (tending to lower entropy).

An explanation of E/I balance effects: peaking entropy at the center

• The interaction-free entropy (black trace) obeys the bounds set by the unit count.

• The measured entropy (green trace) peaks at a κ at which:

– activity is not too depressed (L = 0.25), and

– interactions are at an intermediate value (MI = 0.2).

A network model to account for the data

• 16 binary sites (each representing a ‘population’ rather than a ‘neuron’).

• Activity propagation is modeled through a connectivity matrix: p_ij is the probability that site i will be activated as a result of site j being activated in the previous time step.

• Changes in E/I are modeled by varying p_ij through the range 0.006 to 0.1.

• Activation at site i follows stochastic temporal dynamics (see the sketch below).

• The model predictions reproduce qualitative features of the measurements.
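A compact simulation of this model, under one natural reading of the dynamics: site i turns on with probability 1 − Π_j (1 − p_ij a_j(t)), i.e. each active site independently gets a chance to recruit it. The uniform connection strength, run length, and update-rule details are our assumptions, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(1)
n_sites = 16

# Connectivity: p_ij is the probability that site i activates when site j
# was active on the previous step. Scaling the p_ij between ~0.006 and 0.1
# stands in for changing the E/I ratio (range from the slide).
scale = 0.03                       # one point in the 0.006 - 0.1 range
p = np.full((n_sites, n_sites), scale)

def step(a, p, rng):
    """One update of the stochastic dynamics: site i turns on with
    probability 1 - prod_j (1 - p_ij * a_j), i.e. independent chances
    of being recruited by each currently active site (our reading of
    the model; the paper's exact rule may differ in detail)."""
    prob_on = 1.0 - np.prod(1.0 - p * a[None, :], axis=1)
    return (rng.random(len(a)) < prob_on).astype(int)

# Run: kick the network with one active site and track total activity.
a = np.zeros(n_sites, dtype=int)
a[rng.integers(n_sites)] = 1
activity = []
for _ in range(50):
    a = step(a, p, rng)
    activity.append(a.sum())
print("activity per step:", activity[:10])
```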

Agreement of in vivo and in vitro results

In vivo event size distributions show the signature neural avalanche.

In avalanche settings, entropy is high and mutual information is moderate.

Discussion

• Entropy (‘capacity’) and mutual information (‘transmission’) are maximized at the E/I ratio at which neuronal avalanches emerge.

• This is a key result for the working hypothesis that entropy maximization is an organizing principle of neural information processing systems.

• This is one of many recent studies linking entropy maximization to the dynamical-systems concept of ‘criticality’ (poised between ‘seizures’ and ‘quiescence’).

• The results seem to support known electrophysiological properties of cortical neurons (a fixed ratio between excitatory and inhibitory current amplitudes).

THANK YOU. Questions?

