
Investigating complex networks with inverse models

Vincent Wens∗

Laboratoire de Cartographie Fonctionnelle du Cerveau, UNI – ULB Neurosciences Institute, Université Libre de Bruxelles (ULB), Brussels, Belgium

(Dated: January 7, 2014)

Recent advances in neuroscience have motivated the study of network organization in spatially distributed dynamical systems from indirect measurements. However, the associated connectivity estimation, when combined with inverse modeling, is strongly affected by spatial leakage. We formulate this problem in a general framework and develop a new approach to model spatial leakage and limit its effects. It is analytically compared to existing regression-based methods used in electrophysiology, which are shown to yield biased estimates of amplitude and phase couplings.

Investigating the dynamical organization of spatially extended systems represents a challenging problem arising in many scientific disciplines, from physics to biology. In this context, network theory has emerged as a widely used phenomenological tool [1]. It is based upon a representation of complex systems as graphs [2], which are built by selecting the dynamical subunits defining nodes and a coupling measure defining edges. These modeling steps are known to affect the ensuing graph topology, which can lead to misinterpreting properties such as 'small-world-ness' [3, 4]. Node selection is especially difficult for distributed systems accessible through indirect or incomplete observations only. Directly using sensors as nodes may be of limited value, insofar as measurements miss or mix relevant subunits, leading to graphs that depend on the experimental set-up [4]. A natural way out is to reconstruct those subunits from observations, a problem that may not have a unique solution and thus requires the introduction of an inverse model [5, 6]. Yet, model assumptions can strongly affect the resulting graph and generate spurious topological features [7].

To illustrate this, we shall hereafter consider a continuous system whose state is described by an unknown scalar field Ψ0(r, t). The physics of the problem determines the direct operator L mapping this state to observed data µ(t) ∈ R^m via µ(t) = (LΨ0)(t) + ε(t), where ε(t) denotes measurement noise. An inverse model then provides a reconstruction Ψ(r, t) = (Wr µ)(t) of Ψ0(r, t) by application of the inverse operator Wr on the data. This typically yields a deformed representation of Ψ0(r, t) [Figs. 1(a,c)] and spurious connectivity patterns [Figs. 1(b,d)]. We shall refer to this effect as spatial leakage [8].
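
To make the effect concrete, here is a minimal numerical sketch in Python (illustrative only; the 1-D grid, Gaussian sensor kernels and all parameter values are my own choices, not taken from the Letter). It builds a smooth direct operator, reconstructs two correlated point sources with a minimum L2-norm (Tikhonov-type) inverse, and shows that a seed-based correlation map is dominated by spurious near-unity coupling around the seed.

import numpy as np

rng = np.random.default_rng(0)

# Hypothetical discretization: n field points, m sensors with smooth Gaussian kernels l(r).
n, m, T = 200, 12, 2000
r = np.linspace(0.0, 1.0, n)
sensors = np.linspace(0.05, 0.95, m)
L = np.exp(-(sensors[:, None] - r[None, :])**2 / (2 * 0.15**2))   # m x n direct operator

# Ground truth: two distant point-like sources with correlated time courses.
i1, i2 = 50, 150
f = rng.standard_normal(T)
g = 0.7 * f + np.sqrt(1 - 0.7**2) * rng.standard_normal(T)
psi0 = np.zeros((n, T))
psi0[i1], psi0[i2] = f, g
mu = L @ psi0 + 0.05 * rng.standard_normal((m, T))                # noisy data

# Minimum L2-norm inverse operator, W = L^T (L L^T + lam I)^-1 (Tikhonov form).
lam = 1.0
W = L.T @ np.linalg.inv(L @ L.T + lam * np.eye(m))
psi = W @ mu                                                      # reconstructed field

def corr_with_seed(field, seed):
    """Correlation of every reconstructed time course with a seed time course."""
    fc = field - field.mean(axis=1, keepdims=True)
    sc = seed - seed.mean()
    return (fc @ sc) / (np.linalg.norm(fc, axis=1) * np.linalg.norm(sc) + 1e-12)

c = corr_with_seed(psi, psi[i1])
print("around the seed   :", np.round(c[i1-3:i1+4], 2))   # near 1: spatial leakage
print("at the true source:", np.round(c[i2], 2))          # genuine long-range coupling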

In neuroscience, this issue affects the mapping of brain networks from scalp electrophysiological data [9]. Indeed, their limited spatial resolution typically leads to smooth reconstructions [Fig. 1(c)] of neural electric current flows, and connectivity maps between activity at r0 and the rest of the brain display spurious coupling patterns, especially around r0 [Fig. 1(d)]. In practice, the latter often dominates and hides true long-range interactions. Eliminating spurious local connectivity then allows one to emphasize the large-scale structure of brain networks. Some techniques were introduced to that effect, such as imaginary coherence [10] or phase lag index [11, 12] for phase coupling, and orthogonalization [13, 14] for amplitude correlation. However, an approach valid for any type of connectivity index has not yet been considered.

FIG. 1. (color) Spatial leakage. (a) Snapshot of a real scalar field Ψ0(r, t) representing two correlated sources. The dots indicate sensors. (b) Its connectivity map from node r0, where corr[f, g] denotes the temporal correlation between f(t) and g(t). (c,d) Similar plots for the reconstructed field Ψ(r, t) obtained using a minimum L2-norm inverse model, and (e,f) for the leakage-corrected field Φ(r, t).

In this Letter, we discuss the concept of spatial leakage in the general framework of inverse problems. We explicitly model spatial leakage from a given location r0 and introduce a general correction scheme suppressing spurious coupling around r0 and emphasizing true long-range connectivity [Figs. 1(e,f)]. We analytically compare it to orthogonalization by deriving a simple formula connecting these two approaches. This yields a comparison with imaginary coherence and the phase lag index as well, which appear as orthogonalized phase coupling estimates. Finally, we highlight the biases in connectivity estimation due to orthogonalization and absent in our new method, e.g., the effects of phase coupling on orthogonalized amplitude correlation.

arXiv:1401.0444v2 [q-bio.NC] 6 Jan 2014


Spatial structure of inverse operators. Spatial leakage refers to the spurious connectivity patterns induced by the spatial profile of inverse operators, independently of the system state. Therefore, it does not reflect dynamical features but merely structural properties depending on the direct model and reconstruction priors only, a basic fact that has seemingly not been used until now. This concept can be illustrated clearly using

(LΨ0)(t) = ∫ d³r ℓ(r) Ψ0(r, t) .  (1)

The kernel ℓ(r) ∈ R^m is typically quite smooth in applications. For notational convenience, we shall hereafter use real scalar fields, the generalization to vector fields being straightforward. Let us consider prototypal inverse models based on minimizing the sum of the prediction error (1/2)||µ(t) − (LΨ)(t)||² and a term (1/p) ∫ d³r |Ψ(r, t)|^p imposing a least Lp-norm constraint (p > 1) [6]. Although an explicit analytical form cannot be obtained generically, implying the need for numerical optimization, an implicit equation for the solution Ψ(r, t) = (Wr µ)(t) can be derived (see Supp. Mat. [15]),

Ψ(r, t) = ℓ(r)T(µ(t) − (LΨ)(t)) / |ℓ(r)T(µ(t) − (LΨ)(t))|^αp ,  (2)

where αp = (p − 2)/(p − 1). This shows that the spatial profile of the inverse operator is strongly controlled by the kernel ℓ(r). In particular, when ℓ(r) is continuous at r0, Eq. 2 shows that Wr shares this property. This leads to spurious local connectivity around r0, since we have Ψ(r, t) ≈ Ψ(r0, t) for r in some neighborhood of r0, independently of µ(t). Only the precise shape and size of this neighborhood depend on the data µ(t) and the parameter p. Likewise, long-distance correlations in ℓ(r) induce spurious couplings. For example, ℓ(r) = κ ℓ(r0) implies Wr = κ^(1/(p−1)) Wr0

More generally, various assumptions and parameters can affect the spatial properties of inverse operators. Yet, the issue of spatial leakage is of quite general concern and applies to other inverse models [16–18] as well [15]. For example, the whole class of models based on unitary invariant constraints, as in the case p = 2 above, presents a spatial profile Wr = ℓ(r)T A, with A independent of r, leading to spatial leakage effects [15]. We now show how to correct for the spurious couplings emanating from a given location r0 by modeling spatial leakage.

Structural model. Let us first consider an ideal state localized at r0, i.e., Ψ0(r, t) = δ(r − r0) f(t). Its reconstruction Ψ(r, t) = (Wr µ)(t) from noiseless observations µ(t) = (LΨ0)(t) is a functional of f(t), Ψ(r, t) = (Rr,r0 f)(t), which defines the resolving operator Rr,r0 [6] and describes spatial leakage from r0. For example, Rr,r0 = Wr ℓ(r0) in the case of Eq. 1. Assuming Rr0,r0 invertible, the unknown function f(t) can be reformulated in terms of calculable quantities as (R⁻¹r0,r0 Ψ(r0, ·))(t). This leads to a closed expression Ψ(r, t) = Λ(r, t) for this "pure leakage" field, where

Λ(r, t) = (Rr,r0 R⁻¹r0,r0 Ψ(r0, ·))(t) .  (3)

When no assumptions are made about Ψ0(r, t), we shall use Eq. 3 as a model for the contribution of spatial leakage from r0 to the reconstruction Ψ(r, t) = (Wr µ)(t), and define a corrected field

Φ(r, t) = Ψ(r, t) − Λ(r, t) .  (4)

It can also be written as Φ(r, t) = (Wnew_r µ)(t) in terms of a new inverse operator

Wnew_r = Wr − Rr,r0 R⁻¹r0,r0 Wr0  (5)

depending on the direct and inverse models only. Equations 4 and 5 form the analytical basis of our new method.

Although it can be applied to any state and inverse model, its strict validity presents two restrictions. First, this subtractive scheme requires spatial leakage to contribute linearly to the reconstruction. This assumption holds locally when the inverse operator is continuous at r0, since then Ψ(r, t) ≈ Ψ(r0, t) = Λ(r0, t) ≈ Λ(r, t) for r close enough to r0. It is otherwise invalid, except when the resolving operator is linear. Second, the node r0 must be isolated from the other activated regions, i.e., Ψ0(r, t) assumes the form δ(r − r0) f(t) + g(r, t) with g(r, t) ≠ 0 only where spatial leakage to r0 is negligible, Rr0,r ≈ 0. Otherwise, g(r, t) would also contribute to the expression (R⁻¹r0,r0 Ψ(r0, ·))(t) used for f(t) to derive Eq. 3. Explicitly, in the case of Eq. 1,

R⁻¹r0,r0 Ψ(r0, ·) = f + ∫ d³r R⁻¹r0,r0 Rr0,r g(r, ·) .  (6)

The neighborhood of r0 where Rr0,r g(r, ·) ≠ 0 being typically of non-zero but limited size, local leakage is generally overcorrected, as illustrated by the identity Wnew_r0 = 0, whereas the reconstruction at sufficiently long distances remains unchanged, since Wnew_r ≈ Wr wherever leakage from r0 is negligible, Rr,r0 ≈ 0. Note also that spatial leakage between r and r′ away from r0 is left uncorrected [Fig. 1(e)]. At the level of connectivity maps between Ψ(r0, t) and Φ(r, t), this means that local coupling is suppressed whereas true long-distance interactions are preserved, although they are still deformed by spatial leakage [Fig. 1(f)]. Despite this limitation, the correction scheme given by Eqs. 4, 5 is useful to emphasize long-range connectivity. This makes it suitable, in particular, for studies of large-scale brain networks.

In this application, Eq. 1 is used and inverse models are taken of the form (Wr µ)(t) = w(r, t)Tµ(t) with w(r, t) ∈ R^m. It then follows from the definitions that (Rr,r0 f)(t) = w(r, t)Tℓ(r0) f(t) and

Φ(r, t) = Ψ(r, t) − γ(r, r0, t) Ψ(r0, t) ,  (7)
γ(r, r0, t) = w(r, t)Tℓ(r0) / w(r0, t)Tℓ(r0) .  (8)
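
In this time-independent linear setting the correction is a one-liner; the sketch below (mine, continuing the illustrative discretization used earlier, with hypothetical array names psi, W, L) applies Eqs. 7, 8 to a reconstruction stored as an array of time courses.

import numpy as np

def structural_leakage_correction(psi, W, L, seed_idx):
    """Eqs. 7, 8 for a time-independent linear inverse model.

    psi : (n, T) reconstructed field, psi[i] = w(r_i)^T mu(t)
    W   : (n, m) inverse operator, rows w(r_i)^T
    L   : (m, n) direct operator, columns l(r_i)
    """
    l0 = L[:, seed_idx]                          # l(r0)
    gamma = (W @ l0) / (W[seed_idx] @ l0)        # Eq. 8, one coefficient per location r
    return psi - np.outer(gamma, psi[seed_idx])  # Eq. 7: Phi = Psi - gamma * Psi(r0, .)

# Usage with the arrays of the previous sketch (note that Phi vanishes exactly at r0):
#   phi = structural_leakage_correction(psi, W, L, i1)
#   np.corrcoef(psi[i1], phi[i2])[0, 1]      # distant coupling survives
#   np.corrcoef(psi[i1], phi[i1 + 3])[0, 1]  # local spurious coupling is strongly reduced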


In the rest of this Letter, we shall use this particular case and compare this new method to alternative techniques used in electrophysiology.

Comparison with orthogonalization. The philosophy behind these techniques is to consider connectivity estimates insensitive to any instantaneous linear coupling, of which spatial leakage (Eqs. 7, 8) is a particular case. To implement this idea, we still consider Eq. 7 for complex scalar fields but now determine the real coefficient γ̃(r, r0, t) via linear regression. This yields

Φ̃(r, t) = Ψ(r, t) − γ̃(r, r0, t) Ψ(r0, t) ,  (9)
γ̃(r, r0, t) = Re[Ψ(r, t) Ψ(r0, t)*] / |Ψ(r0, t)|² .  (10)

The orthogonalization method [14] uses these equations to remove spatial leakage from r0. Geometrically, Φ̃(r, t) coincides with the orthogonal projection of Ψ(r, t) onto the direction perpendicular to Ψ(r0, t). This implies that Φ̃(r, t) is left invariant by the addition to Ψ(r, t) of any complex number co-linear with Ψ(r0, t). It thus follows from Eqs. 7, 8 that Ψ(r, t) can be replaced by Φ(r, t) in the definition of Φ̃(r, t), which yields the useful formula

Φ̃(r, t) = (1/2) (Φ(r, t) − [Ψ(r0, t)² / |Ψ(r0, t)|²] Φ(r, t)*) ,  (11)

which directly relates the two correction schemes.

Orthogonalization uses no information about the structure of spatial leakage beyond Eq. 7, but requires complex fields since otherwise Φ̃(r, t) vanishes identically. It is driven by data and is applicable to any pair of signals. Equation 11 overcorrects spatial leakage from r0 at the expense of true interactions, since all instantaneous linear relations between Ψ(r, t) and Ψ(r0, t) are suppressed. In particular, the reconstruction is affected even when no leakage is present, since then Φ(r, t) = Ψ(r, t) but Φ̃(r, t) ≠ Ψ(r, t). Regression can be made slightly less conservative by adapting its hypotheses to the structure of spatial leakage. For example, under the assumption of a stationary inverse model, ∂w(r, t)/∂t = 0, a time-independent coefficient γ̃(r, r0) was derived in [13] by minimizing the mean squared error 〈|Φ̃(r, ·)|²〉, where brackets denote time averaging. In any case, regression always requires linearity of the resolving operator, as for Eqs. 4, 5, and modifies Ψ(r, t) even when spatial leakage is negligible, contrary to the structural model correction. To further explore the limitations of orthogonalization, we shall consider some explicit connectivity indices and show that it can non-trivially bias their estimation.
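
For comparison, the following Python sketch (mine, not code from [13, 14]) implements the time-resolved orthogonalization of Eqs. 9, 10 for complex signals and checks the identity of Eq. 11 numerically; the signal model and the value of the structural coefficient are arbitrary.

import numpy as np

rng = np.random.default_rng(1)
T = 5000

def orthogonalize(x, y):
    """Eqs. 9, 10: remove from x its instantaneous real-valued regression on y."""
    gamma = np.real(x * np.conj(y)) / np.abs(y)**2
    return x - gamma * y

# Two coupled complex signals standing for Psi(r, t) and Psi(r0, t).
psi_r0 = rng.standard_normal(T) + 1j * rng.standard_normal(T)
psi_r = 0.6 * psi_r0 * np.exp(1j * 0.8) + rng.standard_normal(T) + 1j * rng.standard_normal(T)
phi_tilde = orthogonalize(psi_r, psi_r0)

# Structural correction (Eq. 7) with an arbitrary real coefficient; Eq. 11 says that
# orthogonalizing Psi or the corrected field Phi gives the same result, equal to
# (Phi - (Psi(r0)^2 / |Psi(r0)|^2) conj(Phi)) / 2.
gamma_struct = 0.3
phi = psi_r - gamma_struct * psi_r0
rhs = 0.5 * (phi - (psi_r0**2 / np.abs(psi_r0)**2) * np.conj(phi))
print(np.allclose(phi_tilde, orthogonalize(phi, psi_r0)), np.allclose(phi_tilde, rhs))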

Brain network analyses are often based upon cerebral rhythms and concentrate on phase-locking and amplitude co-modulation [19], which appear more generally as long-range communication mechanisms within distributed oscillatory networks in the mean field approximation. We now compare phase and amplitude couplings between Ψ(r0, t) at a given r0 and Φ̃(r, t) at r ≠ r0 to those obtained between Ψ(r0, t) and Φ(r, t). To that aim, it is useful to rewrite Eq. 11 geometrically in terms of phases and amplitudes:

θ̃(r, r0, t) = (π/2) × sign[sin θ(r, r0, t)] ,  (12)
|Φ̃(r, t)| = |Φ(r, t)| × |sin θ(r, r0, t)| ,  (13)

where θ(r, r0, t) (θ̃(r, r0, t)) denotes the phase lag between Φ(r, t) (Φ̃(r, t)) and Ψ(r0, t).

Let us first focus on phase connectivity and consider two widely used coupling measures between Φ(r, t) and Ψ(r0, t): the cross-density 〈Φ(r, ·) Ψ(r0, ·)*〉 and the phase coherence 〈exp iθ(r, r0, ·)〉. The effect of orthogonalization on cross-density is directly obtained using Eq. 11,

〈Φ̃(r, ·) Ψ(r0, ·)*〉 = i Im[〈Φ(r, ·) Ψ(r0, ·)*〉] .  (14)

Orthogonalization removes the real part of the cross-density, and thus generally underestimates its modulus compared to our new method. Likewise, it forces phase coherence to be purely imaginary. More precisely, Eq. 12 yields

〈exp iθ̃(r, r0, ·)〉 = i (f+ − f−) ,  (15)

where f+ and f− denote the fractions of time spent by θ(r, r0, t) in the ranges [0, π) and [π, 2π), respectively. Orthogonalized phase coherence thus depends in a coarse way on the phase relations obtained with the structural correction, and is generally biased. For example, the phase-locking |〈exp iθ(r, r0, ·)〉| ≈ 1 occurring when θ(r, r0, t) varies narrowly around 0 or π symmetrically (f+ = f−) is suppressed after orthogonalization, |〈exp iθ̃(r, r0, ·)〉| = 0. Interestingly, the right-hand sides of Eqs. 14 and 15 respectively determine imaginary coherence [10] and the phase lag index [11, 12]. Notably, the phase lag index |f+ − f−| was introduced in [11] on heuristic grounds and conjectured to be insensitive to instantaneous linear couplings. The proof follows here as a consequence of our derivation from orthogonalization.
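
These two identities are easy to verify numerically; the Python sketch below (illustrative signal model of my own) checks that the orthogonalized cross-density equals i times the imaginary part of the uncorrected one (Eq. 14) and that the orthogonalized phase coherence reduces to i(f+ − f−), the phase lag index (Eq. 15).

import numpy as np

rng = np.random.default_rng(2)
T = 20000

# Seed signal and a phase-coupled signal with a mean lag of 0.7 rad plus noise.
psi0 = rng.rayleigh(size=T) * np.exp(1j * rng.uniform(0, 2 * np.pi, T))
phi = 0.8 * psi0 * np.exp(1j * 0.7) + 0.6 * (rng.standard_normal(T) + 1j * rng.standard_normal(T))
phi_tilde = 0.5 * (phi - (psi0**2 / np.abs(psi0)**2) * np.conj(phi))   # Eq. 11

# Eq. 14: cross-density of the orthogonalized field is purely imaginary.
cd, cd_tilde = np.mean(phi * np.conj(psi0)), np.mean(phi_tilde * np.conj(psi0))
print(np.allclose(cd_tilde, 1j * cd.imag))

# Eq. 15: orthogonalized phase coherence = i (f+ - f-), i.e. i times the phase lag index.
theta = np.angle(phi * np.conj(psi0))          # phase lag of Phi relative to Psi(r0)
theta_tilde = np.angle(phi_tilde * np.conj(psi0))
pli = np.mean(np.sign(np.sin(theta)))          # f+ - f-
print(np.allclose(np.mean(np.exp(1j * theta_tilde)), 1j * pli))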

The case of amplitude connectivity is more intricate because it is affected by phase relations, as follows from Eq. 13. In particular, the existence of an interaction between the phase lag θ(r, r0, t) and the amplitude |Ψ(r0, t)| introduces spurious coupling between |Φ̃(r, t)| and |Ψ(r0, t)|, which may lead to overestimation. This effect is an artifact of orthogonalization and is independent of spatial leakage itself. To illustrate this, let us compare the temporal correlation ρ(r, r0) between |Φ(r, t)| and |Ψ(r0, t)| to its orthogonalized equivalent ρ̃(r, r0) between |Φ̃(r, t)| and |Ψ(r0, t)|. We first assume independence of the amplitudes and the phase lag, so that the expectation values 〈|sin θ(r, r0, ·)|^k |Φ(r, ·)|^l |Ψ(r0, ·)|^m〉 factorize into 〈|sin θ(r, r0, ·)|^k〉 × 〈|Φ(r, ·)|^l |Ψ(r0, ·)|^m〉 for integers k, l, m ≥ 0. Using the definition of the Pearson correlation coefficient, it is direct to show that

ρ̃(r, r0) / ρ(r, r0) = M[sin θ(r, r0, ·)] / √(1 + M[sin θ(r, r0, ·)]² + M[Φ(r, ·)]²) ,  (16)


where M[x] denotes the mean of |x(t)| divided by its standard deviation. This implies that amplitude correlation is underestimated, ρ̃(r, r0) ≤ ρ(r, r0), with equality in the limit of perfect phase-locking where θ(r, r0, t) converges to a constant ≠ 0 or π, i.e., M[sin θ(r, r0, ·)] → ∞. However, this bound can be violated in the presence of coupling between the amplitudes and the phase lag. This situation was investigated in [14] using a model of coherent sources, which involves amplitudes a0(t) and a1(t) with identical Rayleigh distributions, and phases φ0(t) and φ1(t) uniformly distributed on the circle, all four signals being independent of each other. This model assumes that Ψ(r0, t) = a0(t) exp(iφ0(t)) and

Φ(r, t) = c a0(t) e^(i(φ0(t)+ψ)) + √(1 − c²) a1(t) e^(iφ1(t)) .  (17)

The parameter 0 ≤ c ≤ 1 quantifies the strength of the linear relation between Ψ(r0, t) and Φ(r, t), and ψ controls their mean phase lag. The relation between c, ψ, ρ(r, r0) and ρ̃(r, r0) was studied numerically in [14], but can be derived analytically using an appropriate approximation scheme. In [15], we show that

ρ(r, r0) ≈ c² ,  (18)
ρ̃(r, r0) ≈ c̃² / √(c̃⁴ + 2c̃² + 1/2) ,  (19)

where c̃ = c sin ψ / √(1 − c²). Eliminating c̃ from Eq. 19 exhibits the non-linear dependence of the orthogonalized amplitude correlation ρ̃(r, r0) on ρ(r, r0) and the phase lag ψ. Let us notice that, depending on the values of c and ψ, orthogonalization may underestimate (e.g., for ψ = 0) or overestimate (e.g., for ψ = π/2) amplitude correlation. In the latter case, the ratio ρ̃(r, r0)/ρ(r, r0) always lies between 1 and √2, showing the existence of spurious amplitude coupling. These conclusions based on approximations qualitatively agree with numerical simulations [15]. Let us also note that the overestimation of amplitude correlation may in general be much more dramatic than in the above model. For example, the case where θ(r, r0, t) is given by arcsin(κ |Ψ(r0, t)/Φ(r, t)|) with positive κ implies ρ̃(r, r0) = 1 independently of ρ(r, r0) ≤ 1, showing that ρ̃(r, r0)/ρ(r, r0) is unbounded.
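
A short simulation of the coherent-sources model (a sketch in the spirit of the numerical study of [14, 15], with arbitrary parameter values) illustrates both regimes: for ψ = 0 the orthogonalized correlation collapses well below ρ ≈ c², while for ψ = π/2 it exceeds it, roughly as predicted by Eq. 19.

import numpy as np

rng = np.random.default_rng(3)
T = 200_000

def coherent_sources(c, psi_lag):
    """Model of Eq. 17 with independent Rayleigh amplitudes and uniform phases."""
    a0, a1 = rng.rayleigh(size=T), rng.rayleigh(size=T)
    ph0, ph1 = rng.uniform(0, 2 * np.pi, T), rng.uniform(0, 2 * np.pi, T)
    psi0 = a0 * np.exp(1j * ph0)
    phi = c * a0 * np.exp(1j * (ph0 + psi_lag)) + np.sqrt(1 - c**2) * a1 * np.exp(1j * ph1)
    return psi0, phi

def amp_corr(x, y):
    return np.corrcoef(np.abs(x), np.abs(y))[0, 1]

for c, psi_lag in [(0.6, 0.0), (0.6, np.pi / 2)]:
    psi0, phi = coherent_sources(c, psi_lag)
    phi_tilde = 0.5 * (phi - (psi0**2 / np.abs(psi0)**2) * np.conj(phi))   # Eq. 11
    ct = c * np.sin(psi_lag) / np.sqrt(1 - c**2)
    pred = ct**2 / np.sqrt(ct**4 + 2 * ct**2 + 0.5)                        # Eq. 19
    print(f"psi={psi_lag:.2f}: rho={amp_corr(phi, psi0):.3f} (Eq. 18: {c**2:.3f}), "
          f"rho_tilde={amp_corr(phi_tilde, psi0):.3f} (Eq. 19: {pred:.3f})")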

Orthogonalization in its full generality has been applied with some success to investigations of brain connectivity through electrophysiology [10–14]. However, our analysis highlights some of its shortcomings, especially the counter-intuitive introduction of spurious amplitude coupling. In this Letter, we introduced a new method specifically correcting for spatial leakage and based on a direct characterization of its structure rather than on generic regression arguments. Although this method has some limitations, which can be clearly identified, it offers a simple, general and principled way to analyze the large-scale network organization of spatially extended systems.

This theory will be directly relevant to investigations of electrophysiological brain networks taking place at the level of reconstructed sources. More generally, it should find applications in other scientific fields as well, since it applies to any real-world complex system for which measurements are indirect or incomplete.

I would like to thank X. De Tiege and N. Goldman for inspiring discussions and their valuable comments on this manuscript. This work was supported by a research grant from the Belgian Fonds de la Recherche Scientifique (convention 3.4611.08, FRS–F.N.R.S.).

∗ [email protected]
[1] R. Albert and A.-L. Barabási, Rev. Mod. Phys., 74, 47 (2002); M. E. J. Newman, SIAM Rev., 45, 167 (2003).
[2] R. Diestel, Graph Theory, 3rd ed., Graduate Texts in Mathematics (Springer-Verlag, 2005).
[3] C. T. Butts, Science, 325, 414 (2009).
[4] S. Bialonski, M.-T. Horstmann, and K. Lehnertz, Chaos, 20, 013134 (2010).
[5] A. Tarantola, Nat. Phys., 2, 492 (2006); G. Jobert, Europhysics News, 44, 21 (2013).
[6] A. Tarantola, Inverse Problem Theory and Methods for Model Parameter Estimation (SIAM, 2004).
[7] The finiteness of data also biases connectivity estimates; see S. Bialonski, M. Wendler, and K. Lehnertz, PLoS ONE, 6, e22826 (2011); A. Zalesky, A. Fornito, and E. Bullmore, NeuroImage, 60, 2096 (2012).
[8] This notion differs from that of spectral leakage, which originates from finite-dimensional truncations of the space of fields Ψ(r, t); see J. Trampert and R. Snieder, Science, 271, 1257 (1996).
[9] See J.-M. Schoffelen and J. Gross, Hum. Brain Mapp., 30, 1857 (2009) and references therein.
[10] G. Nolte, O. Bai, L. Wheaton, Z. Mari, S. Vorbach, and M. Hallett, Clin. Neurophysiol., 115, 2292 (2004).
[11] C. J. Stam, G. Nolte, and A. Daffertshofer, Hum. Brain Mapp., 28, 1178 (2007).
[12] A. Hillebrand, G. R. Barnes, J. L. Bosboom, H. W. Berendse, and C. J. Stam, NeuroImage, 59, 3909 (2012).
[13] M. J. Brookes, M. Woolrich, and G. R. Barnes, NeuroImage, 63, 910 (2012).
[14] J. F. Hipp, D. J. Hawellek, M. Corbetta, M. Siegel, and A. K. Engel, Nat. Neurosci., 15, 884 (2012).
[15] See Supplementary Material for Examples of inverse models, Unitary symmetry and spatial structure, and Derivations for amplitude correlation between coherent sources.
[16] B. D. Van Veen and K. M. Buckley, IEEE ASSP Mag., 5, 4 (1988); B. Van Veen, W. Van Drongelen, M. Yuchtman, and A. Suzuki, IEEE Trans. Biomed. Eng., 44, 867 (1997).
[17] M. W. Woolrich, A. Baker, H. Luckhoo, H. Mohseni, G. Barnes, M. Brookes, and I. Rezek, NeuroImage, 77, 77 (2013).
[18] J. Gross, J. Kujala, M. Hämäläinen, L. Timmermann, A. Schnitzler, and R. Salmelin, Proc. Natl. Acad. Sci. USA, 98, 694 (2001).
[19] M. Siegel, T. H. Donner, and A. K. Engel, Nat. Rev. Neurosci., 13, 121 (2012).


Supplementary material

Appendix A: Examples of inverse models.

Appendix B: Unitary symmetry and spatial structure.

Appendix C: Derivations for amplitude correlation between coherent sources.

Appendix A: Examples of inverse models

We showed in the main text that the kernel ℓ(r) of the direct operator is a prime ingredient controlling the spatial structure of the minimum Lp-norm inverse model. In this Appendix, we extend our discussion and indicate how reconstruction priors of the inverse model may affect the structure of spatial leakage. We also consider a fundamentally different type of inverse model based on spatial filtering. In the following, the direct operator L is always assumed to be given by Eq. 1.

Minimum Lp-norm estimates. The prototypal example of inverse models consists in optimizing the fit between observed data µ(t) and the prediction (LΨ)(t), with constraints on Ψ(r, t) imposing prior model assumptions. We minimize the strictly convex functional V = Vmisfit + Vprior over real scalar fields Ψ(r, t), where

Vmisfit = (1/2) ∫ dt ||µ(t) − (LΨ)(t)||² ,  (S1)
Vprior = (λ/p) ∫ dt d³r |Ψ(r, t)|^p ,  (S2)

with ||x||² = xTx and λ > 0, p > 1 [6]. The solution is unique and, when expressed as a functional of the data, Ψ(r, t) = (Wr µ)(t), defines the inverse operator Wr. The extremization equation δV/δΨ(r, t) = 0 reads

ℓ(r)T(µ(t) − (LΨ)(t)) = λ |Ψ(r, t)|^(p−2) Ψ(r, t) .  (S3)

Taking the absolute value and eliminating |Ψ(r, t)| yields

Ψ(r, t) = λ^(αp−1) × ℓ(r)T(µ(t) − (LΨ)(t)) / |ℓ(r)T(µ(t) − (LΨ)(t))|^αp ,  (S4)

where αp = (p − 2)/(p − 1) < 1. Equation 2 is recovered by setting λ = 1. Note that this derivation does not directly apply to the important case p = 1, where Vprior is not differentiable [6].

The qualitative spatial properties of Wr described in the main text are unaffected by certain model parameters but are affected by others. For example, changing Eq. S1, which embodies the physical assumption that the measurement noise ε(t) = µ(t) − (LΨ)(t) is a gaussian white process, into Vmisfit = (1/2) ∫ dt dt′ ε(t)T M(t, t′) ε(t′), which allows a more realistic description of noise to be incorporated, replaces ℓ(r)Tε(t) by ℓ(r)T ∫ dt′ M(t, t′) ε(t′) in Eq. S4. This leaves the structure of spatial leakage, i.e., the data-independent spatial properties of the reconstructed field, qualitatively unmodified. On the other hand, changing the Lp-norm affects spatial leakage. Indeed, using Vprior = (λ/p) ∫ dt d³r ω(r) |Ψ(r, t)|^p, where ω(r) > 0 is a weight function, instead of Eq. S2 incorporates an extra factor ω(r)^(αp−1) into the right-hand side of Eq. S4. This can strongly modify the structure of spatial leakage. For example, a discontinuity in the weight function forces the reconstruction to be discontinuous too, even when ℓ(r) is smooth.
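
As a sanity check of Eqs. S3, S4 (my own, not part of the Supplementary Material), one can minimize a discretized version of V for a non-quadratic exponent and verify that the optimum satisfies the stationarity condition; the 1-D grid, kernel, p = 1.5 and λ = 1 below are arbitrary choices, and the residual of the check is limited by the optimizer's tolerance.

import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(4)

# Discretized 1-D problem: m sensors, n field points, a single time sample.
n, m, p, lam = 60, 8, 1.5, 1.0
r = np.linspace(0.0, 1.0, n)
dr = r[1] - r[0]
L = np.exp(-(np.linspace(0.1, 0.9, m)[:, None] - r[None, :])**2 / (2 * 0.1**2))
mu = L @ np.exp(-(r - 0.4)**2 / 0.01) * dr + 0.01 * rng.standard_normal(m)

def V(psi):            # discretized Eqs. S1 + S2
    resid = mu - L @ psi * dr
    return 0.5 * resid @ resid + (lam / p) * np.sum(np.abs(psi)**p) * dr

def grad(psi):
    resid = mu - L @ psi * dr
    return -dr * (L.T @ resid) + lam * np.sign(psi) * np.abs(psi)**(p - 1) * dr

res = minimize(V, np.full(n, 0.1), jac=grad, method="L-BFGS-B",
               options={"maxiter": 10000, "ftol": 1e-18, "gtol": 1e-12})
psi = res.x

# Eq. S3 implies |l(r)^T (mu - L psi)| = lam |psi|^(p-1) at every grid point (Eq. S4 follows).
s = L.T @ (mu - L @ psi * dr)
print("max violation of the stationarity condition:",
      np.max(np.abs(np.abs(s) - lam * np.abs(psi)**(p - 1))))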

The linear case. The situation simplifies in the widely used case p = 2, for which Eq. S4 is linear since α2 = 0. The inverse operator now takes the form

Wr = ℓ(r)T A ,  (S5)

where A is an r-independent operator. In this case, Wr is seen to be at least as smooth as ℓ(r), while mere continuity holds in general. Interestingly, Eq. S5 actually follows from the unitary symmetry of Vprior with p = 2, as we show in Appendix B. The qualitative structure of spatial leakage for the minimum L2-norm estimate thus applies to all inverse models possessing this symmetry, be they linear or non-linear. Only A depends on the details of the model. An explicit expression for this operator cannot be obtained in general, unlike for the minimum L2-norm model, where [6]

(Aµ)(t) = (λ + ∫ d³r ℓ(r) ℓ(r)T)⁻¹ µ(t) .  (S6)

This result can be derived by rearranging Eq. S4 with αp = 0 as

∫ d³r′ a(r, r′) Ψ(r′, t) = ℓ(r)Tµ(t) ,  (S7)

where a(r, r′) = λ δ(r − r′) + ℓ(r)Tℓ(r′). The linear operator on the left-hand side can be inverted using an expansion for large λ,

Ψ(r, t) = ∫ d³r′ (1/λ) [δ(r − r′) − (1/λ) ℓ(r)Tℓ(r′) + (1/λ²) ∫ d³r″ ℓ(r)Tℓ(r″) ℓ(r″)Tℓ(r′) − . . .] ℓ(r′)Tµ(t)
        = (1/λ) ℓ(r)T [1 − (1/λ) ∫ d³r′ ℓ(r′)ℓ(r′)T + (1/λ²) (∫ d³r′ ℓ(r′)ℓ(r′)T)² − . . .] µ(t) ,  (S8)

which leads to Eqs. S5, S6 after re-summation of this geometric series and analytic continuation in λ.
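
The closed form is easy to confirm numerically; in the sketch below (an illustrative discretization of my own), the operator of Eqs. S5, S6 coincides with the solution of the discretized Tikhonov least-squares problem.

import numpy as np

rng = np.random.default_rng(5)

# Columns of G are samples of the kernel l(r_i); integrals become sums times dr.
n, m, lam = 120, 10, 0.5
r = np.linspace(0.0, 1.0, n)
dr = r[1] - r[0]
G = np.exp(-(np.linspace(0.05, 0.95, m)[:, None] - r[None, :])**2 / (2 * 0.12**2))
mu = rng.standard_normal(m)

# Eqs. S5, S6: Psi(r_i) = l(r_i)^T A mu with A = (lam + int d3r l l^T)^-1.
A = np.linalg.inv(lam * np.eye(m) + (G * dr) @ G.T)
psi_closed = G.T @ (A @ mu)

# Direct minimization of the p = 2 functional (S1) + (S2) as a stacked least-squares problem.
M = np.vstack([G * dr, np.sqrt(lam * dr) * np.eye(n)])
b = np.concatenate([mu, np.zeros(n)])
psi_direct = np.linalg.lstsq(M, b, rcond=None)[0]
print("max difference:", np.max(np.abs(psi_closed - psi_direct)))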

Spatial filtering. A widely used alternative to the above constrained least-squares criterion is the beamformer [16]. It relies on fundamentally different ideas, yet the discussion of spatial leakage remains similar. The inverse operator is defined by a linear spatial filter

(Wr µ)(t) = w(r)Tµ(t)  (S9)

that focuses on the signal originating from location r. In the linearly constrained minimum variance beamformer [16], this filtering is achieved by imposing the unit-gain constraint w(r)Tℓ(r) = 1, which ensures that field activity at r passes through the filter undeformed, and by minimizing the mean square 〈(Wr µ)²〉 to dampen the contribution from elsewhere. Introducing the data covariance matrix Σ = 〈µµT〉 and a Lagrange multiplier λ(r) enforcing the constraint, this problem amounts to minimizing the function

w(r)T Σ w(r) + λ(r) (w(r)Tℓ(r) − 1)  (S10)

over w(r) ∈ R^m. A standard calculation yields [16]

w(r) = Σ⁻¹ ℓ(r) / (ℓ(r)T Σ⁻¹ ℓ(r)) .  (S11)

The analytical dependence of the inverse operator on ℓ(r) is qualitatively similar to that of Eq. S4 in the formal case p = 0, and the discussion of spatial leakage properties made in the main text holds unchanged, except that the spatial filter w(r) is at least as smooth as the kernel ℓ(r).

These comments also apply to some generalizations of the beamformer because the qualitative structure of spatial leakage in Eq. S11 does not depend on the data covariance matrix Σ. For example, the non-stationary beamformer of [17] uses a time-dependent data covariance Σ(t), making the spatial filter weights given by Eq. S11 vary in time but with the same qualitative dependence on ℓ(r). They also apply to frequency-domain colored beamformers where Σ is replaced by the cross-density matrix Σ(ν) = 〈µ(ν)µ(ν)†〉 of the data Fourier transform µ(ν) at frequency ν [18]. It is noteworthy that in this example, the orthogonalization method of [14] cannot be applied, contrary to the new structural leakage correction scheme presented in the main text. Indeed, the structural model is analogous to Eqs. 7, 8 but uses the complex coefficient

γ(r, r0, ν) = ℓ(r)T Σ(ν)⁻¹ ℓ(r0) / (ℓ(r)T Σ(ν)⁻¹ ℓ(r))  (S12)

obtained using Eqs. 8 and S11 with Σ replaced by Σ(ν). Orthogonalization must therefore be adapted to this case by computing the complex-valued regression coefficient, which yields a trivial corrected field Φ̃(r, ν) = 0.
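
A minimal sketch of these formulas (mine, not code from [16]; all simulation parameters are arbitrary) builds the unit-gain filters of Eq. S11 from a data covariance matrix and then evaluates the structural coefficient of Eq. 8 with them, which is the time-domain counterpart of Eq. S12.

import numpy as np

rng = np.random.default_rng(6)

# Synthetic data: m sensors, Gaussian kernels l(r) on a 1-D grid, two active locations.
n, m, T = 100, 12, 5000
r = np.linspace(0.0, 1.0, n)
G = np.exp(-(np.linspace(0.05, 0.95, m)[:, None] - r[None, :])**2 / (2 * 0.12**2))
src = np.zeros((n, T))
src[30], src[70] = rng.standard_normal(T), rng.standard_normal(T)
mu = G @ src + 0.1 * rng.standard_normal((m, T))

Sigma_inv = np.linalg.inv(np.cov(mu))            # inverse data covariance

def lcmv_weights(l):
    """Eq. S11: unit-gain minimum-variance spatial filter for the kernel column l(r)."""
    v = Sigma_inv @ l
    return v / (l @ v)

W = np.stack([lcmv_weights(G[:, i]) for i in range(n)])   # n x m, rows w(r_i)^T
psi = W @ mu                                              # beamformer reconstruction

i0 = 30                                                   # seed location r0
gamma = (W @ G[:, i0]) / (W[i0] @ G[:, i0])               # Eq. 8 with these filters
phi = psi - np.outer(gamma, psi[i0])                      # leakage-corrected field (Eq. 7)
print("unit gain at r0:", np.isclose(W[i0] @ G[:, i0], 1.0))
print("corrected field vanishes at r0:", np.allclose(phi[i0], 0.0))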

Appendix B: Unitary symmetry and spatial structure

In this Appendix, we show that Eq. S5 derives from the unitary invariance of the L2-norm. More generally, we investigate the consequences of unitary symmetry for the analytical dependence of inverse operators on ℓ(r). To the best of the author's knowledge, this question has not been considered in the literature. We first explain what is meant by unitary symmetry and then present two elementary derivations of Eq. S5.

Unitary symmetry. It is convenient to introduce the standard inner product

(Ψ1|Ψ2) = ∫ d³r Ψ1(r) Ψ2(r) ,  (S13)

which allows us to write a compact expression for the direct operator defined by Eq. 1,

LΨ = (ℓ|Ψ) .  (S14)

It is then quite natural to consider unitary transformations Ψ → UΨ acting on fields Ψ(r) in the fundamental representation

(UΨ)(r) = ∫ d³r′ U(r, r′) Ψ(r′)  (S15)

and leaving the inner product invariant,

(UΨ1|UΨ2) = (Ψ1|Ψ2) .  (S16)

Since we consider here real-valued fields, this imposes the orthogonality constraint

∫ d³r U(r, r′) U(r, r″) = δ(r′ − r″) .  (S17)

The presence of the parameter ℓ(r) explicitly forbids unitary transformations from being symmetries of the direct and inverse problems. A standard trick is to promote ℓ(r) to a background field transforming in the same representation, hence restoring symmetry in the direct model since (Uℓ|UΨ) = (ℓ|Ψ).

We now consider inverse models that are invariant under unitary transformations Ψ → UΨ, ℓ → Uℓ. We also assume that, among their parameters, ℓ(r) is the only function of r.

Extremizing invariant functionals. Let us suppose that the reconstruction Ψ(r, t) = (Wr µ)(t) is defined as the extremum of some unitary invariant functional

V = V[m, s]  (S18)

in which Ψ(r, t) and ℓ(r) appear through the invariants m(t) = (ℓ|Ψ(·, t)) and s(t, t′) = (Ψ(·, t)|Ψ(·, t′)) only. The extremization equation δV/δΨ(r, t) = 0 reads

ℓ(r)T δV/δm(t) + 2 ∫ dt′ [δV/δs(t, t′)] Ψ(r, t′) = 0 .  (S19)

Applying the inverse of the linear operator in the second term, if it exists, we obtain the implicit equation

Ψ(r, t) = ℓ(r)T B[t, m, s] ,  (S20)

for some coefficients B[t, m, s] depending on the derivatives of V, as well as on all other r-independent parameters such as the data µ(t). Since this holds whatever µ(t), we obtain Eq. S5 with (Aµ)(t) = B[t, m, s], which is independent of r, as claimed. This argument represents a direct generalization of the derivation of Eq. S4 for p = 2. It shows in particular that physical assumptions about the measurement noise ε(t) = µ(t) − (LΨ)(t) are irrelevant to this argument, since Vmisfit in Eq. S1 can be replaced by any functional of µ(t) and (LΨ)(t). Likewise, the gaussian white noise prior assumption on Ψ(r, t) embodied in the L2-norm term, Eq. S2 with p = 2, can be waived by using any functional of s(t, t′) for Vprior.

Equivariance constraint. A more explicit result can be obtained by directly solving the general constraints implied by unitary symmetry. The following argument is also slightly more general, in that it does not assume the inverse model to be defined as an extremization problem. The reconstruction is a functional

Ψ = F[ℓ1, . . . , ℓm]  (S21)

of the m components ℓk(r) of the kernel ℓ(r). Unitary symmetry imposes that, if Ψ is the reconstruction for given functions ℓk, then the reconstruction for the functions Uℓk must be UΨ. Mathematically, this translates into the property that F is unitary equivariant, i.e., for any linear operator U satisfying Eqs. S15, S17, we have

F[Uℓ1, . . . , Uℓm] = U F[ℓ1, . . . , ℓm] .  (S22)

This assumption imposes strong constraints on the functional F, which in consequence must be of the form

F[ℓ1, . . . , ℓm] = ∑_{k=1}^{m} ℓk Ck((ℓ|ℓT)) ,  (S23)

as can be shown using classical theorems from invariant theory. Functions Ck of the matrix (ℓ|ℓT) of inner products (ℓj|ℓk) are the most general unitary invariant functionals of the ℓk's, whereas the factors ℓk are necessary for this sum to transform in the fundamental representation. All r-independent model parameters, including time t and the data µ(t), enter Eq. S23 through the Ck's only. We thus recover Eq. S5 again, with the kth component of (Aµ)(t) given by Ck. The operator A is indeed independent of r, since ℓ(r) only appears via the inner products (ℓ|ℓT), as was explicit in Eq. S6.

Let us notice that the above arguments can be generalized to the case where other parameters transform non-trivially, by promoting them to background fields. Direct models involving other inner products and isometry groups or subgroups can also be considered along these lines.

Appendix C: Derivations for amplitude correlation between coherent sources

The effect of phase coupling on orthogonalized amplitude correlation was investigated in [14] using simulations of the coherent-sources model presented in the main text. We claimed that Eqs. 18, 19 yield a good approximation of their results. In this Appendix, we make the approximation scheme used explicit and then derive these equations using elementary calculations. We also check the validity of this approximation by reproducing simulations of [14]. All probabilities considered here are based on ergodic densities, i.e., temporal averaging.

Let us recall that a0(t) and a1(t) are two positive signals with Rayleigh distribution, i.e., their fraction of time spent between values a and a + da is −d exp(−a²/2α²), α > 0 (Fig. S1), that φ0(t) and φ1(t) are uniformly distributed over [0, 2π), and that all four variables are independent of each other. These assumptions imply in particular that ak(t) exp iφk(t) is distributed according to a complex gaussian with zero mean and variance α². Our goal is to compute the correlations

ρ(r, r0) = corr[|Φ(r, ·)|, |Ψ(r0, ·)|] ,  (S24)
ρ̃(r, r0) = corr[|Φ̃(r, ·)|, |Ψ(r0, ·)|] ,  (S25)

between the amplitudes of Ψ(r0, t) = a0(t) exp iφ0(t) and Φ(r, t) given by Eq. 17 or Φ̃(r, t) given by Eq. 11.

The approximation. Computing these correlations as functions of c and ψ is a priori difficult. However, the following approximation scheme can be used. All amplitudes having finite variance, e.g., σ² = (2 − π/2)α² for the Rayleigh distribution, they are effectively supported on a region where the amplitude squared can be reasonably approximated by a linear function, as exemplified in Fig. S1. Since correlation is invariant under linear transformations, this yields

ρ(r, r0) ≈ corr[|Φ(r, ·)|², |Ψ(r0, ·)|²] ,  (S26)
ρ̃(r, r0) ≈ corr[|Φ̃(r, ·)|², |Ψ(r0, ·)|²] .  (S27)
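
The quality of this substitution is easy to probe numerically. The sketch below (mine; the coupling model is an arbitrary pair of correlated complex gaussians, whose moduli are Rayleigh distributed) shows that the correlation of the amplitudes and the correlation of the squared amplitudes remain close over a range of coupling strengths.

import numpy as np

rng = np.random.default_rng(7)
T = 200_000
z = rng.standard_normal((2, T)) + 1j * rng.standard_normal((2, T))

for mix in (0.2, 0.5, 0.8):
    a = np.abs(z[0])
    b = np.abs(mix * z[0] + np.sqrt(1 - mix**2) * z[1])
    r_amp = np.corrcoef(a, b)[0, 1]
    r_sq = np.corrcoef(a**2, b**2)[0, 1]
    print(f"mix={mix}: corr of amplitudes {r_amp:.3f}, corr of squares {r_sq:.3f}")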

Derivation of Eq. 18. It will prove useful to observe that Φ(r, t) is gaussian itself with zero mean, since it is given by a linear combination of ak(t) exp iφk(t); see Eq. 17. Using

|Φ(r, t)| = |c a0(t) + √(1 − c²) a1(t) e^(i(φ1(t)−φ0(t)−ψ))| ,  (S28)

〈a0²〉 = 〈a1²〉 and 〈exp i(φ1 − φ0)〉 = 0, we also see that

〈|Φ(r, ·)|²〉 = 〈ak²〉 ,  〈Φ(r, ·)²〉 = 0 .  (S29)

We conclude that Φ(r, t) actually has the same variance, and thus the same distribution, as ak(t) exp iφk(t). In particular,

〈|Φ(r, ·)|^n〉 = 〈a0^n〉 = 〈a1^n〉 ,  (S30)


FIG. S1. Linear approximation of the squared amplitude. The Rayleigh distribution of a(t) with maximum reached at the value α = 1 is shown together with the function a². The Taylor expansion a² ≈ 2αa − α² around a = α is seen to be a good approximation in the α ± σ interval (green shaded region), which contains about 80% of samples. Over a larger interval, linear regression still yields a reasonable approximation.

for n ≥ 0. Now, using Eq. S28, the mutual independence between a0(t), a1(t) and φ1(t) − φ0(t), and the identities 〈a0²〉 = 〈a1²〉 and 〈cos(φ1 − φ0 − ψ)〉 = 0, we find

〈|Φ(r, ·)|² a0²〉 = 〈a0²〉² + c² σ²a0² ,  (S31)

where σa0² denotes the standard deviation of a0(t)². Since |Ψ(r0, t)| = a0(t) and |Φ(r, t)| have the same moments according to Eq. S30, this shows that the right-hand side of Eq. S26 equals c². We thus recover Eq. 18.

Let us notice that the independence in ψ actually holds exactly. This is seen from Eq. S28, which shows that φ0(t), φ1(t) and ψ only appear as φ1(t) − φ0(t) − ψ, and from the trivial property

∮ dφ0 dφ1 f(φ1 − φ0 − ψ) = ∮ dφ0 dφ1 f(φ1 − φ0) ,  (S32)

for any function f on the circle. To check the validity of our approximation for ρ(r, r0) as a function of c, the results obtained from Eq. 18 and from direct simulations are plotted in Fig. S2(a). The comparison shows fair agreement and thus indicates that the approximation in Eq. S26 is quite good. Actually, fitting a power law ρ(r, r0) = c^k to the simulation curve using a log-log linear regression yields k ≈ 2.1.

Derivation of Eq. 19. Let us now focus on Eq. S27. The orthogonalized field Φ̃(r, t) does not follow the same distribution as Ψ(r0, t) and Φ(r, t). This is seen from its explicit expression

Φ̃(r, t) = i e^(iφ0(t)) × [c sin ψ a0(t) + √(1 − c²) a1(t) sin(φ1(t) − φ0(t))] ,  (S33)

which is obtained by direct application of Eqs. 11 and 17. Since the rescaling |Φ̃(r, t)| → |Φ̃(r, t)|/√(1 − c²) leaves the correlation in Eq. S25 unchanged, and since |Φ̃(r, t)|/√(1 − c²) depends on c and ψ through the combination c̃ = c sin ψ/√(1 − c²), we conclude that ρ̃(r, r0) must be a function of c̃ only. Its approximate dependence on c̃ can be obtained by direct computation of the right-hand side of Eq. S27. The calculations are similar to those presented above, although they are slightly more cumbersome. We obtain

〈|Φ̃(r, ·)|² a0²〉 − 〈|Φ̃(r, ·)|²〉〈a0²〉 = c² sin²ψ σ²a0²  (S34)

for the covariance between the squared amplitudes a0(t)² and |Φ̃(r, t)|². The correlation is then obtained upon division by their standard deviations σa0² and σ|Φ̃(r,·)|². For the latter, we find

σ²|Φ̃(r,·)|² = (1 − c²)² × [(c̃⁴ + 3/8) σ²a0² + (2c̃² + 1/8) 〈a0²〉²] .  (S35)

Plugged into Eq. S27 and using the definition of c̃, these results yield

ρ̃(r, r0) ≈ c̃² / √(c̃⁴ + 3/8 + (2c̃² + 1/8) M[a0²]²) ,  (S36)

where M[x] was defined in the main text after Eq. 16. Here, we have M[a0²] = 〈a0²〉/σa0² = 1 for the Rayleigh distribution, from which Eq. 19 finally follows.

This analytical result is compared to the simulation in Fig. S2(b), and good agreement is again observed. In particular, the fact that ρ̃(r, r0) ≥ ρ(r, r0) for all c when ψ = π/2, whose interpretation is given in the main text, is seen to hold both for Eq. 19 and for the simulation.


FIG. S2. Analytical results versus simulations. Comparison of Eqs. 18 (a) and 19 (b) to simulations, in which the parameters 0 ≤ c ≤ 1 and 0 ≤ ψ ≤ π/2 were varied, shows very good consistency.

