
A convex pseudo-likelihood framework for high

dimensional partial correlation estimation with

convergence guarantees

Kshitij Khare, University of Florida, USA

Sang-Yun Oh, Stanford University, USA

Bala Rajaratnam, Stanford University, USA

Abstract

Sparse high dimensional graphical model selection is a topic of much interest in mod-

ern day statistics. A popular approach is to apply ℓ1-penalties to either (1) parametric

likelihoods, or, (2) regularized regression/pseudo-likelihoods, with the latter having

the distinct advantage that they do not explicitly assume Gaussianity. As none of the

popular methods proposed for solving pseudo-likelihood based objective functions have

provable convergence guarantees, it is not clear if corresponding estimators exist or

are even computable, or if they actually yield correct partial correlation graphs. This

paper proposes a new pseudo-likelihood based graphical model selection method that

aims to overcome some of the shortcomings of current methods, but at the same time

retain all their respective strengths. In particular, we introduce a novel framework

that leads to a convex formulation of the partial covariance regression graph prob-

lem, resulting in an objective function comprised of quadratic forms. The objective

is then optimized via a coordinatewise approach. The specific functional form of the

objective function facilitates rigorous convergence analysis leading to convergence guar-

antees; an important property that cannot be established using standard results, when

the dimension is larger than the sample size, as is often the case in high dimensional

applications. These convergence guarantees ensure that estimators are well-defined un-

der very general conditions, and are always computable. In addition, the approach

yields estimators that have good large sample properties and also respect symmetry.

Furthermore, application to simulated/real data, timing comparisons and numerical

convergence is demonstrated. We also present a novel unifying framework that places

all graphical pseudo-likelihood methods as special cases of a more general formulation,

leading to important insights.

Keywords: Sparse inverse covariance estimation, Graphical model selection, Soft

thresholding, Partial correlation graph, Convergence guarantee, Generalized pseudo-

likelihood, Gene regulatory network

arXiv:1307.5381v3 [stat.ME] 14 Aug 2014

1 Introduction

One of the hallmarks of modern day statistics is the advent of high-dimensional datasets

arising particularly from applications in the biological sciences, environmental sciences and

finance. A central quantity of interest in such applications is the covariance matrix Σ of

high dimensional random vectors. It is well known that the sample covariance matrix S can

be a poor estimator of Σ, especially when p/n is large, where n is the sample size and p is

the number of variables in the dataset. Hence S is not a useful estimator for Σ for high-

dimensional datasets, where often either p ≫ n (“large p, small n”) or p is comparable

to n and both are large (“large p, large n”). The basic problem here is that the number of

parameters in Σ is of the order p^2. Hence in the settings mentioned above, the sample size

is often not large enough to obtain a good estimator.

For many real life applications, the quantity of interest is the inverse covariance/partial

covariance matrix Ω = Σ^{-1}. In such situations, it is often reasonable to assume that there

are only a few significant partial correlations and the other partial correlations are negligible

in comparison. In mathematical terms, this amounts to making the assumption that the

inverse covariance matrix Ω = Σ^{-1} = ((ω_ij))_{1≤i,j≤p} is sparse, i.e., many entries in Ω are zero.

Note that ωij = 0 is equivalent to saying that the partial correlation between the ith and

jth variables is zero (under Gaussianity, this reduces to the statement that the ith and jth

variables are conditionally independent given the other variables). The zeros in Ω can be

conveniently represented by partial correlation graphs. The assumption of a sparse graph is

often deemed very reasonable in applications. For example, as Peng et al. (2009) point out,

among 26 examples of published networks compiled by Newman (2003), 24 networks had

edge density less than 4%.

A number of methods have been proposed for identifying sparse partial correlation graphs

in the penalized likelihood and penalized regression based framework (Meinshausen and

Buhlmann, 2006, Friedman et al., 2008, Peng et al., 2009, Friedman et al., 2010). The main

focus here is estimation of the sparsity pattern. Many of these methods do not necessarily

yield positive definite estimates of Ω. However, once a sparsity pattern is established, a

positive definite estimate can be easily obtained using efficient methods (see Hastie et al.

(2009), Speed and Kiiveri (1986)).

The penalized likelihood approach induces sparsity by minimizing the (negative) log-

likelihood function with an ℓ1 penalty on the elements of Ω. In the Gaussian setup, this

approach was pursued by Banerjee et al. (2008) and others. Friedman et al. (2008) proposed

the graphical lasso (“Glasso”) algorithm for the above minimization problem, which is sub-

stantially faster than earlier methods. In recent years, many interesting and useful methods


have been proposed for speeding up the performance of the graphical lasso algorithm (see

Mazumder and Hastie (2012) for instance). It is worth noting that for these methods to

provide substantial improvements over the graphical lasso, certain assumptions are required

on the number and size of the connected components of the graph implied by the zeros in Ω

(the minimizer).

Another useful approach introduced by Meinshausen and Buhlmann (2006) estimates the

zeros in Ω by fitting separate lasso regressions for each variable given the other variables.

These individual lasso fits give neighborhoods that link each variable to others. Peng et al.

(2009) improve this neighborhood selection (NS) method by taking the natural symmetry in

the problem into account (i.e., Ωij = Ωji), as not doing so could result in less efficiency and

contradictory neighborhoods.

In particular, the SPACE (Sparse PArtial Correlation Estimation) method was proposed

by Peng et al. (2009) as an effective alternative to existing methods for sparse estimation

of Ω. The SPACE procedure iterates between (1) updating partial correlations by a joint

lasso regression and (2) separately updating the partial variances. As indicated above, it

also accounts for the symmetry in Ω and is computationally efficient. Peng et al. (2009)

show that under suitable regularity conditions, SPACE yields consistent estimators in high

dimensional settings. All the above properties make SPACE an attractive regression based

approach for estimating sparse partial correlation graphs. In the examples presented in Peng

et al. (2009), the authors find that empirically the SPACE algorithm appears to converge

very quickly. It is, however, not clear if SPACE will converge in general. Convergence is of course

critical so that the corresponding estimator is always guaranteed to exist and is therefore

meaningful, both computationally and statistically. In fact, as we illustrate in Section 2, the

SPACE algorithm might fail to converge in simple cases, for both the standard choices of

weights suggested in Peng et al. (2009). Motivated by SPACE, Friedman et al. (2010) present

a coordinate-wise descent approach (the “Symmetric lasso”), which may be considered as a

symmetrized version of the approach in Meinshausen and Buhlmann (2006). As we show in

Section 2.3, it is also not clear if the Symmetric lasso will converge.

In this paper, we present a new method called the CONvex CORrelation selection methoD

(CONCORD) for sparse estimation of Ω. The algorithm obtains estimates of Ω

by minimizing an objective function, which is jointly convex, but more importantly com-

prised of quadratic forms in the entries of Ω. The subsequent minimization is performed

via coordinate-wise descent. The convexity is strict if n ≥ p, in which case standard results

guarantee the convergence of the coordinate-wise descent algorithm to the unique global

minimum. If n < p, the objective function may not be strictly convex. As a result, a unique

global minimum may not exist, and existing theory does not guarantee convergence of the


Property                              NS    SPACE   SYMLASSO   SPLICE   CONCORD
Symmetry                                      +        +          +        +
Convergence guarantee (fixed n)      N/A                                   +
Asymptotic consistency (n, p → ∞)     +       +                            +

Table 1: Comparison of regression based graphical model selection methods. A “+” indicates
that a specified method has the given property. A blank space indicates the absence of a
property. “N/A” stands for not applicable.

sequence of iterates of the coordinate-wise descent algorithm to a global minimum. In Section

4, by exploiting the quadratic forms present in the objective, it is rigorously demonstrated

that the sequence of iterates does indeed converge to a global minimum of the objective

function regardless of the dimension of the problem. Furthermore, it is shown in Section 6

that the CONCORD estimators are asymptotically consistent in high dimensional settings

under regularity assumptions identical to Peng et al. (2009). Hence, our method preserves

all the attractive properties of SPACE, while also providing a theoretical guarantee of con-

vergence to a global minimum. In the process, CONCORD yields an estimator Ω̂ that is

well-defined and is always computable. The strengths of CONCORD are further illustrated

in the simulations and real data analysis presented in Section 5. A comparison of the rel-

evant properties of different estimators proposed in the literature is provided in Table 1

(Neighborhood selection (NS) by Meinshausen and Buhlmann (2006), SPACE by Peng et al.

(2009), Symmetric lasso (SYMLASSO) by Friedman et al. (2010), SPLICE by Rocha et al.

(2008) and CONCORD). The table shows that the CONCORD algorithm preserves all the

attractive properties of existing algorithms, while also providing rigorous convergence guar-

antees. Another major contribution of the paper is the development of a unifying framework

that renders the different pseudo-likelihood based graphical model selection procedures as

special cases. This general formulation facilitates a direct comparison between the above

pseudo-likelihood based methods and gives deep insights into their respective strengths and

weaknesses.

The remainder of the paper is organized as follows. Section 2 briefly describes the SPACE

algorithm and presents examples where it fails to converge. This section motivates our

work and also analyzes other regression-based or pseudo-likelihood methods that have been

proposed. Section 3 introduces the CONCORD method and presents a general framework

that unifies recently proposed pseudo-likelihood methods. Section 4 establishes convergence


of CONCORD to a global minimum, even if n < p. Section 5 illustrates the performance of

the CONCORD procedure on simulated and real data. Comparisons to SPACE and Glasso

are provided. When applied to gene expression data, the results given by CONCORD are

validated in a significant way by a recent extensive breast cancer study. Section 6 establishes

large sample properties of the CONCORD approach. Concluding remarks are given in Section

7. The supplemental document contains proofs of some of the results in the paper.

2 The SPACE algorithm and convergence properties

Let the random vectors Y^k = (y^k_1, y^k_2, ..., y^k_p)′, k = 1, 2, ..., n, denote i.i.d. observations from a multivariate distribution with mean vector 0 and covariance matrix Σ. Let Ω = Σ^{-1} = ((ω_ij))_{1≤i,j≤p} denote the inverse covariance matrix, and let ρ = (ρ^{ij})_{1≤i<j≤p}, where ρ^{ij} = −ω_ij/√(ω_ii ω_jj) denotes the partial correlation between the ith and jth variables for 1 ≤ i ≠ j ≤ p. Note that ρ^{ij} = ρ^{ji} for i ≠ j. Denote the sample covariance matrix by S, and the sample corresponding to the ith variable by Y_i = (y^1_i, y^2_i, ..., y^n_i)′.
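To make the notation concrete, the following minimal Python/NumPy sketch (our own illustration, not part of the paper) computes the partial correlations ρ^{ij} = −ω_ij/√(ω_ii ω_jj) from a given Ω; the matrix used is the one from Example 1 below.

```python
import numpy as np

# An example precision matrix Omega (the one used in Example 1 below)
Omega = np.array([[3.0, 2.1, 0.0],
                  [2.1, 3.0, 2.1],
                  [0.0, 2.1, 3.0]])

# Partial correlations: rho^{ij} = -omega_ij / sqrt(omega_ii * omega_jj)
d = np.sqrt(np.diag(Omega))
rho = -Omega / np.outer(d, d)
np.fill_diagonal(rho, 1.0)   # the diagonal is conventionally set to 1
print(np.round(rho, 3))
# A zero in Omega (here omega_13 = 0) gives a zero partial correlation,
# i.e., no edge between variables 1 and 3 in the partial correlation graph.
```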

2.1 The SPACE algorithm

Peng et al. (2009) propose the following novel iterative algorithm to estimate the partial

correlations {ρ^{ij}}_{1≤i<j≤p} and the partial variances {ω_ii}_{1≤i≤p} corresponding to Ω (see Al-

gorithm 1).

2.2 Convergence Properties of SPACE

From empirical studies, Peng et al. (2009) find that the SPACE algorithm converges quickly.

As mentioned in the introduction, it is not immediately clear if convergence can be established

theoretically. In an effort to understand such properties, we now place the SPACE algorithm

in a useful optimization framework.

Lemma 1. For the choice of weights w_i = ω_ii, the SPACE algorithm corresponds to an iterative partial minimization procedure (IPM) for the following objective function:

$$
Q_{\text{spc}}(\Omega) = \frac{1}{2}\sum_{i=1}^{p}\left(-n\log\omega_{ii} + \omega_{ii}\Big\|Y_i - \sum_{j\ne i}\rho^{ij}\sqrt{\frac{\omega_{jj}}{\omega_{ii}}}\,Y_j\Big\|^2\right) + \lambda\sum_{1\le i<j\le p}\big|\rho^{ij}\big|
= \frac{1}{2}\sum_{i=1}^{p}\left(-n\log\omega_{ii} + \omega_{ii}\Big\|Y_i + \sum_{j\ne i}\frac{\omega_{ij}}{\omega_{ii}}\,Y_j\Big\|^2\right) + \lambda\sum_{1\le i<j\le p}\big|\rho^{ij}\big|. \tag{1}
$$


Algorithm 1 (SPACE pseudocode)

Input: standardize data to have mean zero and standard deviation one
Input: fix maximum number of iterations: r_max
Input: fix initial estimate: ω_ii^(0) = 1/s_ii (as suggested)
Input: choose weights* w_i (w_i = ω_ii or w_i = 1)
Set r ← 1
repeat
    ## Update partial correlations
    Update ρ^(r) by minimizing (with current estimates {ω_ii^(r−1)}_{i=1}^p held fixed)

    $$\frac{1}{2}\sum_{i=1}^{p} w_i \Big\| Y_i - \sum_{j\ne i}\rho^{ij}\sqrt{\frac{\omega_{jj}^{(r-1)}}{\omega_{ii}^{(r-1)}}}\,Y_j \Big\|_2^2 + \lambda \sum_{1\le i<j\le p}\big|\rho^{ij}\big| \tag{2}$$

    ## Update conditional variances
    Update {ω_ii^(r)}_{i=1}^p by computing (with fixed ρ_ij^(r−1) and ω_ii^(r−1) for all i and j)

    $$\frac{1}{\omega_{ii}^{(r)}} = \frac{1}{n}\Big\| Y_i - \sum_{j\ne i}\big(\rho^{ij}\big)^{(r-1)}\sqrt{\frac{\omega_{jj}^{(r-1)}}{\omega_{ii}^{(r-1)}}}\,Y_j \Big\|_2^2 \tag{3}$$

    for i = 1, ..., p.
    r ← r + 1
    Update weights: w_i
until r == r_max
Return (ρ^(r_max), {ω_ii^(r_max)}_{i=1}^p)

*Peng et al. (2009) suggest two natural choices of weights w_i: (1) uniform weights w_i = 1 for all i = 1, 2, ..., p; (2) partial variance weights w_i = ω_ii.

Proof: Note that when fixing the diagonals {ω_ii}_{i=1}^p, the minimization in (2) in the SPACE algorithm (with weights w_i = ω_ii) corresponds to minimizing Q_spc with respect to ρ. Now, let ω_ii be the minimizer of Q_spc with respect to ω_ii, fixing {β_ij}_{1≤i≠j≤p} (where β_ij = ρ^{ij}√(ω_jj/ω_ii) = −ω_ij/ω_ii). Then, it follows that

$$\omega_{ii} = \left(\frac{1}{n}\Big\|Y_i - \sum_{j\ne i}\beta_{ij}Y_j\Big\|_2^2\right)^{-1}. \tag{4}$$

The result follows by comparing (4) with the updates in (3).

Although Lemma 1 identifies SPACE as an IPM, existing theory for iterative partial mini-


mization (see for example Zangwill (1969), Jensen et al. (1991), Lauritzen (1996), etc) only

guarantees that every accumulation point of the sequence of iterates is a stationary point of

the objective function Qspc. To establish convergence, one needs to prove that every contour

of the function Qspc contains only finitely many stationary points. It is not clear if this latter

condition holds for the function Qspc. Moreover, for choice of weights wi = 1, the SPACE

algorithm does not appear to have an iterative partial minimization interpretation.

To improve our understanding of the convergence properties of SPACE, we started by

testing the algorithm on simple examples. On some examples, SPACE converges very quickly;

however, examples can be found where SPACE does not converge for either of the two suggested

choices for weights: partial variance weights (wi = ωii) and uniform weights (wi = 1). We

now give an example of the lack of convergence.

Example 1: Consider the following population covariance and inverse covariance matrices:

$$
\Omega = \begin{pmatrix} 3.0 & 2.1 & 0.0 \\ 2.1 & 3.0 & 2.1 \\ 0.0 & 2.1 & 3.0 \end{pmatrix}, \qquad
\Sigma = \Omega^{-1} = \begin{pmatrix} 8.500 & -11.667 & 8.167 \\ -11.667 & 16.667 & -11.667 \\ 8.167 & -11.667 & 8.500 \end{pmatrix}. \tag{5}
$$

A sample of n = 100 i.i.d. vectors was generated from the corresponding N (0,Σ) distribu-

tion. The data was standardized and the SPACE algorithm was run with choice of weights

wi = ωii and λ = 160. After the first few iterations successive SPACE iterates alternate

between the following two matrices:

$$
\begin{pmatrix} 29.009570 & 27.266460 & 0.000000 \\ 27.266460 & 51.863320 & 24.680140 \\ 0.000000 & 24.680140 & 26.359350 \end{pmatrix}
\quad\text{and}\quad
\begin{pmatrix} 28.340040 & 27.221520 & -0.705390 \\ 27.221520 & 54.255190 & 24.569900 \\ -0.705390 & 24.569900 & 25.753040 \end{pmatrix}, \tag{6}
$$

thereby establishing non-convergence of the SPACE algorithm in this example (see also Figure

1(a)). Note that the two matrices in (6) have different sparsity patterns. A similar example

of non-convergence of SPACE with uniform weights is provided in Supplemental Section N.
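The matrices in (5) are easy to verify numerically; a minimal NumPy check (an illustrative sketch of ours, with values matching the display above up to rounding):

```python
import numpy as np

# Population precision matrix from (5)
Omega = np.array([[3.0, 2.1, 0.0],
                  [2.1, 3.0, 2.1],
                  [0.0, 2.1, 3.0]])

# Its inverse matches the displayed Sigma up to three-decimal rounding
Sigma = np.linalg.inv(Omega)
print(np.round(Sigma, 3))
# [[  8.5   -11.667   8.167]
#  [-11.667  16.667 -11.667]
#  [  8.167 -11.667   8.5  ]]
```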

A natural question to ask is whether the non-convergence of SPACE is pathological or

whether it is widespread in settings of interest. To this end, the following simulation study

was undertaken.

Example 2: We created a sparse 100 × 100 matrix Ω with edge density 4% and a condition

number of 100. A total of 100 multivariate Gaussian datasets (with n = 100) having mean

vector zero and covariance matrix Σ = Ω−1 were generated. Table 2 summarizes the number

of times (out of 100) SPACE1 (SPACE with uniform weights) and SPACE2 (SPACE with

partial variance weights) do not converge within 1500 iterations. When they do converge,


the mean number of iterations is 22.3 for SPACE1 and 14.1 for SPACE2 (note that since

the original implementation of SPACE by Peng et al. (2009) was programmed to stop after 3

iterations, we modified the implementation to allow for more iterations in order to check for

convergence of parameter estimates). It is clear from Table 2 that both variations of SPACE,

using unit weights as well as ωii weights, exhibit extensive non-convergence behavior. Our

simulations suggest that the convergence problem is exacerbated as the condition number of

Ω increases.

        SPACE1 (w_i = 1)             SPACE2 (w_i = ω_ii)
  λ*      NZ       NC           λ*      NZ       NC
0.026   60.9%      92         0.085   79.8%     100
0.099   19.7%     100         0.160   28.3%       0
0.163    7.6%     100         0.220   10.7%       0
0.228    2.9%     100         0.280    4.8%       0
0.614    0.4%       0         0.730    0.5%      97

Table 2: Number of simulations (out of 100) that do not converge within 1500 iterations
(NC) for select values of the penalty parameter (λ* = λ/n). The average percentage of
non-zeros (NZ) in the estimated Ω is also shown.

2.3 Symmetric lasso

The Symmetric lasso algorithm was proposed as a useful alternative to SPACE in a recent

work by Friedman et al. (2010). Symmetric lasso minimizes the following (negative) pseudo-

likelihood:

$$
Q_{\text{sym}}(\alpha, \bar\Omega) = \frac{1}{2}\sum_{i=1}^{p}\Big[\,n\log\alpha_{ii} + \frac{1}{\alpha_{ii}}\big\|Y_i + \sum_{j\ne i}\omega_{ij}\alpha_{ii}Y_j\big\|^2\Big] + \lambda\sum_{1\le i<j\le p}|\omega_{ij}|, \tag{7}
$$

where α_ii = 1/ω_ii. Here α denotes the vector with entries α_ii for i = 1, ..., p, and Ω̄ denotes

the matrix Ω with diagonal entries set to zero. A comparison of (1) and (7) shows a deep

connection between the SPACE (with w_i = ω_ii) and Symmetric lasso objective functions. In

particular, the Q_sym(α, Ω̄) objective function in (7) is a reparameterization of (1): the only

difference is that the ℓ1 penalty on the elements of ρ is replaced by a penalty on the elements

of Ω̄ in (7). The minimization of the objective function in (7) is performed by coordinate-wise

descent on (α, Ω̄). Symmetric lasso is indeed a useful and computationally efficient procedure.

However, theoretical properties such as convergence or asymptotic consistency have not yet

been established. The following lemma investigates the properties of the objective function

used in Symmetric lasso.


Lemma 2. The Symmetric lasso objective function in (7) is a non-convex function of (α, Ω̄).

The proof of Lemma 2 is given in Supplemental Section A. The arguments in the proof

of Lemma 2 demonstrate that the objective function used in Symmetric lasso is not convex,

or even bi-convex in the parameterization used above. However, it can be shown that the

SYMLASSO objective function is jointly convex in the elements of Ω (see Lee and Hastie

(2014) and Supplemental section L). It is straightforward to check that the coordinatewise

descent algorithms for both parameterizations are exactly the same. However, unless a

function is strictly convex, there are no general theoretical guarantees of convergence for

the corresponding coordinatewise descent algorithm. Indeed, when n < p, the SYMLASSO

objective function is not strictly convex. Therefore, it is not clear if the coordinate descent

algorithm converges in general. We conclude this section by remarking that both SPACE and

symmetric lasso are useful additions to the graphical model selection literature, especially

because they both respect symmetry and give computationally fast procedures.

2.4 The SPLICE algorithm

The SPLICE algorithm (Sparse Pseudo-Likelihood Inverse Covariance Estimates) was pro-

posed by Rocha et al. (2008) as an alternative means to estimate Ω. In particular, the

SPLICE formulation uses an ℓ1-penalized regression based pseudo-likelihood objective func-

tion parameterized by matrices D and B, where Ω = D^{-2}(I − B). The diagonal matrix D

has elements d_jj = 1/√ω_jj, j = 1, ..., p. The (asymmetric) matrix B has as columns the

vectors of regression coefficients, βj ∈ R^p. These coefficients, βj, arise when regressing Yj

on the remaining variables. A constraint on each βj is imposed so that regression of Yj

is performed without including itself as a predictor variable: i.e., βjj = 0. Based on the

above properties, the ℓ1-penalized pseudo-likelihood objective function of the SPLICE algorithm

(without the constant term) is given by

$$
Q_{\text{spl}}(B, D) = \frac{n}{2}\sum_{i=1}^{p}\log(d_{ii}^2) + \frac{1}{2}\sum_{i=1}^{p}\frac{1}{d_{ii}^2}\Big\|Y_i - \sum_{j\ne i}\beta_{ij}Y_j\Big\|^2 + \lambda\sum_{i<j}|\beta_{ij}|. \tag{8}
$$

In order to optimize (8) with respect to B and D, Rocha et al. (2008) also propose an

iterative algorithm that alternates between optimizing over B with D fixed, followed by optimizing

over D with B fixed. As with other regression-based graphical model selection algorithms, a proof of

convergence of SPLICE is not available. The following lemma gives the convexity properties

of the SPLICE objective function.

Lemma 3. i) The SPLICE objective function Qspl(B,D) is not jointly convex in (B,D).


ii) Under the transformation C = D^{-1}, Q_spl(B, C) is bi-convex.

The proof of Lemma 3 is given in Supplemental Section B. The convergence properties

of the SPLICE algorithm are not immediately clear since its objective function is non-convex.

Furthermore, it is not clear whether the SPLICE solution yields a global optimum.

3 CONCORD: A convex pseudo-likelihood framework

for sparse partial covariance estimation

The two pseudo-likelihood based approaches, SPACE and Symmetric lasso, have several at-

tractive properties such as computational efficiency, simplicity and use of symmetry. They

also do not directly depend on the more restrictive Gaussian assumption. Additionally, Peng

et al. (2009) establish (under suitable regularity assumptions) consistency of SPACE

estimators for distributions with sub-Gaussian tails. However, none of the existing pseudo-

likelihood based approaches yield a method that is provably convergent. In Section 2.2, we

showed that there are instances where SPACE does not converge. As explained earlier, con-

vergence is critical as this property guarantees well defined estimators which always exist,

and are computable regardless of the data at hand. An important research objective there-

fore is the development of a pseudo-likelihood framework which preserves all the attractive

properties of SPACE and SYMLASSO, and at the same time, leads to theoretical guarantees

of convergence. It is, however, not immediately clear how to achieve this goal. A natural

approach to take is to develop a convex formulation of the problem. Such an approach can

yield many advantages, including: 1) guarantee of existence of a global minimum; 2) better

chance of convergence using convex optimization algorithms; 3) deeper theoretical analysis

of the properties of the solution and corresponding algorithm. As we have shown, the SPACE

objective function is not jointly convex in the elements of Ω (or any natural reparameteri-

zation). Hence, one is not in a position to leverage tools from convex optimization theory

for understanding its behavior. The SYMLASSO objective function is jointly convex in the

elements of Ω. However, unless a function is strictly convex, there are no general guaran-

tees of convergence for the corresponding coordinatewise descent algorithm. Indeed, when

n < p, the SYMLASSO objective function is not strictly convex, and it is not clear if the

corresponding coordinatewise descent algorithm converges.

In this section, we introduce a new approach for estimating Ω, called the CONvex COR-

relation selection methoD (CONCORD) that aims to achieve the above objective. The CON-

CORD algorithm constructs sparse estimators of Ω by minimizing an objective function that

is jointly convex in the entries of Ω. We start by introducing the objective function for the


CONCORD method and then proceed to derive the details of the corresponding coordinate-

wise descent updates. Convergence is not obvious, as the function may not be strictly convex

if n < p. It is proved in Section 4 that the corresponding coordinate-wise descent algorithm

does indeed converge to a global minimum. Computational complexity and running time

comparisons for CONCORD are given in Sections 3.3 and 5.1, respectively. Subsequently,

large sample properties of the resulting estimator are established in Section 6 in order to pro-

vide asymptotic guarantees in the regime when both the dimension p and the sample size n

tend to infinity. Thereafter, the performance of CONCORD on simulated data, and real data

from biomedical and financial applications is demonstrated. Such analysis serves to establish

that CONCORD preserves all the attractive properties of existing pseudo-likelihood methods

and additionally provides the crucial theoretical guarantee of convergence and existence of a

well-defined solution.

3.1 The CONCORD objective function

In order to develop a convex formulation of the pseudo-likelihood graphical model selection

problem let us first revisit the formulation of the SPACE objective function in (1) with

arbitrary weights wi instead of ωii.

$$
Q_{\text{spc}}(\Omega) = \frac{1}{2}\sum_{i=1}^{p}\left(-n\log\omega_{ii} + w_i\Big\|Y_i - \sum_{j\ne i}\rho^{ij}\sqrt{\frac{\omega_{jj}}{\omega_{ii}}}\,Y_j\Big\|_2^2\right) + \lambda\sum_{1\le i<j\le p}\big|\rho^{ij}\big| \tag{9}
$$

Now note that the above objective is not jointly convex in the elements of Ω since: 1) the

middle (regression) term, with the choices w_i = 1 or w_i = ω_ii, is not a jointly

convex function of the elements of Ω; 2) the penalty term is on the partial correlations

ρ^{ij} = −ω_ij/√(ω_ii ω_jj)

and is hence not a jointly convex function of the elements of Ω.

Now note the following for the regression term:

$$
\begin{aligned}
w_i\Big\|Y_i - \sum_{j\ne i}\rho^{ij}\sqrt{\frac{\omega_{jj}}{\omega_{ii}}}\,Y_j\Big\|_2^2
&= w_i\Big\|Y_i + \sum_{j\ne i}\frac{\omega_{ij}}{\omega_{ii}}\,Y_j\Big\|_2^2
\qquad\Big(\because\; \rho^{ij} = \frac{-\omega_{ij}}{\sqrt{\omega_{ii}\omega_{jj}}}\Big)\\
&= w_i\Big\|\frac{1}{\omega_{ii}}\Big(\omega_{ii}Y_i + \sum_{j\ne i}\omega_{ij}Y_j\Big)\Big\|_2^2
= \frac{w_i}{\omega_{ii}^2}\Big\|\sum_{j=1}^{p}\omega_{ij}Y_j\Big\|_2^2
= \frac{w_i}{\omega_{ii}^2}\big(\omega_{\cdot i}'\,\mathbf{Y}'\mathbf{Y}\,\omega_{\cdot i}\big),
\end{aligned}
$$

where ω_{·i} denotes the ith column of Ω.


The choice of weights w_i = ω_ii^2 yields

$$
w_i\Big\|Y_i - \sum_{j\ne i}\rho^{ij}\sqrt{\frac{\omega_{jj}}{\omega_{ii}}}\,Y_j\Big\|_2^2 = \omega_{\cdot i}'\,\mathbf{Y}'\mathbf{Y}\,\omega_{\cdot i} \;\ge\; 0. \tag{10}
$$

The above expression in (10) is a quadratic form (and hence jointly convex) in the elements of Ω. Putting the ℓ1-penalty term on the partial covariances ω_ij instead of on the partial correlations ρ^{ij} yields the following jointly convex objective function:

$$
Q_{\text{con}}(\Omega) := L_{\text{con}}(\Omega) + \lambda\sum_{1\le i<j\le p}|\omega_{ij}|
:= -\sum_{i=1}^{p} n\log\omega_{ii} + \frac{1}{2}\sum_{i=1}^{p}\Big\|\omega_{ii}Y_i + \sum_{j\ne i}\omega_{ij}Y_j\Big\|_2^2 + \lambda\sum_{1\le i<j\le p}|\omega_{ij}|. \tag{11}
$$

The function Lcon(Ω) can be regarded as a pseudo-likelihood function in the spirit of

Besag (1975). Since −log x and |x| are convex functions, and Σ_{i=1}^p ‖ω_ii Y_i + Σ_{j≠i} ω_ij Y_j‖^2 is a positive semi-definite quadratic form in Ω, it follows that Q_con(Ω) is a jointly convex

function of Ω (but not necessarily strictly convex). As we shall see later, this particular

formulation above helps us establish theoretical guarantees of convergence (see Section 4),

and, consequently, yields a regression based graphical model estimator that is well defined and

is always computable. Note that the n/2 in (9) has been replaced by n in (11). The point is

elaborated further in Remark 4. We now proceed to derive the details of the coordinate-wise

descent algorithm for minimizing Qcon(Ω).

3.2 A coordinatewise minimization algorithm for minimizing Qcon(Ω)

Let A_p denote the set of p × p real symmetric matrices. Let the parameter space M be defined as

$$\mathcal{M} := \{\Omega \in \mathcal{A}_p : \omega_{ii} > 0 \text{ for every } 1 \le i \le p\}.$$

Note that as in other regression based approaches (see Peng et al. (2009)), we have delib-

erately not restricted Ω to be positive definite as the main goal is to estimate the sparsity

pattern in Ω. As mentioned in the introduction, a positive definite estimator can be obtained

by using standard methods (Hastie et al. (2009), Xu et al. (2011)) once a partial correlation

graph has been determined.

Let us now proceed to optimizing Qcon(Ω). For 1 ≤ i ≤ j ≤ p, define the function


T_ij : M → M by

$$T_{ij}(\Omega) = \operatorname*{arg\,min}_{\tilde\Omega \,:\, (\tilde\Omega)_{kl} = \omega_{kl}\ \forall\,(k,l)\ne(i,j)} Q_{\text{con}}(\tilde\Omega). \tag{12}$$

For each (i, j), Tij(Ω) gives the matrix where all the elements of Ω are left as is except

the (i, j)th element. The (i, j)th element is replaced by the value that minimizes Qcon(Ω)

with respect to ω_ij holding all other variables ω_kl, (k, l) ≠ (i, j), constant. We now proceed

to evaluate Tij(Ω) explicitly.

Lemma 4. The function T_ij(Ω) defined in (12) can be computed in closed form. In particular, for 1 ≤ i ≤ p,

$$(T_{ii}(\Omega))_{ii} = \frac{-\sum_{j\ne i}\omega_{ij}s_{ij} + \sqrt{\left(\sum_{j\ne i}\omega_{ij}s_{ij}\right)^2 + 4s_{ii}}}{2s_{ii}}. \tag{13}$$

For 1 ≤ i < j ≤ p,

$$(T_{ij}(\Omega))_{ij} = \frac{S_{\frac{\lambda}{n}}\left(-\left(\sum_{j'\ne j}\omega_{ij'}s_{jj'} + \sum_{i'\ne i}\omega_{i'j}s_{ii'}\right)\right)}{s_{ii} + s_{jj}}, \tag{14}$$

where s_ij is the (i, j)th entry of (1/n)Y′Y, and S_λ(x) := sign(x)(|x| − λ)_+.

The proof is given in Supplemental Section C. An important contribution of Lemma

4 is that it gives the necessary ingredients for designing a coordinate descent approach to

minimizing the CONCORD objective function. More specifically, (13) can be used to update

the partial variance terms, and (14) can be used to update the partial covariance terms.

The coordinate-wise descent algorithm for CONCORD is summarized in Algorithm 2. The

zeros in the estimated partial covariance matrix can then subsequently be used to construct

a partial covariance or partial correlation graph.
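To make the updates concrete, the following minimal Python/NumPy sketch implements the closed-form updates (13) and (14); the function names are ours and this is an illustration, not the paper's reference implementation.

```python
import numpy as np

def soft_threshold(x, t):
    """S_t(x) = sign(x) * (|x| - t)_+, the soft-thresholding operator."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def update_partial_variance(Omega, S, i):
    """Closed-form update (13) for omega_ii; S = (1/n) Y'Y."""
    a = Omega[i] @ S[i] - Omega[i, i] * S[i, i]   # sum_{j != i} omega_ij s_ij
    return (-a + np.sqrt(a ** 2 + 4.0 * S[i, i])) / (2.0 * S[i, i])

def update_partial_covariance(Omega, S, i, j, lam, n):
    """Closed-form update (14) for omega_ij, i != j; lam is the penalty in (11)."""
    # Both sums exclude the omega_ij term being updated (Omega is symmetric).
    a = (Omega[i] @ S[j] - Omega[i, j] * S[j, j]     # sum_{j' != j} omega_ij' s_jj'
         + Omega[j] @ S[i] - Omega[i, j] * S[i, i])  # sum_{i' != i} omega_i'j s_ii'
    return soft_threshold(-a, lam / n) / (S[i, i] + S[j, j])
```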

The following procedure can be used to select the penalty parameter λ. Define the residual

sum of squares (RSS) for i = 1, . . . , p as

$$\mathrm{RSS}_i(\lambda) = \sum_{k=1}^{n}\Big(y_i^k + \sum_{j\ne i}\frac{\hat\omega_{ij,\lambda}}{\hat\omega_{ii,\lambda}}\,y_j^k\Big)^2.$$

Further, the ith component of a BIC-type score can be defined as

$$\mathrm{BIC}_i(\lambda) = n\log(\mathrm{RSS}_i(\lambda)) + \log n \cdot \big|\{j : j\ne i,\ \hat\omega_{ij,\lambda}\ne 0\}\big|.$$


Algorithm 2 (CONCORD pseudocode)

Input: standardize data to have mean zero and standard deviation one
Input: fix maximum number of iterations: r_max
Input: fix initial estimate: Ω^(0)
Input: fix convergence threshold: ε
Set r ← 1
Set converged = FALSE
Set Ω^current ← Ω^(0)
repeat
    Ω^old ← Ω^current
    ## Updates to partial covariances ω_ij
    for i ← 1, 2, ..., p − 1 do
        for j ← i + 1, ..., p do
            ω_ij^current ← (T_ij(Ω^current))_ij    (15)
        end for
    end for
    ## Updates to partial variances ω_ii
    for i ← 1, 2, ..., p do
        ω_ii^current ← (T_ii(Ω^current))_ii    (16)
    end for
    Ω^(r) ← Ω^current
    ## Convergence checking
    if ‖Ω^current − Ω^old‖_max < ε then
        converged = TRUE
    else
        r ← r + 1
    end if
until converged = TRUE or r > r_max
Return final estimate: Ω^(r)
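Putting the pieces together, a compact (unoptimized, O(p^3) per sweep) Python/NumPy sketch of Algorithm 2 is given below; it inlines the closed-form updates (13) and (14). The function concord is our own illustrative name and is distinct from the gconcord R package mentioned in Section 5.

```python
import numpy as np

def concord(Y, lam, max_iter=100, tol=1e-5):
    """Cyclic coordinatewise minimization of Q_con (sketch of Algorithm 2).

    Y is an n x p data matrix, assumed standardized; lam is the penalty in (11).
    """
    n, p = Y.shape
    S = (Y.T @ Y) / n                      # sample covariance matrix
    Omega = np.eye(p)                      # initial estimate Omega^(0)
    for _ in range(max_iter):
        Omega_old = Omega.copy()
        # Updates to partial covariances omega_ij, eq. (14)
        for i in range(p - 1):
            for j in range(i + 1, p):
                a = (Omega[i] @ S[j] - Omega[i, j] * S[j, j]
                     + Omega[j] @ S[i] - Omega[i, j] * S[i, i])
                x = -a
                Omega[i, j] = Omega[j, i] = (
                    np.sign(x) * max(abs(x) - lam / n, 0.0) / (S[i, i] + S[j, j]))
        # Updates to partial variances omega_ii, eq. (13)
        for i in range(p):
            a = Omega[i] @ S[i] - Omega[i, i] * S[i, i]
            Omega[i, i] = (-a + np.sqrt(a ** 2 + 4.0 * S[i, i])) / (2.0 * S[i, i])
        # Convergence check in the max (entrywise) norm
        if np.max(np.abs(Omega - Omega_old)) < tol:
            break
    return Omega
```

For standardized data Y, concord(Y, lam) returns a symmetric estimate whose off-diagonal zero pattern defines the estimated partial correlation graph.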


The penalty parameter λ can be chosen to minimize the sum BIC(λ) = Σ_{i=1}^p BIC_i(λ).
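A sketch of this BIC-based selection rule, assuming a fitting routine such as the concord() sketch above (all helper names are ours):

```python
import numpy as np

def concord_bic(Y, Omega):
    """BIC-type score for a CONCORD estimate Omega fitted on data Y (n x p)."""
    n, p = Y.shape
    total = 0.0
    for i in range(p):
        coef = Omega[i] / Omega[i, i]
        coef[i] = 0.0                               # exclude variable i itself
        rss_i = np.sum((Y[:, i] + Y @ coef) ** 2)   # residuals y_i + sum_j (w_ij/w_ii) y_j
        k_i = np.count_nonzero(coef)                # number of selected neighbours of i
        total += n * np.log(rss_i) + np.log(n) * k_i
    return total

# Choose lambda on a grid by minimizing the summed score (concord() as sketched above):
# lambdas = np.linspace(0.1, 2.0, 20)
# best = min(lambdas, key=lambda lam: concord_bic(Y, concord(Y, lam)))
```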

3.3 Computational complexity

We now proceed to show that the computational cost of each iteration of CONCORD is

min{O(np^2), O(p^3)}; that is, the CONCORD algorithm is competitive with other proposed

methods. The updates in equations (15) and (16) are implemented differently depending

on whether n ≥ p or n < p.

Case 1 (n ≥ p): Let us first consider the case when n ≥ p. Note that both sums in (14) are

inner products between a row of Ω and a row of S. Clearly, computing these sums requires

O(p) operations each. Similarly, the update in (13) requires O(p) operations. Since there are

O(p^2) entries in Ω, one complete sweep of updates over all entries in Ω would require O(p^3)

operations.

Case 2 (n < p): Let us now consider the case when n < p. We show below that the updates

can be performed in O(np^2) operations. The main idea here is that the coordinate-wise

calculations at each iteration, which involve an inner product of two p × 1 vectors, can

be reduced to an inner product calculation involving auxiliary variables (residual variables

to be more specific) of dimension n × 1. The following lemmas are essential ingredients in

calculating the computational complexity in this setting. In particular, Lemma 5 expresses

the inner product calculations in (13) and (14) in terms of residual vectors.

Lemma 5. For 1 ≤ i, j ≤ p,

$$\sum_{k\ne j}\omega_{ik}s_{jk} = -\omega_{ij}s_{jj} + \frac{\omega_{ii}}{n}\,\mathbf{Y}_j'\,r_i,$$

where Y_j is the jth column of the data matrix Y, and r_i = Y_i + Σ_{k≠i} (ω_ik/ω_ii) Y_k is an n-vector of residuals from regressing Y_i on the rest.

The following lemma now quantifies the computational cost of updating the residual

vectors during each iteration of the CONCORD algorithm.

Lemma 6. Define the residual vector r_m for m = 1, 2, ..., p as follows:

$$r_m = r_m(\Omega) = Y_m + \sum_{k\ne m}\frac{\omega_{mk}}{\omega_{mm}}\,Y_k, \tag{17}$$

where Ω = ((ω_ij))_{1≤i,j≤p}. Then,

1. For m ≠ k, l, the residual vector r_m is functionally independent of ω_kl. (The term ω_kl

appears only in the expressions for the residual vectors rk and rl.)


2. Fix all the elements of Ω except ω_kl. Suppose ω_kl is changed to ω*_kl. Then, updating the residual vectors r_k and r_l requires O(n) operations. (Hence, updating r_k and r_l after each update in (15) requires O(n) operations.)

3. For m ≠ k, the residual vector r_m is functionally independent of ω_kk. (The term ω_kk appears only in the expression for the residual vector r_k.)

4. Fix all elements of Ω except ω_kk. Suppose ω_kk is changed to ω*_kk. Then, updating the residual vector r_k requires O(n) operations. (Hence, updating r_k after each update in (16) requires O(n) operations.)

The proofs of Lemmas 5 and 6 are straightforward and are given in Supplemental Sections

D and E. Note that the inner product between Y_j and r_i takes O(n) operations. Hence, by

Lemma 5 the updates in (15) and (16) require O(n) operations. Also, after each update

in (15) and (16) the residual vectors need to be appropriately modified. By Lemma 6, this

modification can also be achieved in O(n) operations. As a result, one complete sweep of

updates over all entries in Ω can be performed in O(np^2) operations.

Hence, we conclude that the computational complexity of the CONCORD algorithm is

competitive with the SPACE and Symmetric lasso algorithms, which are also min{O(np^2), O(p^3)}.
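The bookkeeping behind Lemmas 5 and 6 amounts to caching the p residual vectors and touching only the affected ones after each coordinate update. A minimal sketch (data layout and names are ours):

```python
import numpy as np

def init_residuals(Y, Omega):
    """r_m = Y_m + sum_{k != m} (omega_mk / omega_mm) Y_k for all m; one O(np^2) pass."""
    # Omega / np.diag(Omega) divides column m by omega_mm, so column m of the
    # result is Y @ (omega_.m / omega_mm) = r_m (using symmetry of Omega).
    return Y @ (Omega / np.diag(Omega))

def update_residuals_offdiag(R, Y, Omega, k, l, delta):
    """After omega_kl -> omega_kl + delta, only r_k and r_l change (Lemma 6, O(n))."""
    R[:, k] += (delta / Omega[k, k]) * Y[:, l]
    R[:, l] += (delta / Omega[l, l]) * Y[:, k]

def update_residuals_diag(R, Y, Omega, k, old_kk, new_kk):
    """After omega_kk: old_kk -> new_kk, only r_k changes (Lemma 6, O(n))."""
    R[:, k] = Y[:, k] + (old_kk / new_kk) * (R[:, k] - Y[:, k])
```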

3.4 A unifying framework for pseudo-likelihood based graphical

model selection

In this section, we provide a unifying framework which formally connects the five pseudo-

likelihood formulations considered in this paper, namely, SPACE1, SPACE2, SYMLASSO,

SPLICE and CONCORD (counting two choices for weights in the SPACE algorithm as

two different formulations). Recall that the random vectors Y^k = (y^k_1, y^k_2, ..., y^k_p)′, k = 1, 2, ..., n, denote i.i.d. observations from a multivariate distribution with mean vector 0 and covariance matrix Σ, the precision matrix is given by Ω = Σ^{-1} = ((ω_ij))_{1≤i,j≤p}, and S

denotes the sample covariance matrix. Let ΩD denote the diagonal matrix with ith diagonal

entry given by ωii. Lemma 7 below formally identifies the relationship between all five of the

regression-based pseudo-likelihood methods.

Lemma 7. i) The (negative) pseudo-likelihood functions of the CONCORD, SPACE1, SPACE2, SYMLASSO and SPLICE formulations can be expressed in matrix form as follows (up to reparameterization):

$$L_{\text{con}}(\Omega) = \frac{1}{2}\sum_{i=1}^{p}\Big[-n\log\omega_{ii}^2 + \big\|\omega_{ii}Y_i + \textstyle\sum_{j\ne i}\omega_{ij}Y_j\big\|_2^2\Big] = \frac{n}{2}\Big[-\log|\Omega_D^2| + \mathrm{tr}(S\Omega^2)\Big] \tag{18}$$

$$L_{\text{spc},1}(\Omega_D, \rho) = \frac{1}{2}\sum_{i=1}^{p}\Big[-n\log\omega_{ii} + \big\|Y_i - \textstyle\sum_{j\ne i}\rho^{ij}\sqrt{\tfrac{\omega_{jj}}{\omega_{ii}}}\,Y_j\big\|_2^2\Big] = \frac{n}{2}\Big[-\log|\Omega_D| + \mathrm{tr}(S\Omega\Omega_D^{-2}\Omega)\Big] \tag{19}$$

$$L_{\text{spc},2}(\Omega_D, \rho) = \frac{1}{2}\sum_{i=1}^{p}\Big[-n\log\omega_{ii} + \omega_{ii}\big\|Y_i - \textstyle\sum_{j\ne i}\rho^{ij}\sqrt{\tfrac{\omega_{jj}}{\omega_{ii}}}\,Y_j\big\|_2^2\Big] = \frac{n}{2}\Big[-\log|\Omega_D| + \mathrm{tr}(S\Omega\Omega_D^{-1}\Omega)\Big] \tag{20}$$

$$L_{\text{sym}}(\alpha, \bar\Omega) = \frac{1}{2}\sum_{i=1}^{p}\Big[\,n\log\alpha_{ii} + \tfrac{1}{\alpha_{ii}}\big\|Y_i + \textstyle\sum_{j\ne i}\omega_{ij}\alpha_{ii}Y_j\big\|^2\Big] = \frac{n}{2}\Big[-\log|\Omega_D| + \mathrm{tr}(S\Omega\Omega_D^{-1}\Omega)\Big] \tag{21}$$

$$L_{\text{spl}}(B, D) = \frac{1}{2}\sum_{i=1}^{p}\Big[\,n\log(d_{ii}^2) + \tfrac{1}{d_{ii}^2}\big\|Y_i - \textstyle\sum_{j\ne i}\beta_{ij}Y_j\big\|_2^2\Big] = \frac{n}{2}\Big[-\log|\Omega_D| + \mathrm{tr}(S\Omega\Omega_D^{-1}\Omega)\Big] \tag{22}$$

ii) All five pseudo-likelihoods above correspond to a unified or generalized form of the Gaussian log-likelihood function

$$L_{\text{uni}}(G(\Omega), H(\Omega)) = \frac{n}{2}\big[-\log\det G(\Omega) + \mathrm{tr}(S\,H(\Omega))\big],$$

where G(Ω) and H(Ω) are functions of Ω. The functions G and H which characterize the pseudo-likelihood formulations corresponding to CONCORD, SPACE1, SPACE2, SYMLASSO and SPLICE are given as follows:

$$G_{\text{con}}(\Omega) = \Omega_D^2, \qquad H_{\text{con}}(\Omega) = \Omega^2$$

$$G_{\text{spc},1}(\Omega) = \Omega_D, \qquad H_{\text{spc},1}(\Omega) = \Omega\Omega_D^{-2}\Omega$$

$$G_{\text{spc},2}(\Omega) = G_{\text{sym}}(\Omega) = G_{\text{spl}}(\Omega) = \Omega_D, \qquad H_{\text{spc},2}(\Omega) = H_{\text{sym}}(\Omega) = H_{\text{spl}}(\Omega) = \Omega\Omega_D^{-1}\Omega$$

The proof of Lemma 7 is given in Supplemental Section F. The above lemma gives various

useful insights into the different pseudo-likelihoods that have been proposed for the inverse

covariance estimation problem. The following remarks discuss these insights.

Remark 1. Note that when G(Ω) = H(Ω) = Ω, L_uni(G(Ω), H(Ω)) corresponds to the standard

(negative) Gaussian log-likelihood function.

Remark 2. Note that Ω_D^{-1}Ω is a re-scaling of Ω so as to make all the diagonal elements one (hence the sparsity patterns of Ω and Ω_D^{-1}Ω are the same). In this sense, the SPACE2, SYMLASSO and SPLICE algorithms make the same approximation to the Gaussian likelihood, with the log determinant term log|Ω| replaced by log|Ω_D|. The trace term tr(SΩ) is approximated by tr(SΩΩ_D^{-1}Ω). Moreover, if Ω is sparse, then Ω_D^{-1}Ω is close to the identity matrix, i.e., Ω_D^{-1}Ω ≈ I + C for some C. In this case, the term tr(SΩ) in the Gaussian likelihood is perturbed by an off-diagonal matrix C, resulting in an expression of the form tr(SΩ(I + C)).

Remark 3. Conceptually, the sole source of difference between the three regularized versions

of the objective functions of SPACE2, SYMLASSO and SPLICE algorithms is in the way

in which the ℓ1-penalties are specified. SPACE2 applies the penalty to the partial corre-

lations, SYMLASSO to the partial covariances and SPLICE to the symmetrized regression

coefficients.


Remark 4. Note that the CONCORD method approximates the normal likelihood by approximating the log|Ω| term by log|Ω_D^2|, and tr(SΩ) by tr(SΩ^2). Hence, the CONCORD algorithm can be considered as a reparameterization of the Gaussian likelihood with the concentration matrix Ω^2 (together with an approximation to the log determinant term). More specifically,

$$L_{\text{con}}(\Omega) = L_{\text{uni}}(\Omega_D^2, \Omega^2) = \frac{n}{2}\left(-\log\det\Omega_D^2 + \mathrm{tr}(S\Omega^2)\right) = n\left(-\log\det\Omega_D + \frac{1}{2}\,\mathrm{tr}(S\Omega^2)\right),$$

and justifies the appearance of “n” as compared to “n/2” in the CONCORD objective in

(11). In Supplemental Section G, we illustrate the usefulness of this correction based on the

insight from our unification framework, and show that it leads to better estimates of Ω.
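The identity in Remark 4, i.e., the equality of the regression and matrix forms of L_con(Ω), can be checked numerically; a small self-contained sketch with random data (all names are ours):

```python
import numpy as np

rng = np.random.default_rng(1)
n, p = 50, 4
Y = rng.standard_normal((n, p))
S = (Y.T @ Y) / n

# A generic symmetric Omega with positive diagonal entries
A = rng.standard_normal((p, p))
Omega = (A + A.T) / 2.0
np.fill_diagonal(Omega, np.abs(np.diag(Omega)) + 1.0)
d = np.diag(Omega)

# Regression form of L_con, as in (11)/(18)
reg = 0.5 * sum(-n * np.log(Omega[i, i] ** 2)
                + np.sum((Y @ Omega[i]) ** 2) for i in range(p))

# Matrix form (n/2)[-log|Omega_D^2| + tr(S Omega^2)], as in Remark 4
mat = 0.5 * n * (-np.sum(np.log(d ** 2)) + np.trace(S @ Omega @ Omega))

assert np.isclose(reg, mat)   # the two forms agree
```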

4 Convergence of CONCORD

We now proceed to consider the convergence properties of the CONCORD algorithm. Note

that Qcon(Ω) is not differentiable. Also, if n < p, then Qcon(Ω) is not necessarily strictly

convex. Hence, the global minimum may not be unique, and as discussed below, the con-

vergence of the coordinatewise minimization algorithm to a global minimum does not follow

from existing theory. Note that although Q_con(Ω) is not differentiable, it can be expressed as

a sum of a smooth function of Ω and a separable function of Ω (namely λ Σ_{1≤i<j≤p} |ω_ij|).

Tseng (1988, 2001) proves that under certain conditions, every cluster point of the sequence

of iterates of the coordinatewise minimization algorithm for such an objective function is a

stationary point of the objective function. However, if the function is not strictly convex,

there is no general guarantee that the sequence of iterates has a unique cluster point, i.e.,

there is no theoretical guarantee that the sequence of iterates converges. The following theo-

rem shows that the cyclic coordinatewise minimization algorithm applied to the CONCORD

objective function converges to a global minimum. A proof of this result can be found in

Supplemental Section H.

Theorem 1. If S_ii > 0 for every 1 ≤ i ≤ p, the sequence of iterates {Ω^(r)}_{r≥0} obtained by Algorithm 2 converges to a global minimum of Q_con(Ω). More specifically, Ω^(r) → Ω̂ ∈ M as r → ∞ for some Ω̂, and furthermore Q_con(Ω̂) ≤ Q_con(Ω) for all Ω ∈ M.

Remark 5. If n ≥ 2, and none of the underlying p marginal distributions (corresponding

to the p-variate distribution for the data vectors) is degenerate, it follows that the diagonal

entries of the data covariance matrix S are strictly positive with probability 1.


[Figure 1: Illustrations of the non-convergence of SPACE and convergence of CONCORD. Panel (a): the SPACE algorithm (partial variance weights) applied to the dataset in Example 1; panel (b): the CONCORD algorithm applied to the dataset in Example 1. Each panel plots the absolute difference between successive updates of the partial variance and partial correlation estimates against the iteration number, with a log-scaled y-axis and a horizontal line marking the convergence threshold. For SPACE, the log absolute difference between entries of successive estimates becomes constant (thus indicating non-convergence).]

With theory in hand, we now proceed to numerically illustrate the convergence properties

established above. When CONCORD is applied to the dataset in Example 1, convergence is

achieved (see Figure 1(b)), whereas SPACE does not converge (see Figure 1(a)).
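As an illustration, convergence of CONCORD on data generated as in Example 1 can be reproduced with the concord() sketch from Section 3 (an illustrative usage sketch, not the exact experiment behind Figure 1; the random seed and the reuse of the SPACE penalty λ = 160 are our assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

Omega_true = np.array([[3.0, 2.1, 0.0],
                       [2.1, 3.0, 2.1],
                       [0.0, 2.1, 3.0]])
Sigma = np.linalg.inv(Omega_true)

# n = 100 i.i.d. observations from N(0, Sigma), standardized as in Example 1
Y = rng.multivariate_normal(np.zeros(3), Sigma, size=100)
Y = (Y - Y.mean(axis=0)) / Y.std(axis=0)

# concord() as sketched after Algorithm 2; converges in a handful of sweeps
Omega_hat = concord(Y, lam=160.0)
print(np.round(Omega_hat, 3))
```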

5 Applications

5.1 Simulated Data

5.1.1 Timing Comparison

We now proceed to compare the timing performance of CONCORD with Glasso and the two

different versions of SPACE. The acronyms SPACE1 and SPACE2 denote SPACE estimates

using uniform weights and partial variance weights, respectively. We first consider the setting

p = 1000, n = 200. For the purposes of this simulation study, a p × p positive definite

matrix Ω (with p = 1000) with condition number 10 was used. Thereafter, 50 independent

datasets were generated, each consisting of n = 200 i.i.d. samples from a Np(0,Σ = Ω−1)

distribution. For each dataset, the four algorithms were run until convergence for a range

of penalty parameter values. We note that the default number of iterations for SPACE in

the R function by Peng et al. (2009) is 3. However, given the convergence issues for SPACE,

we ran SPACE until convergence or until 50 iterations (whichever comes first). The timing

results (averaged over the 50 datasets) in the top part of Table 3 below show wall clock

times until convergence (in seconds) for Glasso, CONCORD, SPACE1 and SPACE2.

One can see that in the p = 1000, n = 200 setting, CONCORD is uniformly faster than its


competitors. Note the low penalty parameter cases correspond to high dimensional settings

where the estimated covariance matrix is typically poorly conditioned and the log-likelihood

surface is very flat. The results in Table 3 indicate that in such settings CONCORD is faster

than its competitors by orders of magnitude (even though Glasso is implemented in Fortran).

Both SPACE1 and SPACE2 are much slower than CONCORD and Glasso in this setting.

The wall clock time for an iterative algorithm can be thought of as a function of the number

of iterations until convergence, the order of computations for a single iteration, and also the

implementation details (such as choice of software, efficiency of the code etc.). Note that

the order of computations for a single iteration is same for SPACE and CONCORD, and

lower than that of Glasso when n < p. It is likely that the significant increase in the wall

clock time for SPACE is due to implementation details and the larger number of iterations

required for convergence (or non-convergence, since we are stopping SPACE if the algorithm

does not satisfy the convergence criterion by 50 iterations).

We further compare the timing performance of CONCORD and Glasso for p = 3000 with

n = 600 and n = 900 (SPACE is not considered here because of the timing issues mentioned

above. These issues are amplified in this more demanding setting). A p× p positive definite

matrix Ω (with p = 3000) with 3% sparsity is used. Thereafter, 50 independent datasets were

generated, each consisting of n = 600 i.i.d. samples from a Np(0,Σ = Ω−1) distribution. The

same exercise was repeated with n = 900. The timing results (averaged over the 50 datasets)

in the bottom part of Table 3 below show wall clock times until convergence (in seconds) for

Glasso, CONCORD, SPACE1 and SPACE2 for various penalty parameter values. It can be

seen that in both the n = 600 and n = 900 cases, CONCORD was around ten times faster

than Glasso.

In conclusion, the simulation results in this subsection illustrate that CONCORD is

much faster than SPACE and Glasso, especially in very high dimensional settings.

We also note that a downloadable version of the CONCORD algorithm has been developed

in R, and is freely available at http://cran.r-project.org/web/packages/gconcord.

5.1.2 Model selection comparison

In this section, we perform a simulation study in which we compare the model selection per-

formance of CONCORD and Glasso when the underlying data is drawn from a multivariate-t

distribution (the reasons for not considering SPACE are provided in a remark at the end of

this section). The data is drawn from a multivariate-t distribution to illustrate the potential

benefit of using penalized regression methods (CONCORD) outside the Gaussian setting.

For the purposes of this study, using a similar approach as in Peng et al. (2009), a p × p sparse positive definite matrix Ω (with p = 1000) with condition number 13.6 is chosen. Using


p = 1000, n = 200

          Glasso                 CONCORD                SPACE1 (w_i = 1)        SPACE2 (w_i = ω_ii)
   λ      NZ       Time      λ*     NZ      Time     λ*     NZ       Time      λ*      NZ        Time
 0.14    4.77%    87.60     0.12   4.23%    6.12    0.10   4.49%   101.78     0.16   100.00%   19206.55
 0.19    0.87%    71.47     0.17   0.98%    5.10    0.17   0.64%    99.20     0.21     1.76%     222.00
 0.28    0.17%     5.41     0.28   0.15%    5.37    0.28   0.14%   138.01     0.30     0.17%      94.59
 0.39    0.08%     5.30     0.39   0.07%    4.00    0.39   0.07%    75.55     0.40     0.08%     108.61
 0.51    0.04%     6.38     0.51   0.04%    4.76    0.51   0.04%    49.59     0.51     0.04%     132.34

p = 3000, n = 600                                    p = 3000, n = 900

      Glasso                CONCORD                      Glasso                CONCORD
   λ     NZ      Time     λ*     NZ     Time          λ     NZ      Time     λ*     NZ     Time
 0.09   2.71%  1842.74   0.09   2.10%  266.69       0.09   0.70%  1389.96   0.09   0.64%  298.21
 0.10   1.97%  1835.32   0.10   1.59%  235.49       0.10   0.44%  1395.42   0.10   0.41%  298.00
 0.10   1.43%  1419.41   0.10   1.19%  232.67       0.10   0.27%  1334.78   0.10   0.26%  302.15

Table 3: Timing comparison (in seconds) for p = 1000, 3000 and varying n. SPACE is run until convergence or 50 iterations (whichever comes first). Note that SPACE1 and SPACE2 are much slower than CONCORD and Glasso in wall time for the p = 1000 simulation. Hence, for p = 3000, only Glasso and CONCORD are compared. Here, λ denotes the value of the penalty parameter for the respective algorithms, with λ* = λ/n for CONCORD and SPACE. NZ denotes the percentage of non-zero entries in the corresponding estimator.

this Ω, for each sample size n = 200, n = 400 and n = 800, 50 datasets, each consisting of i.i.d.

draws from a multivariate-t distribution with mean zero and covariance matrix Σ = Ω^{-1}, are generated.

We compare the model selection performance of Glasso and CONCORD in this heavy tailed

setting with receiver operating characteristic (ROC) curves, which compare false positive

rates (FPR) and true positive rates (TPR). Each ROC curve is traced out by varying the

penalty parameter λ over 50 possible values.
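For concreteness, the FPR and TPR of a given support estimate can be computed as follows (a small helper sketch of ours; an edge is an off-diagonal nonzero):

```python
import numpy as np

def fpr_tpr(Omega_true, Omega_hat, tol=1e-8):
    """False/true positive rates of the estimated off-diagonal support."""
    p = Omega_true.shape[0]
    upper = np.triu_indices(p, k=1)
    truth = np.abs(Omega_true[upper]) > tol
    est = np.abs(Omega_hat[upper]) > tol
    fpr = np.sum(est & ~truth) / max(np.sum(~truth), 1)
    tpr = np.sum(est & truth) / max(np.sum(truth), 1)
    return fpr, tpr

# Tracing (fpr, tpr) over the grid of 50 penalty values yields the ROC curve;
# the area under the curve restricted to fpr <= 0.15 gives the AUC used below.
```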

We use the Area-under-the-curve (AUC) as a means to compare model selection perfor-

mance. This measure is frequently used to compare ROC curves (Fawcett, 2006, Friedman

et al., 2010). The AUC of a full ROC curve resulting from perfect recovery of zero/non-zero

structure in Ω would be 1. In typical real applications, FPR is controlled to be sufficiently

low. We therefore compare model selection performance when FPR is less than 15% (or

0.15). When controlling FPR to be less than 0.15, a perfect method will yield AUC of 0.15.

Table 4 provides the median of the AUCs (divided by 0.15 to normalize to 1), as well as the

interquartile ranges (IQR) over the 50 datasets for n = 200, n = 400 and n = 800.


                 n = 200           n = 400           n = 800
Solver        Median    IQR     Median    IQR     Median    IQR
Glasso         0.745   0.032     0.819   0.030     0.885   0.029
CONCORD        0.811   0.011     0.887   0.012     0.933   0.013

Table 4: Median and IQR of area-under-the-curve (AUC) for 50 simulations. Each simulation yields a ROC curve from which the AUC is computed for FPR in the interval [0, 0.15] and normalized to 1.

Table 4 above shows that CONCORD has a much better model selection performance as

compared to Glasso. Moreover, it turns out that CONCORD has a higher AUC than Glasso

for every single one of the 150 datasets (50 each for n = 200, 400 and 800). We note that

CONCORD not only recovers the sparsity structure more accurately in general, but also

exhibits much less variation.

Remark: Note that we need to simulate 50 datasets for each of the above three sample sizes.

For each of these datasets, an algorithm has to be run for 50 different penalty parameter

values. In totality, this amounts to running the algorithm 7500 times. As we demonstrated

in the simulations in Section 5.1.1, when SPACE is run until convergence (or terminated

after 50 iterations), its long running time makes it infeasible to run

it 7500 times. As an alternative, one could follow the approach of Peng et al. (2009) and

stop SPACE after running 3 iterations. However, given the possible non-convergence issues

associated with SPACE, it is not clear if the resulting estimate is meaningful. Even so, if

we follow this approach of stopping SPACE after three iterations, we find that CONCORD

outperforms SPACE1 and SPACE2. For example, if we consider the n = 200 case, then the

median AUC value for SPACE1 is 0.779 (with IQR = 0.054) and the median AUC value for

SPACE2 is 0.802 (with IQR = 0.013).

5.2 Application to breast cancer data

We now illustrate the performance of the CONCORD method on a real dataset. To facilitate

comparison, we consider data from a breast cancer study (Chang et al., 2005) on which

SPACE was illustrated. This dataset contains expression levels of 24481 genes on 248 patients

with breast cancer. The dataset also contains extensive clinical data including survival times.

Following the approach in Peng et al. (2009) we focus on a smaller subset of genes. This

reduction can be achieved by utilizing clinical information that is provided together with the

microarray expression dataset. In particular, survival analysis via univariate Cox regression

with patient survival times is used to select a subset of genes closely associated with breast

cancer. A choice of p-value < 0.0003 yields a reduced dataset with 1107 genes. This subset


of the data is then mean centered and scaled so that the median absolute deviation is 1 (as

outliers seem to be present). Following a similar approach to that in Peng et al. (2009),

penalty parameters for each partial correlation graph estimation method were chosen so that

each partial correlation graph yields 200 edges.

Partial correlation graphs can be used to identify genes that are biologically meaningful

and can lead to gene therapeutic targets. In particular, there is compelling evidence from the

biomedical literature that highly connected nodes are central to biological networks (Carter

et al., 2004, Jeong et al., 2001, Han et al., 2004). To this end, we focus on identifying

the 10 most highly connected genes (“hub” genes) identified by each partial correlation

graph estimation method. Table 6 in Supplemental Section I summarizes the top 10 hub

genes obtained by CONCORD, SYMLASSO, SPACE1 and SPACE2. The table also gives

references from the biomedical literature that places these genes in the context of breast

cancer. These references illustrate that most of the identified genes are indeed quite relevant

in the study of breast cancer. It can also be seen that there is a large level of overlap in the

top 10 genes identified by the four methods. There are also however some notable differences.

For example, TPX2 has been identified only by CONCORD. Bibby et al. (2009) suggest

that a cancer-associated mutant of Aurora A - a known cancer-related gene - is mislocalized

and misregulated due to loss of interaction with TPX2. Moreover, a recent extensive study

by Maxwell et al. (2011)¹ identifies a gene regulatory mechanism in which TPX2, Aurora

A, RHAMM and BRCA1 play a key role. This finding is especially significant given that

BRCA1 (breast cancer type 1 susceptibility protein) is one of the most well known genes

linked to breast cancer. We also remark that if a higher number of hub genes are targeted

(like the top 20 or top 100 vs. the top 10), CONCORD identifies additional genes not

discovered by existing methods. However, identification of even a single important gene can

lead to significant findings and novel gene therapeutic targets, since many gene silencing

experiments often focus on one or two genes at a time.

We conclude this section by remarking that CONCORD is a useful addition to the graph-

ical models literature as it is competitive with other methods in terms of model selection

accuracy, timing, relevance for applications, and also gives provable convergence guarantees.

5.3 Application to portfolio optimization

We now consider the efficacy of using CONCORD in a financial portfolio optimization set-

ting where a stable estimate of the covariance matrix is often required. We follow closely the

exposition of the problem as given in Won et al. (2012). A portfolio of financial instruments

¹ http://www.ncbi.nlm.nih.gov/pubmed/22110403


constitutes a collection of both risky and risk-free assets held by a legal entity. The return

on the overall portfolio over a given holding period is defined as the weighted average of the

returns on the individual assets, where the weight of each asset corresponds to its propor-

tion in monetary terms. The primary objective of the portfolio optimization problem is to

determine the weights that maximize the overall return on the portfolio subject to a certain

level of risk (or vice versa). In Markowitz mean-variance portfolio (MVP) theory, this risk

is taken to be the the standard deviation of the portfolio (Markowitz, 1952). As noted in

Luenberger (1997) & Merton (1980), the optimal portfolio weights or the optimal allocation

depends critically on the mean and covariance matrix of the individual asset returns, and

hence estimation of these quantities is central to MVP. As one of the goals in this paper is to

illustrate the efficacy of using CONCORD to obtain a stable covariance matrix estimate, we

shall consider the minimum variance portfolio problem, rather than the mean-variance

portfolio optimization problem. The former requires estimating only the covariance matrix

and thus presents an ideal setting for comparing covariance estimation methods in the port-

folio optimization context (see Chan et al. (1999) for more details). In particular, we aim

to compare the performance of CONCORD with other covariance estimation methods, for

the purposes of constructing a minimum variance portfolio. The performance of each of the

different methods and the associated strategies will be compared over a sustained period of

time in order to assess their respective merits.

5.3.1 Minimum variance portfolio rebalancing

The minimum variance portfolio selection problem is defined as follows. Given p risky assets, let $r_{it}$ denote the return of asset i over period t, defined as the change in its price over time period t divided by the price at the beginning of the period. As usual, let $\Sigma_t$ denote the covariance matrix of the daily returns $r_t^T = (r_{1t}, r_{2t}, \ldots, r_{pt})$. The portfolio weights $w_k^T = (w_{1k}, w_{2k}, \ldots, w_{pk})$ collect the weights $w_{ik}$ of assets $i = 1, \ldots, p$ in the portfolio for the k-th time period. A long or a short position for asset i during period k is given by the sign of $w_{ik}$: $w_{ik} > 0$ for long, and $w_{ik} < 0$ for short positions. The budget constraint can be written as $\mathbf{1}^T w_k = 1$, where $\mathbf{1}$ denotes the vector of all ones. Note that the risk of a given portfolio, as measured by the standard deviation of its return, is simply $(w_k^T \Sigma w_k)^{1/2}$.

The minimum variance portfolio selection problem for investment period k can now be formally defined as follows:

$$\text{minimize } w_k^T \Sigma w_k \quad \text{subject to } \mathbf{1}^T w_k = 1. \qquad (23)$$


As (23) above is a simple quadratic program, it has the analytic solution $w_k^\star = (\mathbf{1}^T \Sigma^{-1} \mathbf{1})^{-1} \Sigma^{-1} \mathbf{1}$. Note that the solution depends on the theoretical covariance matrix Σ. In practice, the parameter Σ has to be estimated.
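
A minimal sketch of this closed-form solution, solving a linear system rather than forming $\Sigma^{-1}$ explicitly (the function name min_var_weights is ours):

```python
import numpy as np

def min_var_weights(sigma):
    """Closed-form minimum variance weights w* = (1' Sigma^{-1} 1)^{-1} Sigma^{-1} 1."""
    ones = np.ones(sigma.shape[0])
    x = np.linalg.solve(sigma, ones)    # x = Sigma^{-1} 1, without inverting Sigma
    return x / (ones @ x)               # weights sum to one by construction
```

By construction, min_var_weights(sigma).sum() equals 1 up to floating-point error.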

The most basic approach to the portfolio selection problem often makes the unrealistic

assumption that returns are stationary in time. A standard approach to dealing with the

non-stationarity in such financial time series is to use a periodic rebalancing strategy. In

particular, at the beginning of each investment period k = 1, 2, . . . , K, portfolio weights wk =

(w1k, . . . , wpk)′ are computed from the previous Nest days of observed returns (Nest is called

the “estimation horizon”). These portfolio weights are then held constant for the duration

of each investment period. The process is repeated at the start of the next investment

period and is often referred to as “rebalancing.” More details of the rebalancing strategy are

provided in Supplemental section J.3.

5.3.2 Application to the Dow Jones Industrial Average

We now consider the problem of investing in the stocks that feature in the Dow Jones

Industrial Average (DJIA) index. The DJIA is a composite blue chip index consisting of 30

stocks (note that Kraft Foods (KFT) data was removed from our analysis due to its limited

data span²). Table 7 in Supplemental Section J.1 lists the 29 component stocks used in our

analysis.

Rebalancing time points were chosen to be every four weeks starting from 1995/02/18

to 2012/10/26 (approximately 17 years), and are shown in Table 8 in Supplemental Section

J.2. Start and end dates of each period are selected to be calendar weeks, and need not

coincide with a trading day. The total number of investment periods is 231, and the number

of trading days in each investment period varies between 15 and 20 days. We shall compare

the following five methods for estimating the covariance matrix: sample covariance, graphical

lasso (Glasso) of Friedman et al. (2008), CONCORD, condition number regularized estimator

(CondReg) of Won et al. (2012), and the Ledoit-Wolf estimator of Ledoit and Wolf (2004).

We consider various choices of Nest, in particular, Nest ∈ {35, 40, 45, 50, 75, 150, 225, 300}, in

our analysis. Note that once a choice for Nest is made, it is kept constant for all the 231

investment periods.

Note that for ℓ1-penalized regression methods such as the Glasso and CONCORD meth-

ods, a value for the penalty parameter has to be chosen. For the purposes of this study,

cross-validation was performed within each estimation horizon so as to minimize the resid-

ual sum of squares from out-of-sample prediction averaged over all stocks. Further details

² KFT was a component stock of the DJIA from 9/22/2008 to 9/13/2012. From 9/14/2012, KFT was replaced with United Health Group (UNH).


Nest   Sample  Glasso  CONCORD  CondReg  Ledoit-Wolf  DJIA
35     0.357   0.489   0.487    0.486    0.470        0.185
40     0.440   0.491   0.490    0.473    0.439        0.185
45     0.265   0.468   0.473    0.453    0.388        0.185
50     0.234   0.481   0.482    0.458    0.407        0.185
75     0.379   0.403   0.475    0.453    0.368        0.185
150    0.286   0.353   0.480    0.476    0.384        0.185
225    0.367   0.361   0.502    0.494    0.416        0.185
300    0.362   0.359   0.505    0.488    0.409        0.185

Table 5: Realized Sharpe ratio of different investment strategies corresponding to different estimators with various Nest. The maximum annualized Sharpe ratios for each row, and others within 1% of this maximum, are highlighted in bold.

are given in Supplemental Section J.4. The condition number regularized (CondReg) and

Ledoit-Wolf estimators each use different criteria to perform cross-validation. The reader is

referred to Won et al. (2012) and Ledoit and Wolf (2004) for details on the cross-validation

procedure for these methods.

For comparison purposes with Won et al. (2012), we use the following quantities to assess

the performance of the five MVR strategies: Realized return, Realized risk, Realized Sharpe

ratio (SR), Turnover, Size of the short side and Normalized wealth growth. Precise definitions

of these quantities are given in Supplemental Section J.5.

Table 5 gives the realized Sharpe ratios of all MVR strategies for the different choices of

estimation horizon Nest. The column DJIA stands for the passive index tracking strategy that

tracks the Dow Jones industrial average index. It is clear from Table 5 that the CONCORD

method performs uniformly well across different choices of estimation horizons.

Figure 2 shows normalized wealth growth over the trading horizon for the choice Nest =

225. The normalized wealth growth curve for another choice, Nest = 75, is provided in Supple-

mental section J.5. These plots demonstrate that CONCORD is either very competitive or

better than leading covariance estimation methods.

We also note that trading costs associated with CONCORD are the lowest for most

choices of estimation horizons, and are very comparable with CondReg for Nest = 35, 40 (see Table 12 in Supplemental Section J.5). Moreover, CONCORD also has by far the lowest

short side for most choices of estimation horizons. This property reduces the dependence

on borrowed capital for shorting stocks and is also reflected in the higher normalized wealth

growth.


[Figure: normalized wealth (value vs. date, 1995-2012) for Concord, CondReg, glasso, LedoitWolf, Sample and DJIA.]

Figure 2: Normalized wealth growth after adjusting for transaction costs (0.5% of principal) and borrowing costs (interest rate of 7% APR) with Nest = 225.

6 Large sample properties

In this section, we investigate the large sample properties of CONCORD: estimation consistency

and oracle properties under suitable regularity conditions. We adapt the

approach in Peng et al. (2009) with suitable modifications. We now let the dimension $p = p_n$

vary with n so that our treatment is relevant to high dimensional settings. Let $\{\bar{\Omega}_n\}_{n \ge 1}$ denote

the sequence of true inverse covariance matrices. As in Peng et al. (2009), for consistency

purposes, we assume the existence of suitably accurate estimates of the diagonal entries, and

consider the accuracy of the estimates of the off-diagonal entries obtained after running the

CONCORD algorithm with diagonal entries fixed. In particular, the following assumption is

made:

• (A0 - Accurate diagonal estimates) There exist estimates $\{\hat{\alpha}_{n,ii}\}_{1 \le i \le p_n}$ such that for any $\eta > 0$, there exists a constant $C > 0$ such that

$$\max_{1 \le i \le p_n} |\hat{\alpha}_{n,ii} - \bar{\omega}_{ii}| \le C \sqrt{\frac{\log n}{n}}$$

holds with probability larger than $1 - O(n^{-\eta})$.


Note that the theory that follows is valid when the estimates $\{\hat{\alpha}_{n,ii}\}_{1 \le i \le p_n}$ and the estimates of the off-diagonal entries are obtained from the same dataset. When $\limsup_{n \to \infty} p_n/n < 1$, Peng et al. (2009) show that the diagonal entries of $S^{-1}$ can be used as estimates of the

diagonal entries of Ω. However, no such general recipe is provided in Peng et al. (2009) for

the case pn > n. Nevertheless, establishing consistency in the above framework is useful, as

it indicates that the estimators obtained are statistically well-behaved when n and p both

increase to infinity.

For vectors $\omega^o \in \mathbb{R}^{p_n(p_n-1)/2}$ and $\omega^d \in \mathbb{R}_+^{p_n}$, the notation $L_n(\omega^o, \omega^d)$ stands for $L_{con}$ ($L_{con}$ is defined in (11)) evaluated at a matrix with off-diagonal entries $\omega^o$ and diagonal entries $\omega^d$. Let $\bar{\omega}^o_n = ((\bar{\omega}_{n,ij}))_{1 \le i < j \le p_n}$ denote the vector of off-diagonal entries of $\bar{\Omega}_n$, and let $\hat{\alpha}_{p_n} \in \mathbb{R}_+^{p_n}$ denote the vector with entries $\{\hat{\alpha}_{n,ii}\}_{1 \le i \le p_n}$. Let $\mathcal{A}_n$ denote the set of non-zero entries in the vector $\bar{\omega}^o_n$, and let $q_n = |\mathcal{A}_n|$. Let $\bar{\Sigma}_n = \bar{\Omega}_n^{-1}$ denote the true covariance matrix for every $n \ge 1$. The following standard assumptions are required.

• (A1 - Bounded eigenvalues) The eigenvalues of $\bar{\Omega}_n$ are bounded below by $\lambda_{min} > 0$, and bounded above by $\lambda_{max} < \infty$, uniformly for all n.

• (A2 - Sub-Gaussianity) The random vectors $Y_1, \ldots, Y_n$ are i.i.d. sub-Gaussian for every $n \ge 1$, i.e., there exists a constant $c > 0$ such that for every $x \in \mathbb{R}^{p_n}$, $E\left[e^{x'Y_i}\right] \le e^{c\, x' \bar{\Sigma}_n x}$, and for every i and j, there exists $\eta_j > 0$ such that $E\left[e^{t (Y^i_j)^2}\right] < K$ whenever $|t| < \eta_j$. Here K is independent of i and j.

• (A3 - Incoherence condition) There exists $\delta < 1$ such that for all $(i, j) \notin \mathcal{A}_n$,

$$\left| \bar{L}''_{ij,\mathcal{A}_n}(\bar{\Omega}_n) \left[ \bar{L}''_{\mathcal{A}_n,\mathcal{A}_n}(\bar{\Omega}_n) \right]^{-1} \mathrm{sign}\left(\bar{\omega}^o_{\mathcal{A}_n}\right) \right| \le \delta,$$

where for $1 \le i, j, t, s \le p_n$ satisfying $i < j$ and $t < s$,

$$\bar{L}''_{ij,ts}(\bar{\Omega}_n) := E_{\bar{\Omega}_n}\left( (L''_n(\bar{\Omega}_n))_{ij,ts} \right) = \bar{\Sigma}_{n,js} 1_{\{i=t\}} + \bar{\Sigma}_{n,it} 1_{\{j=s\}} + \bar{\Sigma}_{n,is} 1_{\{j=t\}} + \bar{\Sigma}_{n,jt} 1_{\{i=s\}}.$$

Conditions analogous to (A3) have been used in Zhao and Yu (2006), Peng et al. (2009),

Meinshausen and Bühlmann (2006) to establish high-dimensional model selection con-

sistency. In the context of lasso regression, Zhao and Yu (2006) show that such a

condition (which they refer to as an irrepresentable condition) is almost necessary and

sufficient for model selection consistency, and provide some examples when this condi-

tion is satisfied. We provide some examples of situations where the condition (A3) is

satisfied, along the lines of Zhao and Yu (2006), in Supplemental section M.


Define $\theta^o_n = ((\theta_{n,ij}))_{1 \le i < j \le p_n} \in \mathbb{R}^{p_n(p_n-1)/2}$ by $\theta_{n,ij} = \frac{\omega_{n,ij}}{\sqrt{\hat{\alpha}_{n,ii}\hat{\alpha}_{n,jj}}}$ for $1 \le i < j \le p_n$. Let $s_n = \min_{(i,j) \in \mathcal{A}_n} |\bar{\omega}_{n,ij}|$. The assumptions above can be used to establish the following theorem.

Theorem 2. Suppose that assumptions (A0)-(A3) are satisfied. Suppose $p_n = O(n^{\kappa})$ for some $\kappa > 0$, $q_n = o\left(\sqrt{n/\log n}\right)$, $\sqrt{q_n \log n / n} = o(\lambda_n)$, $\lambda_n \sqrt{n/\log n} \to \infty$, $\frac{s_n}{\sqrt{q_n}\,\lambda_n} \to \infty$ and $\sqrt{q_n}\,\lambda_n \to 0$, as $n \to \infty$. Then there exists a constant $C$ such that for any $\eta > 0$, the following events hold with probability at least $1 - O(n^{-\eta})$:

• There exists a minimizer $\hat{\omega}^o_n = ((\hat{\omega}_{n,ij}))_{1 \le i < j \le p_n}$ of $Q_{con}(\omega^o, \hat{\alpha}_n)$.

• Any minimizer $\hat{\omega}^o_n$ of $Q_{con}(\omega^o, \hat{\alpha}_n)$ satisfies $\|\hat{\omega}^o_n - \bar{\omega}^o_n\|_2 \le C \sqrt{q_n}\,\lambda_n$ and $\mathrm{sign}(\hat{\omega}_{n,ij}) = \mathrm{sign}(\bar{\omega}_{n,ij})$ for all $1 \le i < j \le p_n$.

The proof of the above theorem is provided in Supplemental section K.

7 Conclusion

This paper proposes a novel regression based graphical model selection method that aims to

overcome some of the shortcomings of current methods, but at the same time retain their

respective strengths. We first place the highly useful SPACE method in an optimization

framework, which in turn allows us to identify SPACE with a specific objective function.

These and other insights lead to the formulation of the CONCORD objective function. It

is then shown that the CONCORD objective function is comprised of quadratic forms, is

convex, and can be regarded as a penalized pseudo-likelihood. A coordinate-wise descent

algorithm that minimizes this objective, via closed form iterates, is proposed, and subse-

quently analyzed. The convergence of this coordinate-wise descent algorithm is established

rigorously, thus ensuring that CONCORD leads to well defined symmetric partial correla-

tion estimates that are always computable - a guarantee that is not available with popular

regression based methods. Large sample properties of CONCORD establish consistency of

the method as both the sample size and dimension tend to infinity. The performance of

CONCORD is also illustrated via simulations and is shown to be competitive in terms of

graphical model selection accuracy and timing. CONCORD is then applied to a biomedical

dataset and to a finance dataset, leading to novel findings. Last but not least, a framework

that unifies all pseudo-likelihood methods is established, yielding important insights.

Given the attractive properties of CONCORD, a natural question that arises is whether

one should move away from penalized likelihood estimation (such as Glasso) and rather use

only pseudo-likelihood methods. We note that CONCORD is attractive over Glasso for

several reasons: Firstly, it does not assume Gaussianity and is hence more flexible. Secondly,


the computational complexity per iteration of CONCORD is lower than that of Glasso.

Thirdly, CONCORD is faster (in terms of wall clock time) than Glasso by an entire order

of magnitude in higher dimensions. Fourthly, CONCORD delivers better model selection

performance. It is however important to note that if there is a compelling reason to assume

multivariate Gaussianity (which some applications may warrant), then using both Glasso and

CONCORD can potentially be useful for affirming multivariate associations of interest. In

this sense, the two classes of methods could be complementary in many practical applications.


References

Banerjee, O., El Ghaoui, L., and D’Aspremont, A. (2008). Model Selection Through Sparse

Maximum Likelihood Estimation for Multivariate Gaussian or Binary Data. The Journal

of Machine Learning Research, 9:485–516.

Besag, J. (1975). Statistical Analysis of Non-Lattice Data. Journal of the Royal Statistical

Society. Series D (The Statistician), 24(3):179–195.

Bibby, R. A., Tang, C., Faisal, A., Drosopoulos, K., Lubbe, S., Houlston, R., Bayliss, R.,

and Linardopoulos, S. (2009). A cancer-associated aurora A mutant is mislocalized and

misregulated due to loss of interaction with TPX2. The Journal of Biological Chemistry,

284(48):33177–84.

Carter, S. L., Brechbuhler, C. M., Griffin, M., and Bond, A. T. (2004). Gene co-expression

network topology provides a framework for molecular characterization of cellular state.

Bioinformatics (Oxford, England), 20(14):2242–50.

Chan, L. K., Karceski, J., and Lakonishok, J. (1999). On portfolio optimization: Forecasting

covariances and choosing the risk model. Working Paper 7039, National Bureau of Economic

Research.

Chang, H. Y. et al. (2005). Robustness, scalability, and integration of a wound-response

gene expression signature in predicting breast cancer survival. Proceedings of the National

Academy of Sciences of the United States of America, 102(10):3738–3743.

Fawcett, T. (2006). An introduction to ROC analysis. Pattern Recognition Letters, 27(8):861–

874.

Friedman, J., Hastie, T., and Tibshirani, R. (2008). Sparse inverse covariance estimation

with the graphical lasso. Biostatistics, 9(3):432–441.

Friedman, J., Hastie, T., and Tibshirani, R. (2010). Applications of the lasso and grouped

lasso to the estimation of sparse graphical models. Technical report, Stanford University.

Han, J.-D. J. et al. (2004). Evidence for dynamically organized modularity in the yeast

protein-protein interaction network. Nature, 430(6995):88–93.

Hastie, T., Tibshirani, R., and Friedman, J. H. (2009). The Elements of Statistical Learning.

Springer.


Jensen, S. T., Johansen, S., and Lauritzen, S. L. (1991). Globally Convergent Algorithms

for Maximizing a Likelihood Function. Biometrika, 78(4):867–877.

Jeong, H., Mason, S. P., Barabasi, A.-L., and Oltvai, Z. N. (2001). Lethality and centrality

in protein networks. Nature, 411(6833):41–42.

Lauritzen, S. L. (1996). Graphical Models. Oxford University Press, USA.

Ledoit, O. and Wolf, M. (2004). A well-conditioned estimator for large-dimensional covariance

matrices. Journal of Multivariate Analysis, 88(2):365–411.

Lee, J. D. and Hastie, T. J. (2014). Learning the structure of mixed graphical models. to

appear in Journal of Computational and Graphical Statistics.

Luenberger, D. G. (1997). Investment Science. Oxford University Press, USA.

Markowitz, H. (1952). Portfolio Selection. The Journal of Finance, 7(1):77–91.

Maxwell, C. A., Benítez, J., Gómez-Baldó, L., Osorio, A., Bonifaci, N., Fernández-Ramires, R., Costes, S. V., Guinó, E., Chen, H., Evans, G. J. R., Mohan, P., Català, I., Petit, A., Aguilar, H., Villanueva, A., Aytes, A., Serra-Musach, J., Rennert, G., Lejbkowicz, F., Peterlongo, P., Manoukian, S., Peissel, B., Ripamonti, C. B., Bonanni, B., Viel, A., Allavena, A., Bernard, L., Radice, P., Friedman, E., Kaufman, B., Laitman, Y., Dubrovsky, M., Milgrom, R., Jakubowska, A., Cybulski, C., Górski, B., Jaworska, K., Durda, K., Sukiennicki, G., Lubiński, J., Shugart, Y. Y., Domchek, S. M., Letrero, R., Weber, B. L., Hogervorst, F. B. L., Rookus, M. A., Collee, J. M., Devilee, P., Ligtenberg, M. J., van der Luijt, R. B., Aalfs, C. M., Waisfisz, Q., Wijnen, J., van Roozendaal, C. E. P., Easton, D. F., Peock, S., Cook, M., Oliver, C., Frost, D., Harrington, P., Evans, D. G., Lalloo, F., Eeles, R., Izatt, L., Chu, C., Eccles, D., Douglas, F., Brewer, C., Nevanlinna, H., Heikkinen, T., Couch, F. J., Lindor, N. M., Wang, X., Godwin, A. K., Caligo, M. A., Lombardi, G., Loman, N., Karlsson, P., Ehrencrona, H., von Wachenfeldt, A., Bjork Barkardottir, R., Hamann, U., Rashid, M. U., Lasa, A., Caldés, T., Andrés, R., Schmitt, M., Assmann, V., Stevens, K., Offit, K., Curado, J., Tilgner, H., Guigó, R., Aiza, G., Brunet, J., Castellsagué, J., Martrat, G., Urruticoechea, A., Blanco, I., Tihomirova, L., Goldgar, D. E., Buys, S., John, E. M., Miron, A., Southey, M., Daly, M. B., Schmutzler, R. K., Wappenschmidt, B., Meindl, A., Arnold, N., Deissler, H., Varon-Mateeva, R., Sutter, C., Niederacher, D., Imyanitov, E., Sinilnikova, O. M., Stoppa-Lyonnet, D., Mazoyer, S., Verny-Pierre, C., Castera, L., de Pauw, A., Bignon, Y.-J., Uhrhammer, N., Peyrat, J.-P., Vennin, P., Fert Ferrer, S., Collonge-Rame, M.-A., Mortemousque, I., Spurdle, A. B., Beesley, J., Chen, X., Healey, S., Barcellos-Hoff, M. H., Vidal, M., Gruber, S. B., Lázaro, C., Capellá, G., McGuffog, L., Nathanson, K. L., Antoniou, A. C., Chenevix-Trench, G., Fleisch, M. C., Moreno, V., Pujana, M. A., HEBON, EMBRACE, SWE-BRCA, BCFR, GEMO Study Collaborators, and kConFab (2011). Interplay between BRCA1 and RHAMM regulates epithelial apicobasal polarization and may influence risk of breast cancer. PLoS Biol, 9(11):e1001199.

Mazumder, R. and Hastie, T. (2012). Exact Covariance Thresholding into Connected Compo-

nents for Large-Scale Graphical Lasso. The Journal of Machine Learning Research, 13:781–

794.

Meinshausen, N. and Bühlmann, P. (2006). High-dimensional graphs and variable selection

with the Lasso. The Annals of Statistics, 34(3):1436–1462.

Merton, R. C. (1980). On estimating the expected return on the market: An exploratory

investigation. Working Paper 444, National Bureau of Economic Research.

Newman, M. (2003). The structure and function of complex networks. SIAM Review,

45(2):167–256.

Peng, J., Wang, P., Zhou, N., and Zhu, J. (2009). Partial Correlation Estimation by Joint

Sparse Regression Models. Journal of the American Statistical Association, 104(486):735–

746.

Rocha, G., Zhao, P., and Yu, B. (2008). A path following algorithm for Sparse Pseudo-

Likelihood Inverse Covariance Estimation (SPLICE). Technical report, Statistics Depart-

ment, UC Berkeley, Berkeley, CA.

Speed, T. P. and Kiiveri, H. T. (1986). Gaussian Markov Distributions over Finite Graphs.

The Annals of Statistics, 14(1):138–150.

Tseng, P. (1988). Coordinate ascent for maximizing nondifferentiable concave functions.

Technical report, Massachusetts Institute of Technology.

Tseng, P. (2001). Convergence of a block coordinate descent method for nondifferentiable

minimization. Journal of Optimization Theory and Applications, 109(3):475–494.

Won, J.-H., Lim, J., Kim, S.-J., and Rajaratnam, B. (2012). Condition Number Regularized

Covariance Estimation. Journal of the Royal Statistical Society: Series B.

Xu, P.-F., Guo, J., and He, X. (2011). An Improved Iterative Proportional Scaling Proce-

dure for Gaussian Graphical Models. Journal of Computational and Graphical Statistics,

20(2):417–431.


Zangwill, W. (1969). Nonlinear programming: a unified approach. Prentice-Hall international

series in management. Prentice-Hall, Englewood Cliffs, NJ.

Zhao, P. and Yu, B. (2006). On Model Selection Consistency of Lasso. Journal of Machine

Learning Research, 7:2541–2563.


Supplemental Section

A Proof of Lemma 2

Let $\mathbf{Y}$ denote the $n \times p$ matrix with $j$th column given by $Y_j$ for $j = 1, 2, \ldots, p$. Define

$$Q_{sym}(\alpha, \tilde{\Omega}) = \frac{1}{2}\left( \sum_{j=1}^{p} L_{sym,j}(\alpha_{jj}, \tilde{\Omega}_j) \right) + \lambda \left( \sum_{1 \le i < j \le p} |\omega_{ij}| \right)$$

so that

$$L_{sym,j}(\alpha_{jj}, \tilde{\Omega}_j) = n \log \alpha_{jj} + \frac{1}{\alpha_{jj}} \left\| Y_j + \mathbf{Y}\tilde{\Omega}_j \alpha_{jj} \right\|_2^2, \qquad (24)$$

where $\alpha = (\alpha_{11}\ \alpha_{22}\ \cdots\ \alpha_{pp})'$, $\alpha_{jj} = 1/\omega_{jj}$ and $\tilde{\Omega}_j$ is the $j$th column of $\tilde{\Omega}$. Recall that $\tilde{\Omega}$ is the matrix $\Omega$ with zeros in place of the diagonal entries. It follows that

$$\frac{\partial Q_{sym}(\alpha, \tilde{\Omega})}{\partial \alpha_{jj}} = \frac{n}{\alpha_{jj}} - \frac{Y_j' Y_j}{\alpha_{jj}^2} + \tilde{\Omega}_j' \mathbf{Y}'\mathbf{Y} \tilde{\Omega}_j, \quad \text{and} \quad \frac{\partial^2 Q_{sym}(\alpha, \tilde{\Omega})}{\partial \alpha_{jj}^2} = -\frac{n}{\alpha_{jj}^2} + \frac{2\, Y_j' Y_j}{\alpha_{jj}^3}. \qquad (25)$$

It is clear that in general $\partial^2 Q_{sym}(\alpha, \tilde{\Omega})/\partial \alpha_{jj}^2 \not\ge 0$. Hence, $Q_{sym}(\alpha, \tilde{\Omega})$ is not convex.

B Proof of Lemma 3

Proof. i) Rewrite the SPLICE objective function as $Q_{spl}(B, D) = L_{spl}(B, D) + \lambda \sum_{i<j} |\beta_{ij}|$, where

$$L_{spl}(B, D) = \frac{1}{2}\left[ n \log \det(D^2) + \mathrm{tr}(D^{-2}A) \right],$$

and $A = [a_{ij}] = (I - B)\mathbf{Y}'\mathbf{Y}(I - B')$. The function $L_{spl}(B, D)$ with all variables fixed except $d_{jj}$ is given by

$$L_{spl,j}(B, d_{jj}) = \frac{1}{2}\left[ n \log d_{jj}^2 + \frac{a_{jj}}{d_{jj}^2} \right] + \text{constants}.$$

Now,

$$\frac{\partial Q_{spl}(B, D)}{\partial d_{jj}} = \frac{n}{d_{jj}} - \frac{a_{jj}}{d_{jj}^3}, \qquad \frac{\partial^2 Q_{spl}(B, D)}{\partial d_{jj}^2} = -\frac{n}{d_{jj}^2} + \frac{3 a_{jj}}{d_{jj}^4}.$$

It is clear that in general $\partial^2 Q_{spl}(B, D)/\partial d_{jj}^2 \not\ge 0$. Hence $Q_{spl}(B, D)$ is not convex.

ii) Similarly, define $Q^*_{spl}(B, C) = L^*_{spl}(B, C) + \lambda \sum_{i<j} |\beta_{ij}|$, where

$$L^*_{spl}(B, C) = \frac{1}{2}\left[ n \log \det(C^{-2}) + \mathrm{tr}(C^2 A) \right].$$

It is clear that for a fixed $C$, $L^*_{spl}(B, C)$ is a convex function in $B$ (Rocha et al., 2008). Now, for a fixed $B$, let

$$L^*_{spl,j}(B, c_{jj}) = \frac{1}{2}\left[ -2n \log c_{jj} + c_{jj}^2 a_{jj} \right] + \text{constants}.$$

Then

$$\frac{\partial Q^*_{spl}(B, C)}{\partial c_{jj}} = -\frac{n}{c_{jj}} + c_{jj} a_{jj}, \qquad \frac{\partial^2 Q^*_{spl}(B, C)}{\partial c_{jj}^2} = \frac{n}{c_{jj}^2} + a_{jj}.$$

Note that $\partial^2 Q^*_{spl}(B, C)/\partial c_{jj}^2 \ge 0$ since $a_{jj} \ge 0$. To see that $a_{jj} \ge 0$, note that $A = (I - B)\mathbf{Y}'\mathbf{Y}(I - B') = G'G$, where $G = \mathbf{Y}(I - B')$. Now, $a_{jj} = G_{\bullet j}' G_{\bullet j} = \|G_{\bullet j}\|^2 \ge 0$.

C Proof of Lemma 4

Note that for $1 \le i \le p$,

$$Q_{con}(\Omega) = -n \log \omega_{ii} + \frac{n}{2}\left( \omega_{ii}^2 s_{ii} + 2\omega_{ii} \sum_{j \ne i} \omega_{ij} s_{ij} \right) + \text{terms independent of } \omega_{ii}, \qquad (26)$$

where $s_{ij} = Y_i' Y_j / n$. Hence,

$$\frac{\partial}{\partial \omega_{ii}} Q_{con}(\Omega) = 0 \iff -\frac{1}{\omega_{ii}} + \omega_{ii} s_{ii} + \sum_{j \ne i} \omega_{ij} s_{ij} = 0 \iff \omega_{ii} = \frac{-\sum_{j \ne i} \omega_{ij} s_{ij} + \sqrt{\left( \sum_{j \ne i} \omega_{ij} s_{ij} \right)^2 + 4 s_{ii}}}{2 s_{ii}}.$$

Note that since $\omega_{ii} > 0$, the positive root has been retained as the solution.

Also, for $1 \le i < j \le p$,

$$Q_{con}(\Omega) = n\, \frac{s_{ii} + s_{jj}}{2}\, \omega_{ij}^2 + n \left( \sum_{j' \ne j} \omega_{ij'} s_{jj'} + \sum_{i' \ne i} \omega_{i'j} s_{ii'} \right) \omega_{ij} + \lambda |\omega_{ij}| + \text{terms independent of } \omega_{ij}. \qquad (27)$$

It follows that

$$(T_{ij}(\Omega))_{ij} = \frac{S_{\frac{\lambda}{n}}\left( -\left( \sum_{j' \ne j} \omega_{ij'} s_{jj'} + \sum_{i' \ne i} \omega_{i'j} s_{ii'} \right) \right)}{s_{ii} + s_{jj}},$$

where $S_\eta$ is the soft-thresholding operator given by $S_\eta(x) = \mathrm{sign}(x)(|x| - \eta)_+$.
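
For illustration, the following is a minimal dense-matrix sketch of one cycle of these coordinatewise updates. The function names soft and concord_sweep are ours, and an efficient implementation would instead maintain the residual vectors of Lemma 6 rather than recomputing inner products.

```python
import numpy as np

def soft(x, t):
    """Soft-thresholding operator S_t(x) = sign(x) (|x| - t)_+."""
    return np.sign(x) * max(abs(x) - t, 0.0)

def concord_sweep(omega, S, lam, n):
    """One full cycle of the coordinatewise updates of Lemma 4 on a dense
    symmetric matrix `omega`, with S the sample covariance (Y'Y / n)."""
    p = omega.shape[0]
    for i in range(p):
        c = omega[i] @ S[i] - omega[i, i] * S[i, i]        # sum_{j != i} w_ij s_ij
        omega[i, i] = (-c + np.sqrt(c * c + 4.0 * S[i, i])) / (2.0 * S[i, i])
    for i in range(p):
        for j in range(i + 1, p):
            a = omega[i] @ S[j] - omega[i, j] * S[j, j]    # sum_{j' != j} w_ij' s_jj'
            b = omega[j] @ S[i] - omega[i, j] * S[i, i]    # sum_{i' != i} w_i'j s_ii'
            omega[i, j] = omega[j, i] = soft(-(a + b), lam / n) / (S[i, i] + S[j, j])
    return omega
```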

D Proof of Lemma 5

Let $Y_j$ denote the $j$th column of the data matrix $\mathbf{Y}$. Then, using the identity

$$\sum_{k=1}^{p} \omega_{ik} s_{jk} = \omega_{ij} s_{jj} + \sum_{k \ne j} \omega_{ik} s_{jk} = \omega_{ii} s_{ij} + \sum_{k \ne i} \omega_{ik} s_{jk},$$

we obtain

$$\sum_{k \ne j} \omega_{ik} s_{jk} = -\omega_{ij} s_{jj} + \omega_{ii} \left( s_{ij} + \sum_{k \ne i} \frac{\omega_{ik}}{\omega_{ii}} s_{jk} \right) = -\omega_{ij} s_{jj} + \frac{\omega_{ii}}{n}\, Y_j' \left( Y_i + \sum_{k \ne i} \frac{\omega_{ik}}{\omega_{ii}} Y_k \right) = -\omega_{ij} s_{jj} + \frac{\omega_{ii}}{n}\, Y_j' r_i,$$

where $r_i = Y_i + \sum_{k \ne i} \frac{\omega_{ik}}{\omega_{ii}} Y_k$ is an $n$-vector of residuals after regressing the $i$th variable on the rest.

E Proof of Lemma 6

1. The result follows easily from inspecting $r_k$ and $r_l$.

2. If $\omega_{kl}$ is updated to $\omega^*_{kl}$, it follows from part 1 that among all the residual vectors, only $r_k$ and $r_l$ change values. The residual vector $r_k$ can be updated as follows:

$$r^*_k = r_k + \frac{(\omega^*_{kl} - \omega_{kl})}{\omega_{kk}}\, Y_l.$$

Clearly, this update requires $O(n)$ operations. The vector $r_l$ can be updated similarly.

3. The result follows easily from inspecting $r_k$.

4. If $\omega_{kk}$ is updated to $\omega^*_{kk}$, it follows from part 3 that among all the residual vectors, only $r_k$ changes value. The residual vector $r_k$ can be updated as follows:

$$r^*_k = (r_k - Y_k)\, \frac{\omega_{kk}}{\omega^*_{kk}} + Y_k.$$

Clearly, this update requires $O(n)$ operations.
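
A sketch of these two $O(n)$ residual refreshes (the function names are ours):

```python
import numpy as np

def update_residual_offdiag(r_k, Y_l, w_kk, w_kl_old, w_kl_new):
    """Part 2 of Lemma 6: refresh r_k in O(n) after w_kl changes."""
    return r_k + (w_kl_new - w_kl_old) / w_kk * Y_l

def update_residual_diag(r_k, Y_k, w_kk_old, w_kk_new):
    """Part 4 of Lemma 6: refresh r_k in O(n) after w_kk changes."""
    return (r_k - Y_k) * (w_kk_old / w_kk_new) + Y_k
```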

F Proof of Lemma 7

Proof. (CONCORD) Let $A = nS$. Expanding the $\ell_2$-norm of the residual, we have

$$\left\| \omega_{ii} Y_i + \sum_{j \ne i} \omega_{ij} Y_j \right\|_2^2 = \left\| \sum_{j=1}^{p} \omega_{ij} Y_j \right\|_2^2 = \|\mathbf{Y}\omega_{i\bullet}\|_2^2 = \omega_{i\bullet}' \mathbf{Y}'\mathbf{Y} \omega_{i\bullet} = \omega_{i\bullet}' A\, \omega_{i\bullet}.$$

Hence, (18) is equivalent to

$$L_{con}(\Omega) = \frac{1}{2} \sum_{i=1}^{p} \left( -2n \log \omega_{ii} + \omega_{i\bullet}' A\, \omega_{i\bullet} \right) = -n \log \left( \prod_{i=1}^{p} \omega_{ii} \right) + \frac{n}{2} \mathrm{tr}(\Omega S \Omega) = \frac{n}{2} \left( -\log \det \Omega_D^2 + \mathrm{tr}(S\Omega^2) \right).$$

Hence, $G_{con}(\Omega) = \Omega_D$ and $H_{con}(\Omega) = \Omega^2$.

(SPACE with unit weights) Reparameterizing (19) using the identity $-\rho^{ij}\sqrt{\omega_{jj}/\omega_{ii}} = \omega_{ij}/\omega_{ii}$, the $\ell_2$-norm of the residual can be expressed as follows:

$$\left\| Y_i + \sum_{j \ne i} \frac{\omega_{ij}}{\omega_{ii}} Y_j \right\|_2^2 = \left\| \frac{1}{\omega_{ii}} \left( \omega_{ii} Y_i + \sum_{j \ne i} \omega_{ij} Y_j \right) \right\|_2^2 = \frac{1}{\omega_{ii}^2}\, \omega_{i\bullet}' A\, \omega_{i\bullet}.$$

Hence, (19) is equivalent to

$$L_{spc,1}(\Omega) = -\frac{n}{2} \log \det \Omega_D + \frac{1}{2} \sum_{i=1}^{p} \frac{1}{\omega_{ii}^2}\, \omega_{i\bullet}' A\, \omega_{i\bullet} = -\frac{n}{2} \log \det \Omega_D + \frac{n}{2} \sum_{i=1}^{p} \frac{\omega_{i\bullet}'}{\omega_{ii}}\, S\, \frac{\omega_{i\bullet}}{\omega_{ii}} = \frac{n}{2} \left( -\log \det \Omega_D + \mathrm{tr}(S \Omega \Omega_D^{-2} \Omega) \right).$$

Therefore, $G_{spc,1}(\Omega) = \Omega_D$ and $H_{spc,1}(\Omega) = \Omega \Omega_D^{-2} \Omega$.

(SPACE with $\omega_{ii}$ weights) Similar to the analysis for SPACE with unit weights, the $\ell_2$-norm of the residual for the SPACE2 formulation (i.e., with weights $\omega_{ii}$) can be expressed as follows:

$$\omega_{ii} \left\| Y_i - \sum_{j \ne i} \rho^{ij} \sqrt{\frac{\omega_{jj}}{\omega_{ii}}}\, Y_j \right\|_2^2 = \omega_{ii} \left( \frac{1}{\omega_{ii}^2}\, \omega_{i\bullet}' A\, \omega_{i\bullet} \right) = \frac{1}{\omega_{ii}}\, \omega_{i\bullet}' A\, \omega_{i\bullet}.$$

Hence, (20) is equivalent to

$$L_{spc,2}(\Omega) = -\frac{n}{2} \log \det \Omega_D + \frac{1}{2} \sum_{i=1}^{p} \frac{1}{\omega_{ii}}\, \omega_{i\bullet}' A\, \omega_{i\bullet} = -\frac{n}{2} \log \det \Omega_D + \frac{n}{2} \mathrm{tr}(\Omega_D^{-1/2} \Omega S \Omega \Omega_D^{-1/2}) = \frac{n}{2} \left( -\log \det \Omega_D + \mathrm{tr}(S \Omega \Omega_D^{-1} \Omega) \right).$$

Therefore, $G_{spc,2}(\Omega) = \Omega_D$ and $H_{spc,2}(\Omega) = \Omega \Omega_D^{-1} \Omega$.

(SYMLASSO) Reparameterizing (21) by $\alpha_{jj} = 1/\omega_{jj}$ and $-\rho^{ij}\sqrt{\omega_{jj}/\omega_{ii}} = \omega_{ij}/\omega_{ii}$ yields (20). It follows that $G_{sym}(\Omega) = \Omega_D$ and $H_{sym}(\Omega) = \Omega \Omega_D^{-1} \Omega$.

(SPLICE) Reparameterizing (22) by $d_{ii}^2 = 1/\omega_{ii}$ and $\beta_{ij} = \rho^{ij}\sqrt{\omega_{jj}/\omega_{ii}}$ yields (20). It follows that $G_{spl}(\Omega) = \Omega_D$ and $H_{spl}(\Omega) = \Omega \Omega_D^{-1} \Omega$.
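
As a quick numerical sanity check, the following sketch verifies on random data that the penalized-regression form of $L_{con}$ above matches $\frac{n}{2}(-\log \det \Omega_D^2 + \mathrm{tr}(S\Omega^2))$:

```python
import numpy as np

rng = np.random.default_rng(1)
n, p = 50, 4
Y = rng.standard_normal((n, p))
S = Y.T @ Y / n                                          # so that A = nS = Y'Y
omega = rng.standard_normal((p, p))
omega = (omega + omega.T) / 2
np.fill_diagonal(omega, np.abs(np.diag(omega)) + 1.0)    # positive diagonal

# left side: (1/2) sum_i (-2n log w_ii + w_i.' A w_i.)
lhs = sum(-n * np.log(omega[i, i]) + 0.5 * np.linalg.norm(Y @ omega[i]) ** 2
          for i in range(p))
# right side: (n/2)(-log det(Omega_D^2) + tr(S Omega^2))
rhs = 0.5 * n * (-2.0 * np.log(np.diag(omega)).sum() + np.trace(S @ omega @ omega))
assert np.isclose(lhs, rhs)
```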

G Effect of correction factor

Following steps similar to the proof of Lemma 4, the update formulas for $Q_{con}(\Omega) = L_{con}(\Omega) + \lambda \sum_{i<j} |\omega_{ij}|$ of (18) can be shown to be

$$(T_{kk}(\Omega))_{kk} = \frac{-\sum_{j \ne k} \omega_{kj} s_{kj} + \sqrt{\left( \sum_{j \ne k} \omega_{kj} s_{kj} \right)^2 + 2 s_{kk}}}{2 s_{kk}}, \qquad (28)$$

$$(T_{kl}(\Omega))_{kl} = \frac{S_{\frac{\lambda}{n}}\left( -\left( \sum_{j \ne l} \omega_{kj} s_{jl} + \sum_{j \ne k} \omega_{lj} s_{jk} \right) \right)}{s_{kk} + s_{ll}}. \qquad (29)$$


G.1 Numerical example

Analysis of a dataset (n = 1000) generated from the following Ω was used for this example:

$$\Omega = \begin{pmatrix} 1.0 & 0.3 & 0.0 \\ 0.3 & 1.0 & 0.3 \\ 0.0 & 0.3 & 1.0 \end{pmatrix}.$$

Without penalty, i.e., λ = 0, the computed solutions $\hat{\Omega}_{con}$ from using CONCORD and $\hat{\Omega}_{uncorrected}$ from using update formulas (28) and (29) are

$$\hat{\Omega}_{uncorrected} = \begin{pmatrix} 0.675 & 0.089 & -0.015 \\ 0.089 & 0.658 & 0.117 \\ -0.015 & 0.117 & 0.668 \end{pmatrix}, \qquad \hat{\Omega}_{con} = \begin{pmatrix} 0.974 & 0.257 & 0.007 \\ 0.257 & 0.983 & 0.344 \\ 0.007 & 0.344 & 0.978 \end{pmatrix}.$$

It is clear that the estimate $\hat{\Omega}_{con}$, which uses the correction factor, yields better parameter estimates.

H Proof of Theorem 1

Khare and Rajaratnam (2014) establish convergence of the cyclic coordinatewise minimization algorithm for a general class of objective functions. The proof of convergence for CONCORD relies on showing that the corresponding objective function is a special case of the general class of objective functions considered in Khare and Rajaratnam (2014). A more detailed version of the following argument can be found in (Khare and Rajaratnam, 2014, Section 4.1). We provide the main steps here for convenience and completeness.

Let $y = y(\Omega) \in \mathbb{R}^{p^2}$ denote a vectorized version of $\Omega$ obtained by shifting the corresponding diagonal entry to the bottom of each column of $\Omega$, and then stacking the columns on top of each other. Let $P^i$ denote the $p \times p$ permutation matrix such that $P^i z = (z_1, \cdots, z_{i-1}, z_{i+1}, \cdots, z_p, z_i)^T$ for every $z \in \mathbb{R}^p$. It follows by the definition of $y$ that

$$y = y(\Omega) = \left( (P^1 \Omega_{\cdot 1})^T, (P^2 \Omega_{\cdot 2})^T, \cdots, (P^p \Omega_{\cdot p})^T \right)^T.$$

Let $x = x(\Omega) \in \mathbb{R}^{p(p+1)/2}$ be the symmetric version of $y$, obtained by removing all $\omega_{ij}$ with $i > j$ from $y$. More precisely,

$$x = x(\Omega) = (\omega_{11}, \omega_{12}, \omega_{22}, \cdots, \omega_{1p}, \omega_{2p}, \cdots, \omega_{pp})^T.$$

Let $P$ be the $p^2 \times p(p+1)/2$ matrix such that every entry of $P$ is either 0 or 1, exactly one entry in each row of $P$ is equal to 1, and $y = Px$. Let $\tilde{S}$ be a $p^2 \times p^2$ block diagonal matrix with $p$ diagonal blocks, the $i$th diagonal block being equal to $S^i := \frac{1}{2} P^i S (P^i)^T$, where $S = \frac{1}{n} \mathbf{Y}^T \mathbf{Y}$. It follows that

$$\frac{1}{2} \sum_{i=1}^{p} \Omega_{\cdot i}^T S \Omega_{\cdot i} = \frac{1}{2} \sum_{i=1}^{p} (P^i \Omega_{\cdot i})^T (P^i S (P^i)^T)(P^i \Omega_{\cdot i}) = y^T \tilde{S} y = x^T P^T \tilde{S} P x. \qquad (30)$$

Note that for every $1 \le i \le p$, the matrix $S^i = \frac{1}{2} P^i S (P^i)^T$ is positive semi-definite. Let $\tilde{S}^{1/2}$ denote the $p^2 \times p^2$ block diagonal matrix with $p$ diagonal blocks, such that the $i$th diagonal block is given by $(S^i)^{1/2}$. Let $E = \tilde{S}^{1/2} P$. It follows by (30) that

$$\frac{1}{2} \sum_{i=1}^{p} \Omega_{\cdot i}^T S \Omega_{\cdot i} = (Ex)^T (Ex). \qquad (31)$$

By the definition of $x(\Omega)$, we obtain

$$\omega_{ii} = x_{\frac{i(i+1)}{2}} \qquad (32)$$

for every $1 \le i \le p$. Let

$$S_0 = \left\{ j : 1 \le j \le \frac{p(p+1)}{2},\ j \ne \frac{i(i+1)}{2} \text{ for any } 1 \le i \le p \right\},$$

and

$$\mathcal{X} = \left\{ x \in \mathbb{R}^{p(p+1)/2} : x_j \ge 0 \text{ for every } j \in S_0^c \right\}.$$

It follows by (11), (31) and (32) that the CONCORD algorithm can be viewed as a cyclic coordinatewise minimization algorithm for minimizing the function

$$Q_{con}(x) = n \left( x^T E^T E x - \sum_{i \in S_0^c} \log x_i + \frac{\lambda}{n} \sum_{j \in S_0} |x_j| \right), \qquad (33)$$

subject to $x \in \mathcal{X}$. For every $1 \le i \le p(p+1)/2$, there exist $1 \le k, l \le p$ such that $x_i = \omega_{kl}$. Note that $\|E_{\cdot i}\|^2 = \frac{S_{kk} + S_{ll}}{2} > 0$. It also follows from (Khare and Rajaratnam, 2014, Lemma 4.1) that for every $\xi \in \mathbb{R}$, the set $R_\xi := \{ x \in \mathcal{X} : Q_{con}(x) \le \xi \}$ is bounded, in the sense that for every $i \in S_0$, $x_i$ is uniformly bounded above and below, and for every $i \in S_0^c$, $x_i$ is uniformly bounded above and bounded below away from zero. It follows by (Khare and Rajaratnam, 2014, Theorem 3.1) that the sequence of iterates produced by the CONCORD algorithm converges.
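
The vectorization above is easy to check numerically. The following sketch builds $P$, $\tilde{S}^{1/2}$ and $E$ for a small $p$, verifies identity (31), and checks the column-norm expression for the off-diagonal coordinates (all helper names are ours):

```python
import numpy as np

def psd_sqrt(M):
    """Symmetric square root of a positive semi-definite matrix."""
    w, V = np.linalg.eigh(M)
    return V @ np.diag(np.sqrt(np.clip(w, 0.0, None))) @ V.T

rng = np.random.default_rng(2)
n, p = 40, 4
Y = rng.standard_normal((n, p))
S = Y.T @ Y / n
omega = rng.standard_normal((p, p))
omega = (omega + omega.T) / 2

pairs = [(k, l) for l in range(p) for k in range(l + 1)]  # x stacks w_kl, k <= l
x = np.array([omega[k, l] for k, l in pairs])

# P sends x to y: column i of Omega with w_ii moved to the bottom
P = np.zeros((p * p, len(pairs)))
E = np.zeros((p * p, len(pairs)))
for i in range(p):
    order = [k for k in range(p) if k != i] + [i]
    for r, k in enumerate(order):
        P[i * p + r, pairs.index((min(i, k), max(i, k)))] = 1.0
    Si_sqrt = psd_sqrt(0.5 * S[np.ix_(order, order)])     # (S^i)^{1/2}
    E[i * p:(i + 1) * p] = Si_sqrt @ P[i * p:(i + 1) * p]

# identity (31)
lhs = 0.5 * sum(omega[:, i] @ S @ omega[:, i] for i in range(p))
assert np.isclose(lhs, (E @ x) @ (E @ x))

# ||E_.m||^2 = (S_kk + S_ll)/2 for the off-diagonal coordinates x_m = w_kl, k < l
for m, (k, l) in enumerate(pairs):
    if k < l:
        assert np.isclose((E[:, m] ** 2).sum(), 0.5 * (S[k, k] + S[l, l]))
```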

I Application to breast cancer data

Gene Symbol [CONCORD SYMLASSO SPACE1 SPACE2] Reference
HNF3A (FOXA1) + + + + Koboldt and Others (2012), Albergaria et al. (2009), Davidson et al. (2011), Lacroix and Leclercq (2004), Robinson et al. (2011)
TONDU + + + +
FZD9 + + + + Katoh (2008), Rønneberg et al. (2011)
KIAA0481 + + + + [Gene record discontinued]
KRT16 + + + Glinsky et al. (2005), Joosse et al. (2012), Pellegrino et al. (1988)
KNSL6 (KIF2C) + + Eschenbrenner et al. (2011), Shimo et al. (2007, 2008)
FOXC1 + + + + Du et al. (2012), Sizemore and Keri (2012), Wang et al. (2012), Ray et al. (2011), Tkocz et al. (2012)
PSA + + + Kraus et al. (2010), Mohajeri et al. (2011), Sauter et al. (2004), Yang et al. (2002)
GATA3 + + + + Koboldt and Others (2012), Davidson et al. (2011), Albergaria et al. (2009), Eeckhoute et al. (2007), Jiang et al. (2010), Licata et al. (2010), Yan et al. (2010)
C20ORF1 (TPX2) + Maxwell et al. (2011), Bibby et al. (2009)
E48 + + +
ESR1 + Zheng et al. (2012)

Table 6: Summary of the top hub genes identified by each of the four methods, CONCORD, SYMLASSO, SPACE1 & SPACE2. Genes indicated by '+' denote the 10 most highly connected genes for each of the methods. References are provided at the end of this supplemental section.


J Application to portfolio optimization

J.1 Constituents of Dow Jones Industrial Average

Symbol  Description                                    Return (%)  Risk (%)  SR
AA      Alcoa Inc.                                     9.593       41.970    0.109
AXP     American Express Company                       18.706      38.913    0.352
BA      The Boeing Company                             13.417      32.685    0.258
BAC     Bank of America Corporation                    13.182      48.588    0.168
CAT     Caterpillar Inc.                               19.042      35.050    0.401
CSCO    Cisco Systems, Inc.                            22.650      44.565    0.396
CVX     Chevron Corporation                            15.486      26.716    0.392
DD      E. I. du Pont de Nemours and Company           10.591      30.537    0.183
DIS     The Walt Disney Company                        12.312      32.800    0.223
GE      General Electric Company                       12.449      31.667    0.235
HD      The Home Depot, Inc.                           17.266      34.422    0.356
HPQ     Hewlett-Packard Company                        10.769      40.727    0.142
IBM     International Business Machines Corporation    18.715      29.944    0.458
INTC    Intel Corporation                              18.325      41.543    0.321
JNJ     Johnson & Johnson                              13.664      22.087    0.392
JPM     JPMorgan Chase & Co.                           18.292      42.729    0.311
KO      The Coca-Cola Company                          10.617      24.092    0.233
MCD     McDonald's Corp.                               14.457      26.114    0.362
MMM     3M Company                                     12.596      25.353    0.300
MRK     Merck & Co. Inc.                               12.385      29.616    0.249
MSFT    Microsoft Corporation                          18.612      33.904    0.401
PFE     Pfizer Inc.                                    14.376      29.060    0.323
PG      Procter & Gamble Co.                           13.262      24.241    0.341
T       AT&T, Inc.                                     11.231      28.781    0.217
TRV     The Travelers Companies, Inc.                  14.726      31.706    0.307
UTX     United Technologies Corp.                      18.618      28.760    0.474
VZ      Verizon Communications Inc.                    11.403      27.728    0.231
WMT     Wal-Mart Stores Inc.                           15.495      27.955    0.375
XOM     Exxon Mobil Corporation                        15.466      25.764    0.406

Table 7: Dow Jones Industrial Average component stocks and their respective realized returns, realized risk and Sharpe ratios. The risk-free rate is set at 5%.


J.2 Investment periods

k Date Range k Date Range k Date Range k Date Range1 95/02/18-95/03/17 59 99/07/31-99/08/27 117 04/01/10-04/02/06 175 08/06/21-08/07/182 95/03/18-95/04/14 60 99/08/28-99/09/24 118 04/02/07-04/03/05 176 08/07/19-08/08/153 95/04/15-95/05/12 61 99/09/25-99/10/22 119 04/03/06-04/04/02 177 08/08/16-08/09/124 95/05/13-95/06/09 62 99/10/23-99/11/19 120 04/04/03-04/04/30 178 08/09/13-08/10/105 95/06/10-95/07/07 63 99/11/20-99/12/17 121 04/05/01-04/05/28 179 08/10/11-08/11/076 95/07/08-95/08/04 64 99/12/18-00/01/14 122 04/05/29-04/06/25 180 08/11/08-08/12/057 95/08/05-95/09/01 65 00/01/15-00/02/11 123 04/06/26-04/07/23 181 08/12/06-09/01/028 95/09/02-95/09/29 66 00/02/12-00/03/10 124 04/07/24-04/08/20 182 09/01/03-09/01/309 95/09/30-95/10/27 67 00/03/11-00/04/07 125 04/08/21-04/09/17 183 09/01/31-09/02/27

10 95/10/28-95/11/24 68 00/04/08-00/05/05 126 04/09/18-04/10/15 184 09/02/28-09/03/2711 95/11/25-95/12/22 69 00/05/06-00/06/02 127 04/10/16-04/11/12 185 09/03/28-09/04/2412 95/12/23-96/01/19 70 00/06/03-00/06/30 128 04/11/13-04/12/10 186 09/04/25-09/05/2213 96/01/20-96/02/16 71 00/07/01-00/07/28 129 04/12/11-05/01/07 187 09/05/23-09/06/1914 96/02/17-96/03/15 72 00/07/29-00/08/25 130 05/01/08-05/02/04 188 09/06/20-09/07/1715 96/03/16-96/04/12 73 00/08/26-00/09/22 131 05/02/05-05/03/04 189 09/07/18-09/08/1416 96/04/13-96/05/10 74 00/09/23-00/10/20 132 05/03/05-05/04/01 190 09/08/15-09/09/1117 96/05/11-96/06/07 75 00/10/21-00/11/17 133 05/04/02-05/04/29 191 09/09/12-09/10/0918 96/06/08-96/07/05 76 00/11/18-00/12/15 134 05/04/30-05/05/27 192 09/10/10-09/11/0619 96/07/06-96/08/02 77 00/12/16-01/01/12 135 05/05/28-05/06/24 193 09/11/07-09/12/0420 96/08/03-96/08/30 78 01/01/13-01/02/09 136 05/06/25-05/07/22 194 09/12/05-10/01/0121 96/08/31-96/09/27 79 01/02/10-01/03/09 137 05/07/23-05/08/19 195 10/01/02-10/01/2922 96/09/28-96/10/25 80 01/03/10-01/04/06 138 05/08/20-05/09/16 196 10/01/30-10/02/2623 96/10/26-96/11/22 81 01/04/07-01/05/04 139 05/09/17-05/10/14 197 10/02/27-10/03/2624 96/11/23-96/12/20 82 01/05/05-01/06/01 140 05/10/15-05/11/11 198 10/03/27-10/04/2325 96/12/21-97/01/17 83 01/06/02-01/06/29 141 05/11/12-05/12/09 199 10/04/24-10/05/2126 97/01/18-97/02/14 84 01/06/30-01/07/27 142 05/12/10-06/01/06 200 10/05/22-10/06/1827 97/02/15-97/03/14 85 01/07/28-01/08/24 143 06/01/07-06/02/03 201 10/06/19-10/07/1628 97/03/15-97/04/11 86 01/08/25-01/09/21 144 06/02/04-06/03/03 202 10/07/17-10/08/1329 97/04/12-97/05/09 87 01/09/22-01/10/19 145 06/03/04-06/03/31 203 10/08/14-10/09/1030 97/05/10-97/06/06 88 01/10/20-01/11/16 146 06/04/01-06/04/28 204 10/09/11-10/10/0831 97/06/07-97/07/04 89 01/11/17-01/12/14 147 06/04/29-06/05/26 205 10/10/09-10/11/0532 97/07/05-97/08/01 90 01/12/15-02/01/11 148 06/05/27-06/06/23 206 10/11/06-10/12/0333 97/08/02-97/08/29 91 02/01/12-02/02/08 149 06/06/24-06/07/21 207 10/12/04-10/12/3134 97/08/30-97/09/26 92 02/02/09-02/03/08 150 06/07/22-06/08/18 208 11/01/01-11/01/2835 97/09/27-97/10/24 93 02/03/09-02/04/05 151 06/08/19-06/09/15 209 11/01/29-11/02/2536 97/10/25-97/11/21 94 02/04/06-02/05/03 152 06/09/16-06/10/13 210 11/02/26-11/03/2537 97/11/22-97/12/19 95 02/05/04-02/05/31 153 06/10/14-06/11/10 211 11/03/26-11/04/2238 97/12/20-98/01/16 96 02/06/01-02/06/28 154 06/11/11-06/12/08 212 11/04/23-11/05/2039 98/01/17-98/02/13 97 02/06/29-02/07/26 155 06/12/09-07/01/05 213 11/05/21-11/06/1740 98/02/14-98/03/13 98 02/07/27-02/08/23 156 07/01/06-07/02/02 214 11/06/18-11/07/1541 98/03/14-98/04/10 99 02/08/24-02/09/20 157 07/02/03-07/03/02 215 11/07/16-11/08/1242 98/04/11-98/05/08 100 02/09/21-02/10/18 158 07/03/03-07/03/30 216 11/08/13-11/09/0943 98/05/09-98/06/05 101 02/10/19-02/11/15 159 07/03/31-07/04/27 217 11/09/10-11/10/0744 98/06/06-98/07/03 102 02/11/16-02/12/13 160 07/04/28-07/05/25 218 11/10/08-11/11/0445 98/07/04-98/07/31 103 02/12/14-03/01/10 161 07/05/26-07/06/22 219 11/11/05-11/12/0246 98/08/01-98/08/28 104 03/01/11-03/02/07 162 07/06/23-07/07/20 220 11/12/03-11/12/3047 98/08/29-98/09/25 105 03/02/08-03/03/07 163 07/07/21-07/08/17 221 11/12/31-12/01/2748 98/09/26-98/10/23 106 03/03/08-03/04/04 164 07/08/18-07/09/14 222 12/01/28-12/02/2449 98/10/24-98/11/20 107 03/04/05-03/05/02 165 07/09/15-07/10/12 223 12/02/25-12/03/2350 98/11/21-98/12/18 108 03/05/03-03/05/30 166 07/10/13-07/11/09 224 12/03/24-12/04/2051 98/12/19-99/01/15 109 03/05/31-03/06/27 167 
07/11/10-07/12/07 225 12/04/21-12/05/1852 99/01/16-99/02/12 110 03/06/28-03/07/25 168 07/12/08-08/01/04 226 12/05/19-12/06/1553 99/02/13-99/03/12 111 03/07/26-03/08/22 169 08/01/05-08/02/01 227 12/06/16-12/07/1354 99/03/13-99/04/09 112 03/08/23-03/09/19 170 08/02/02-08/02/29 228 12/07/14-12/08/1055 99/04/10-99/05/07 113 03/09/20-03/10/17 171 08/03/01-08/03/28 229 12/08/11-12/09/0756 99/05/08-99/06/04 114 03/10/18-03/11/14 172 08/03/29-08/04/25 230 12/09/08-12/10/0557 99/06/05-99/07/02 115 03/11/15-03/12/12 173 08/04/26-08/05/23 231 12/10/06-12/10/2658 99/07/03-99/07/30 116 03/12/13-04/01/09 174 08/05/24-08/06/20

Table 8: Investment periods in YY/MM/DD format


J.3 Details of minimum variance portfolio rebalancing

The investment period during which a set of portfolio weights are held constant is also

referred to as the “holding period”. The number of trading days in the k-th investment

period, Lk, may vary if rebalancing time points are chosen to coincide with either calendar

months, weeks or fiscal quarters. Let t index the number of an arbitrary day during the

entire investment horizon. The number of trading days Tj in the first j investment periods

is given by

$$T_j = \sum_{k=1}^{j} L_k, \qquad (34)$$

where $j = 1, 2, \ldots, K$, with $T_0 = 0$. We consider holding $N_{est}$ constant for all investment periods $k = 1, 2, \ldots$. For convenience, denote by $k_t$ the investment period that trading day $t$ belongs to, i.e., $k_t = k(t) := \{k : t \in [T_{k-1}, T_k]\}$.

The algorithm for the minimum variance portfolio rebalancing (MVR) strategy can now be described as follows. At the beginning of time period $k$, that is, after $T_{k-1}$ days, compute an estimate of the covariance matrix $\hat{\Sigma}_k$ for period $k$ from the $N_{est}$ past returns $\{r_t : t \in [T_{k-1} - N_{est} + 1, T_{k-1}]\}$. Then, compute a new set of portfolio weights $w_k = (\mathbf{1}^T \hat{\Sigma}_k^{-1} \mathbf{1})^{-1} \hat{\Sigma}_k^{-1} \mathbf{1}$, and hold this portfolio constant until the $T_k$-th trading day. The process is then repeated for the next holding period.
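
A minimal sketch of this rebalancing loop, with estimate_sigma standing in for any of the covariance estimators compared in Section 5.3.2 (the function name mvr_backtest is ours):

```python
import numpy as np

def mvr_backtest(returns, period_ends, n_est, estimate_sigma):
    """`returns` is a T x p array of daily returns, `period_ends` the
    cumulative trading-day counts T_1, ..., T_K of (34), and
    `estimate_sigma` any covariance estimator."""
    weights, t_prev = [], 0
    for t_k in period_ends:
        window = returns[max(t_prev - n_est, 0):t_prev]   # N_est past returns
        if len(window) == n_est:                          # enough history accrued
            sigma = estimate_sigma(window)
            ones = np.ones(sigma.shape[0])
            x = np.linalg.solve(sigma, ones)              # Sigma_k^{-1} 1
            weights.append((t_prev, t_k, x / (ones @ x))) # hold w_k on (T_{k-1}, T_k]
        t_prev = t_k
    return weights
```

For instance, estimate_sigma=lambda R: np.cov(R, rowvar=False) recovers the sample-covariance strategy; plugging in a regularized estimator gives the corresponding MVR strategy.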

J.4 Details of cross-validation

Consider the matrix of returns R for all the stocks in the portfolio in the estimation horizon preceding the start of the k-th investment period:

$$R = ((r_{ti})), \quad \text{where } i \in \{1, \ldots, p\},\ t \in \{T_{k-1} - N_{est} + 1, \ldots, T_{k-1}\}.$$

Hence, R is an $N_{est}$-by-$p$ matrix, and the column vector $R_j$ is an $N_{est}$-vector of returns for the $j$-th stock.

Now denote by $\hat{\Omega}(\lambda) = ((\hat{\omega}_{ij}(\lambda)))_{1 \le i,j \le p}$ an estimate of Ω obtained by ℓ1-regularization methods such as Glasso or CONCORD. The use of λ makes explicit the dependence of these estimation methods on the penalty parameter λ. The data over the estimation horizon are divided into M folds. The penalty parameter is chosen so as to minimize the out-of-sample predictive risk (PR) given by

$$PR(\lambda) = \sum_{m=1}^{M} \frac{1}{N_m} \sum_{i=1}^{p} \left\| R_i^{(m)} - \sum_{j \ne i} \hat{\beta}_{ij}^{(\setminus m)}(\lambda)\, R_j^{(m)} \right\|_2^2,$$

where $R_i^{(m)}$ is the vector of returns for stock $i$ in fold $m$, and $N_m$ is the number of observations in the $m$-th fold. The regression coefficient $\hat{\beta}_{ij}^{(\setminus m)}(\lambda)$ is determined as follows: $\hat{\beta}_{ij}^{(\setminus m)}(\lambda) = -\hat{\omega}_{ij}^{(\setminus m)}(\lambda) / \hat{\omega}_{ii}^{(\setminus m)}(\lambda)$, with $\hat{\Omega}^{(\setminus m)}(\lambda)$ based on using all the available data within a given estimation horizon except for fold $m$. The optimal choice of penalty parameter $\lambda^*$ is then determined as follows:

$$\lambda^* = \arg\inf_{\lambda \ge 0} PR(\lambda).$$

PR(λ).

J.5 Performance metrics

For comparison purposes with (Won et al., 2012), we use the following quantities to assess

the performance of the five MVR strategies. The formulas for these metrics are given below.

• Realized return: The average return of the portfolio over the entire investment horizon.

rp =1

T

T∑t=1

r′twkt

• Realized risk : The risk (standard error) of the portfolio over the entire investment

horizon.

σp =

[1

T

T∑t=1

(r′twkt − rp)2

]1/2

• Realized Sharpe ratio (SR): The realized excess return of the portfolio over the risk-free

rate per unit realized risk for the entire investment horizon.

SR =rp − rfσp

(35)

• Turnover : The amount of new portfolio assets purchased or sold over each trading

period. The turnover for the k-th investment period when the portfolio weights wk are

12

Page 47: A convex pseudo-likelihood framework for high dimensional ...A convex pseudo-likelihood framework for high dimensional partial correlation estimation with convergence guarantees Kshitij

held constant is given by

TO(k) =

p∑i=1

∣∣∣∣∣∣wik − Tk−1+Lk∏t=Tk−1+1

(1 + rit)

wi(k−1)

∣∣∣∣∣∣ (36)

with wi0 = 0 for all i = 1, . . . , p.

• Size of the short side The proportion of the negative weights to the sum of the absolute

weights of each portfolio. The short side for the k-th investment period is given by

SS(k) =

∑pi=1 |min(wik, 0)|∑p

i=1 |wik|

The average and standard error of the short sides over the all investment periods is

SS =1

K

K∑k=1

SS(k), σSS =

[1

K

K∑k=1

(SS(k)− SS)2

]1/2

• Normalized wealth growth: Accumulated wealth derived from the portfolio over the

trading period when the initial budget is normalized to one. Note that both transaction

costs and borrowing costs are taken into account. Let W (t − 1) denote the wealth of

the portfolio after the (t − 1)-th trading day. Then, the wealth of the portfolio after

the t-th trading day is given by

W (t) =

W (t− 1) (1 + r′twkt − TC(kt)−BC(kt)) , t = Tkt−1 + 1

W (t− 1) (1 + r′twkt) , t 6= Tkt−1 + 1,

where TC(k) and BC(k) are transaction costs (of trading stocks) and borrowing costs

(of capital for taking short positions on stocks), respectively. On the first day of each

trading period, we adjust the return for these trading costs. Denote the transaction

cost rate by rc, then the transaction cost incurred at the beginning of period k is given

by

TC(k) = rc · TO(k). (37)

The borrowing cost rate, BC(k), depends on the short side of the portfolio weights

during the (k − 1)-th period. Denote the borrowing daily percentage by rb, then the

13

Page 48: A convex pseudo-likelihood framework for high dimensional ...A convex pseudo-likelihood framework for high dimensional partial correlation estimation with convergence guarantees Kshitij

Nest Sample Glasso CONCORD CondReg LedoitWolf DJIA35 17.08 (33.86) 13.10 (16.57) 13.29 (17.04) 13.62 (17.74) 12.33 (15.58) 8.51 (18.96)40 16.66 (26.52) 13.13 (16.57) 13.34 (17.02) 13.39 (17.74) 11.78 (15.46) 8.51 (18.96)45 11.13 (23.19) 12.74 (16.52) 13.05 (17.04) 13.05 (17.77) 10.99 (15.43) 8.51 (18.96)50 9.90 (20.95) 12.89 (16.39) 13.21 (17.04) 13.08 (17.65) 11.25 (15.36) 8.51 (18.96)75 11.61 (17.45) 11.28 (15.57) 13.10 (17.04) 12.77 (17.15) 10.56 (15.10) 8.51 (18.96)

150 9.40 (15.41) 10.28 (14.97) 13.20 (17.08) 12.76 (16.30) 10.63 (14.66) 8.51 (18.96)225 10.49 (14.98) 10.38 (14.89) 13.58 (17.10) 12.92 (16.04) 11.04 (14.52) 8.51 (18.96)300 10.41 (14.95) 10.37 (14.95) 13.66 (17.16) 12.85 (16.07) 10.94 (14.52) 8.51 (18.96)

Table 9: Realized returns of different investment strategies corresponding to different esti-mators with various Nest (realized risks are given in parentheses). The maximum annualizedreturns and risks are highlighted in bold.

Nest Sample Glasso CONCORD CondReg LedoitWolf35 8.42 (3.19) 0.45 (0.12) 0.38 (0.10) 0.39 (0.27) 1.40 (0.38)40 5.81 (2.28) 0.41 (0.12) 0.34 (0.10) 0.37 (0.26) 1.29 (0.36)45 4.58 (1.65) 0.39 (0.12) 0.31 (0.10) 0.36 (0.23) 1.20 (0.35)50 3.74 (1.19) 0.39 (0.13) 0.28 (0.09) 0.36 (0.25) 1.11 (0.33)75 2.03 (0.67) 0.50 (0.19) 0.21 (0.08) 0.43 (0.29) 0.86 (0.29)

150 0.87 (0.32) 0.73 (0.27) 0.14 (0.07) 0.40 (0.22) 0.54 (0.23)225 0.57 (0.24) 0.56 (0.22) 0.11 (0.07) 0.31 (0.13) 0.41 (0.18)300 0.44 (0.21) 0.44 (0.23) 0.09 (0.07) 0.24 (0.11) 0.33 (0.17)

Table 10: Average turnovers for various estimation horizons, Nest (standard errors are givenin parentheses). The minimum average and standard error values for each row are highlightedin bold.

borrowing cost rate is given by

BC(k) = ((1 + rb)Lk−1 − 1)

p∑i=1

|min(wi(k−1), 0)|. (38)
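
The following sketch computes the turnover (36) and the associated costs (37)-(38). The function names are ours, and the default rates are illustrative: rc = 0.5% per transaction as in Table 12, and the 7% APR borrowing rate converted to a daily rate under an assumed 252 trading days.

```python
import numpy as np

def turnover(w_new, w_old, prev_period_returns):
    """Equation (36): turnover at the start of a period; `prev_period_returns`
    is an L x p array of daily returns over the previous holding period."""
    drift = np.prod(1.0 + prev_period_returns, axis=0)   # growth of each position
    return np.abs(w_new - drift * w_old).sum()

def trading_costs(w_new, w_old, prev_period_returns, rc=0.005, rb=0.07 / 252):
    """Equations (37)-(38): transaction and borrowing costs for the period."""
    tc = rc * turnover(w_new, w_old, prev_period_returns)
    L = len(prev_period_returns)
    bc = ((1.0 + rb) ** L - 1.0) * np.abs(np.minimum(w_old, 0.0)).sum()
    return tc, bc
```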

K Proof of Theorem 2

The result follows by noting the following straightforward facts:

1. The existence of a minimizer follows from the convexity of $Q_{con}$.

2. By assumptions (A0) and (A1), for any $\eta > 0$, $\{\hat{\alpha}_{n,ii}\}_{1 \le i \le p_n}$ are uniformly bounded away from zero and infinity with probability larger than $1 - O(n^{-\eta})$.

3. When the diagonal entries are fixed at αn,ii1≤i≤pn , then the objective function Qcon

(reparameterized from ωo to θ) is same as the objective function of SPACE with

14

Page 49: A convex pseudo-likelihood framework for high dimensional ...A convex pseudo-likelihood framework for high dimensional partial correlation estimation with convergence guarantees Kshitij

Nest Sample Glasso CONCORD CondReg LedoitWolf35 41.13 (3.18) 0.66 (0.84) 0.05 (0.14) 1.75 (5.00) 20.50 (6.64)40 38.64 (3.47) 0.64 (0.75) 0.05 (0.14) 1.78 (5.04) 20.45 (6.63)45 36.89 (4.26) 0.90 (0.85) 0.05 (0.14) 1.84 (4.95) 20.31 (6.61)50 35.46 (4.38) 1.35 (1.19) 0.04 (0.11) 2.17 (5.44) 20.33 (6.66)75 30.89 (5.37) 8.67 (3.76) 0.04 (0.11) 4.91 (7.38) 20.13 (6.83)

150 25.65 (6.25) 23.48 (4.68) 0.02 (0.07) 9.07 (6.31) 19.60 (6.82)225 23.68 (6.69) 23.36 (6.27) 0.01 (0.05) 10.71 (3.22) 19.26 (6.91)300 22.45 (6.90) 22.42 (6.87) 0.00 (0.02) 9.95 (2.93) 18.85 (7.10)

Table 11: Average short sides for various estimation horizons, Nest (standard errors are givenin parentheses). The minimum average and standard error values for each row are highlightedin bold.

Nest   Sample             Glasso           CONCORD         CondReg          LedoitWolf
35     567.958 (214.05)   22.635 (5.62)    18.642 (4.53)   20.757 (17.46)   91.316 (25.19)
40     394.508 (149.90)   20.660 (5.70)    16.858 (4.40)   20.013 (16.78)   85.661 (24.16)
45     315.340 (108.87)   19.899 (5.80)    15.470 (4.22)   19.419 (15.27)   80.524 (23.39)
50     260.887 (81.13)    20.146 (6.39)    14.081 (4.06)   19.695 (16.04)   76.154 (22.43)
75     150.242 (45.87)    30.942 (10.92)   10.516 (3.17)   25.191 (19.19)   63.481 (20.94)
150     75.700 (27.88)    61.495 (18.40)    6.596 (2.24)   26.788 (12.83)   46.680 (17.78)
225     56.242 (22.09)    54.117 (18.82)    5.155 (1.80)   22.973 (6.08)    39.441 (15.72)
300     46.904 (20.09)    47.118 (20.72)    4.404 (1.67)   18.823 (5.16)    35.065 (14.89)

Table 12: Average trading costs in basis points for various estimation horizons, Nest (standard errors are given in parentheses). Borrowing rate is taken to be 7% APR and transaction cost rate is taken to be 0.5% of principal for each transaction. The minimum transaction cost for each row is highlighted in bold.


Figure 3: Normalized wealth growth after adjusting for transaction costs (0.5% of principal) and borrowing costs (interest rate of 7% APR) with Nest = 75. (Line plot over 1995-2010 comparing Concord, CondReg, glasso, LedoitWolf, Sample and the DJIA.)

weights w_i = α²_{n,ii} (which are uniformly bounded), except that the penalty term is now ∑_{1≤i<j≤p_n} λ_n √(α_{n,ii} α_{n,jj}) |θ_{ij}| instead of ∑_{1≤i<j≤p_n} λ_n |θ_{ij}| as in Qspc.

4. Since θ_{n,ij} = ω_{n,ij} / √(α_{n,ii} α_{n,jj}), using the uniform boundedness of α_{n,ii}, 1 ≤ i ≤ p_n, there exists a constant C_1 such that for any η > 0,

\|\hat{\omega}^o_n - \omega^o_n\|_2 \le C_1 \|\hat{\theta}^o_n - \theta^o_n\|_2

holds with probability larger than 1 − O(n^{−η}).

5. For 1 ≤ i < j ≤ p_n, sign(ω_{n,ij}) = sign(θ_{n,ij}), since the two differ by a positive multiplicative constant.

6. When the penalty term in SPACE is replaced by ∑_{1≤i<j≤p_n} λ_n √(α_{n,ii} α_{n,jj}) |θ_{ij}|, the uniform boundedness of α_{n,ii}, 1 ≤ i ≤ p_n, implies that Theorems 1, 2 and 3 of Peng et al. (2009) hold with trivial modifications at appropriate places. The result now follows immediately from these theorems combined with the above assertions.
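As an aside, the rescaling used in items 4 and 5 above is easy to visualize numerically. The sketch below (illustrative Python/numpy with our own names; for simplicity we feed in a vector of diagonal values playing the role of α) implements the map θ_ij = ω_ij / √(α_ii α_jj) and checks that it preserves the signs of the off-diagonal entries.

import numpy as np

def omega_to_theta(Omega, alpha):
    # Entrywise rescaling: theta_ij = omega_ij / sqrt(alpha_ii * alpha_jj)
    return Omega / np.sqrt(np.outer(alpha, alpha))

Omega = np.array([[ 2.0, -0.3,  0.0],
                  [-0.3,  1.5,  0.4],
                  [ 0.0,  0.4,  1.0]])
alpha = np.diag(Omega)  # for illustration, take alpha_ii from the diagonal
Theta = omega_to_theta(Omega, alpha)
# The positive rescaling leaves the sign pattern unchanged (item 5)
assert np.array_equal(np.sign(Theta), np.sign(Omega))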

Remark: Note that Theorem 2 on the consistency of CONCORD has been formulated so as to exactly parallel the result given for SPACE by Peng et al. (2009). An accurate estimator


Figure 4: Turnover in percentage points. (Eight panels, (a) through (h), plot turnover over 1995-2010 for Nest = 35, 40, 45, 50, 75, 150, 225 and 300, each comparing Concord, CondReg, glasso, LedoitWolf and Sample.)


Figure 5: Trading costs in basis points for each trading period. Borrowing rate is taken to be 7% APR and transaction cost rate is taken to be 0.5% of principal. The y-axes are log-scaled. (Eight panels, (a) through (h), plot trading costs over 1995-2010 for Nest = 35, 40, 45, 50, 75, 150, 225 and 300, each comparing Concord, CondReg, glasso, LedoitWolf and Sample.)


of ω_ii when p_n > n can be obtained by using the inverse of the sample conditional variance of each variable. In practice, however, one can simply use the diagonal estimates given by CONCORD, and there is no need for recourse to external estimates. Note also that CONCORD estimates always exist and are computable regardless of the sample size, even when p_n > n. This property follows directly from the convergence of the CONCORD algorithm.

L Joint convexity of the SYMLASSO in the Ω parameterization

We will show that the SYMLASSO objective function in (7) is jointly convex if we reparameterize in terms of Ω (see also Lee and Hastie (2014)). However, the SYMLASSO objective function is not in general strictly convex if n < p, and hence the convergence of the coordinatewise descent algorithm is not guaranteed. It follows from the proof of Lemma 7 that the SYMLASSO objective function (in terms of Ω) is given by

Q_{sym}(\Omega) = \frac{n}{2} \left[ -\log |\Omega_D| + \mathrm{tr}(S \Omega \Omega_D^{-1} \Omega) \right] + \lambda \sum_{1 \le i < j \le p} |\omega_{ij}|
              = \frac{n}{2} \sum_{i=1}^{p} \left[ -\log \omega_{ii} + \frac{1}{\omega_{ii}} \, \omega_{i\cdot}^T S \omega_{i\cdot} \right] + \lambda \sum_{1 \le i < j \le p} |\omega_{ij}|.
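For concreteness, the second expression above can be evaluated directly; here is a minimal numerical sketch (Python/numpy, with our own function and variable names, purely for illustration) of Q_sym(Ω).

import numpy as np

def q_sym(Omega, S, n, lam):
    # (n/2) * sum_i [ -log(omega_ii) + (1/omega_ii) * omega_i' S omega_i ]
    #   + lam * sum_{i<j} |omega_ij|
    p = Omega.shape[0]
    diag = np.diag(Omega)
    quad = np.einsum('ij,jk,ik->i', Omega, S, Omega)  # omega_i' S omega_i for each i
    smooth = 0.5 * n * np.sum(-np.log(diag) + quad / diag)
    penalty = lam * np.abs(Omega[np.triu_indices(p, k=1)]).sum()
    return smooth + penalty

# Tiny example with a random sample covariance and Omega = I
rng = np.random.default_rng(0)
X = rng.standard_normal((10, 3))
S = X.T @ X / 10
print(q_sym(np.eye(3), S, n=10, lam=0.2))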

To prove the convexity of Q_sym(Ω), we first prove the following lemma.

Lemma 8. Consider the function f on ℝ_+ × ℝ^k defined by f(a) = (a^T A a)/a_1. If A is positive semi-definite, then f is a convex function.

Proof: It follows by straightforward manipulations that

f(a) = A_{11} a_1 + 2 \sum_{j=2}^{k+1} A_{1j} a_j + \frac{a_{-1}^T A_{-1} a_{-1}}{a_1}, \qquad (39)

where a_{-1} := (a_j)_{j=2}^{k+1} and A_{-1} is the principal submatrix of A obtained by excluding the first row and the first column. Since the first two terms above are clearly convex functions of a, it suffices to prove that the third term (a_{-1}^T A_{-1} a_{-1})/a_1 is a convex function of a. Again, by straightforward manipulations, it follows that the Hessian matrix of this term is given by

straightforward manipulations, it follows that the Hessian matrix of this term is given by

H =2

a31

(aT−1A−1a−1 −(a1A−1a−1)T

−a1A−1a−1 a21A−1

).


Hence, for any b ∈ ℝ^{k+1} (with b_{-1} := (b_j)_{j=2}^{k+1}), it follows that

b^T H b = \frac{2}{a_1^3} \left( b_1^2 \, a_{-1}^T A_{-1} a_{-1} - 2 b_1 a_1 \, b_{-1}^T A_{-1} a_{-1} + a_1^2 \, b_{-1}^T A_{-1} b_{-1} \right). \qquad (40)

Since A_{-1} is positive semi-definite, it follows that if b_{-1}^T A_{-1} b_{-1} = 0, then A_{-1} b_{-1} = 0. In this case,

b^T H b = \frac{2}{a_1^3} \, b_1^2 \, a_{-1}^T A_{-1} a_{-1} \ge 0.

If b_{-1}^T A_{-1} b_{-1} > 0, then it follows from (40) that

b^T H b = \frac{2 b_1^2}{a_1^3} \left( a_{-1}^T A_{-1} a_{-1} - \frac{(b_{-1}^T A_{-1} a_{-1})^2}{b_{-1}^T A_{-1} b_{-1}} \right) + \frac{2}{a_1^3} \left( a_1 \sqrt{b_{-1}^T A_{-1} b_{-1}} - b_1 \frac{b_{-1}^T A_{-1} a_{-1}}{\sqrt{b_{-1}^T A_{-1} b_{-1}}} \right)^2 \ge 0.

The last statement follows by noting that (a_{-1}^T A_{-1} a_{-1})(b_{-1}^T A_{-1} b_{-1}) \ge (b_{-1}^T A_{-1} a_{-1})^2 (using the positive semi-definiteness of A_{-1} and the Cauchy-Schwarz inequality). Hence H is a positive semi-definite matrix, which combined with (39) implies that f is a convex function.

It follows from the above lemma that (1/ω_{ii}) ω_{i·}^T S ω_{i·} is a convex function of ω_{i·} (and hence of Ω) for every 1 ≤ i ≤ p. Since −log x and |x| are convex functions, it follows that Q_sym(Ω) is a convex function.
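As a quick numerical sanity check of Lemma 8 (purely illustrative and not part of the argument; Python/numpy, with names of our own choosing), one can sample a random positive semi-definite A and verify the convexity inequality on random pairs of points with positive first coordinate.

import numpy as np

rng = np.random.default_rng(1)
k = 4
B = rng.standard_normal((k + 1, k + 1))
A = B.T @ B  # random (k+1) x (k+1) positive semi-definite matrix

def f(a):
    # Quadratic-over-linear function of Lemma 8: f(a) = (a' A a) / a_1
    return (a @ A @ a) / a[0]

for _ in range(10000):
    x = rng.standard_normal(k + 1); x[0] = abs(x[0]) + 0.1
    y = rng.standard_normal(k + 1); y[0] = abs(y[0]) + 0.1
    t = rng.uniform()
    # Convexity: f(t x + (1 - t) y) <= t f(x) + (1 - t) f(y), up to rounding
    assert f(t * x + (1 - t) * y) <= t * f(x) + (1 - t) * f(y) + 1e-8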

M Examples where the Incoherence condition (A3) is satisfied

We now present two lemmas which outline settings where the Incoherence condition (A3) is satisfied. The first lemma shows that (A3) is satisfied if the true correlations are sufficiently small. This lemma can be regarded as a parallel result to (Zhao and Yu, 2006, Corollary 2), which shows that the irrepresentable condition for lasso regression is satisfied if the entries of \frac{1}{n} X_n^T X_n (X_n being the regression design matrix) are bounded by \frac{c}{2q_n - 1} for some 0 ≤ c < 1.

Lemma 9. Let

d_n := \max_{1 \le i \le p_n} |\{ j : \omega_{n,ij} \neq 0 \}|.


The incoherence condition (A3) is satisfied if

\frac{|\Sigma_{n,ij}|}{\sqrt{\Sigma_{n,ii} \Sigma_{n,jj}}} \le \frac{\sqrt{2}\, \delta\, \lambda_{\min}}{\sqrt{q_n d_n}\, \lambda_{\max}},

for every n ≥ 1 and 1 ≤ i ≠ j ≤ p_n.

Proof: It can be shown by straightforward algebraic manipulations that

L''_{A_n, A_n}(\Omega_n) = U_n^T V_n U_n,

where V_n is a p_n-block diagonal matrix with the ith diagonal block given by Σ_n without the ith row and column, and U_n is an appropriate p_n(p_n − 1) × q_n matrix with 0 and 1 entries and mutually orthogonal columns. Each column of U_n has exactly two 1's. Hence, for any x ∈ ℝ^{q_n}, it follows that x^T U_n^T U_n x = 2 x^T x. Since the eigenvalues of each diagonal block of V_n are bounded below by 1/λ_max, it follows that the smallest eigenvalue of U_n^T V_n U_n is bounded below by 2/λ_max. Consequently, the largest eigenvalue of (U_n^T V_n U_n)^{−1} is bounded above by λ_max/2.

Since the diagonal entries of Σ_n are uniformly bounded above by 1/λ_min, it follows that

|\Sigma_{n,kl}| \le \frac{\sqrt{2}\, \delta}{\sqrt{q_n d_n}\, \lambda_{\max}},

for every n ≥ 1 and 1 ≤ k ≠ l ≤ p_n. Note that for every (i, j) ∉ A_n, L''_{ij, A_n}(\Omega_n) has at most 2 d_n non-zero entries. Hence, we get that

\left\| L''_{ij, A_n}(\Omega_n) \right\| \le \sqrt{2 d_n} \times \frac{\sqrt{2}\, \delta}{\sqrt{q_n d_n}\, \lambda_{\max}} = \frac{2 \delta}{\sqrt{q_n}\, \lambda_{\max}}.

Finally, we note from the discussion above that

\left| L''_{ij, A_n}(\Omega_n) \left[ L''_{A_n, A_n}(\Omega_n) \right]^{-1} \mathrm{sign}(\omega^o_{A_n}) \right| \le \left\| L''_{ij, A_n}(\Omega_n) \right\| \left\| \left[ L''_{A_n, A_n}(\Omega_n) \right]^{-1} \right\| \left\| \mathrm{sign}(\omega^o_{A_n}) \right\| \le \frac{2 \delta}{\sqrt{q_n}\, \lambda_{\max}} \times \frac{\lambda_{\max}}{2} \times \sqrt{q_n} = \delta.

Hence (A3) is satisfied.

The next lemma shows that the Incoherence condition (A3) holds if the true Ω_n's are tridiagonal matrices satisfying some mild conditions. This lemma can be regarded as a parallel result to (Zhao and Yu, 2006, Corollary 3).


Lemma 10. Suppose that Ω_n is a tridiagonal matrix with all diagonal entries equal to 1 and the non-zero off-diagonal entries equal to ρ_n, for every n ≥ 1. If ρ := \sup_n |ρ_n| satisfies

\frac{8 \rho}{(1 - \rho^2)(2 - \rho^4/2)} \le \delta,

then (A3) is satisfied.
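As a quick illustration of this condition (a worked value we add for concreteness, not part of the original statement): taking ρ = 0.2 gives

\frac{8 \times 0.2}{(1 - 0.04)(2 - 0.0008)} = \frac{1.6}{0.96 \times 1.9992} \approx 0.834,

so Lemma 10 guarantees (A3) for any δ ≥ 0.834.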

Proof: Using standard results for the inverse of tridiagonal matrices, it follows that

\Sigma_{n,ij} = \frac{\rho_n^{|i-j|}}{1 - \rho_n^2},

for every 1 ≤ i, j ≤ p_n. Note that A_n = \{(i-1, i) : 2 \le i \le p_n\}, and |A_n| = p_n − 1. Hence, L''_{A_n, A_n}(\Omega_n) is a tridiagonal matrix (with the ith row corresponding to the edge (i, i+1)), with

L''_{i(i+1), i(i+1)}(\Omega_n) = \Sigma_{n,ii} + \Sigma_{n,(i+1)(i+1)} = \frac{2}{1 - \rho_n^2},

for every 1 ≤ i ≤ p_n − 1, and

L''_{i(i+1), (i+1)(i+2)}(\Omega_n) = \Sigma_{n,i(i+2)} = \frac{\rho_n^2}{1 - \rho_n^2},

for every 1 ≤ i ≤ p_n − 2. Again, using standard results for the inverse of tridiagonal matrices, it follows that

\left( L''_{A_n, A_n}(\Omega_n) \right)^{-1}_{i(i+1), j(j+1)} = \frac{(1 - \rho_n^2)(\rho_n^2/2)^{|i-j|}}{2 - \rho_n^4/2},

for every 1 ≤ i, j ≤ p_n − 1. Using the fact that \sum_{i=0}^{\infty} a^i = \frac{1}{1-a} for |a| < 1, we conclude that each entry of \left( L''_{A_n, A_n}(\Omega_n) \right)^{-1} \mathrm{sign}(\omega^o_{A_n}) is bounded above in absolute value by \frac{2}{2 - \rho_n^4/2}.

Moreover, if i < j and (i, j) ∉ A_n, then L''_{ij, A_n}(\Omega_n) has at most four non-zero entries (entries corresponding to the edges (i−1, i), (i, i+1), (j−1, j) and (j, j+1), if applicable). All of these non-zero entries are bounded above in absolute value by \frac{|\rho_n|}{1 - \rho_n^2}. It follows that for every (i, j) ∉ A_n,

\left| L''_{ij, A_n}(\Omega_n) \left[ L''_{A_n, A_n}(\Omega_n) \right]^{-1} \mathrm{sign}(\omega^o_{A_n}) \right| \le \frac{4 |\rho_n|}{1 - \rho_n^2} \times \frac{2}{2 - \rho_n^4/2} = \frac{8 |\rho_n|}{(1 - \rho_n^2)(2 - \rho_n^4/2)} \le \frac{8 \rho}{(1 - \rho^2)(2 - \rho^4/2)} \le \delta.

Hence (A3) is satisfied.

N Non-convergence of SPACE

We provide a simple example where the SPACE algorithm (with uniform weights) does not converge, and the iterates alternate between two matrices. A sample of n = 4 i.i.d. vectors was generated from the N(0, Σ) distribution with Σ as in (5). The standardized data is as follows:

\begin{pmatrix}
 0.659253 & -0.635923 &  0.492419 \\
 0.994414 & -1.015863 &  1.115863 \\
-1.150266 &  1.141668 & -1.135115 \\
-0.503401 &  0.510117 & -0.473166
\end{pmatrix}. \qquad (41)

The SPACE algorithm was implemented with the choice of weights w_i = 1 and λ = 0.2. Again, after the first few iterations, it turns out that successive SPACE iterates alternate between

\begin{pmatrix}
 1.432570 &    1.416740 &  -2.132500 \\
 1.416740 & 3552.598070 &   0.000000 \\
-2.132500 &    0.000000 &  89.163310
\end{pmatrix}
\quad \text{and} \quad
\begin{pmatrix}
3552.565950 & 1.416720 &   0.000000 \\
   1.416720 & 1.404240 &   2.100770 \\
   0.000000 & 2.100770 & 123.137260
\end{pmatrix},

thereby also establishing non-convergence of the SPACE algorithm in the case when the weights are w_i = 1. Note that some of the elements of the two matrices above are vastly different. The sparsity patterns are also different, thereby yielding two different partial correlation graphs.
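The alternating behaviour above also suggests a simple convergence diagnostic for coordinatewise algorithms: compare consecutive iterates both entrywise and by sparsity pattern. The sketch below (illustrative Python/numpy with our own names, using the two matrices reported above) does exactly this.

import numpy as np

# The two matrices between which the SPACE iterates alternate
M1 = np.array([[ 1.432570,     1.416740,  -2.132500],
               [ 1.416740,  3552.598070,   0.000000],
               [-2.132500,     0.000000,  89.163310]])
M2 = np.array([[3552.565950,  1.416720,    0.000000],
               [   1.416720,  1.404240,    2.100770],
               [   0.000000,  2.100770,  123.137260]])

def iterate_gap(A, B, tol=1e-8):
    # Entrywise gap and sparsity-pattern agreement between consecutive iterates
    gap = np.max(np.abs(A - B))
    same_support = np.array_equal(np.abs(A) > tol, np.abs(B) > tol)
    return gap, same_support

gap, same_support = iterate_gap(M1, M2)
print(gap, same_support)  # huge gap (about 3551) and differing supports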


References

Albergaria, A., Paredes, J., Sousa, B., Milanezi, F., Carneiro, V., Bastos, J., Costa, S., Vieira,

D., Lopes, N., Lam, E. W., Lunet, N., and Schmitt, F. (2009). Expression of FOXA1 and

GATA-3 in breast cancer: the prognostic significance in hormone receptor-negative tumours.

Breast Cancer Research, 11(3):R40.

Bibby, R. A., Tang, C., Faisal, A., Drosopoulos, K., Lubbe, S., Houlston, R., Bayliss, R.,

and Linardopoulos, S. (2009). A cancer-associated aurora A mutant is mislocalized and

misregulated due to loss of interaction with TPX2. The Journal of Biological Chemistry,

284(48):33177–84.

Davidson, B., Stavnes, H. T., Holth, A., Chen, X., Yang, Y., Shih, I.-M., and Wang, T.-L.

(2011). Gene expression signatures differentiate ovarian/peritoneal serous carcinoma from

breast carcinoma in effusions. Journal of Cellular and Molecular Medicine, 15(3):535–44.

Du, J., Li, L., Ou, Z., Kong, C., Zhang, Y., Dong, Z., Zhu, S., Jiang, H., Shao, Z., Huang,

B., and Lu, J. (2012). FOXC1, a target of polycomb, inhibits metastasis of breast cancer

cells. Breast Cancer Research and Treatment, 131(1):65–73.

Eeckhoute, J., Keeton, E. K., Lupien, M., Krum, S. A., Carroll, J. S., and Brown, M. (2007).

Positive cross-regulatory loop ties GATA-3 to estrogen receptor alpha expression in breast

cancer. Cancer Research, 67(13):6477–83.

Eschenbrenner, J., Winsel, S., Hammer, S., Sommer, A., Mittelstaedt, K., Drosch, M., Klar,

U., Sachse, C., Hannus, M., Seidel, M., Weiss, B., Merz, C., Siemeister, G., and Hoffmann,

J. (2011). Evaluation of activity and combination strategies with the microtubule-targeting

drug sagopilone in breast cancer cell lines. Frontiers in Oncology, 1:44.

Glinsky, G. V., Berezovska, O., and Glinskii, A. B. (2005). Microarray analysis identifies

a death-from-cancer signature predicting therapy failure in patients with multiple types of

cancer. The Journal of Clinical Investigation, 115(6):1503–21.

Jiang, S., Katayama, H., Wang, J., Li, S. A., Hong, Y., Radvanyi, L., Li, J. J., and Sen,

S. (2010). Estrogen-induced aurora kinase-A (AURKA) gene expression is activated by

GATA-3 in estrogen receptor-positive breast cancer cells. Hormones & Cancer, 1(1):11–20.

Joosse, S. A., Hannemann, J., Spötter, J., Bauche, A., Andreas, A., Müller, V., and Pantel,
K. (2012). Changes in Keratin Expression during Metastatic Progression of Breast Cancer:
Impact on the Detection of Circulating Tumor Cells. Clinical Cancer Research, 18(4):993–1003.


Katoh, M. (2008). WNT signaling in stem cell biology and regenerative medicine. Current

Drug Targets, 9(7):565–70.

Khare, K. and Rajaratnam, B. (2014). Convergence of cyclic coordinate ℓ1 minimization.
Preprint, Department of Statistics, Stanford University (soon to be available on arXiv).

Koboldt, D. C. et al. (2012). Comprehensive molecular portraits of human breast

tumours. Nature, 490(7418):61–70.

Kraus, T. S., Cohen, C., and Siddiqui, M. T. (2010). Prostate-specific antigen and hormone

receptor expression in male and female breast carcinoma. Diagnostic Pathology, 5:63.

Lacroix, M. and Leclercq, G. (2004). About GATA3, HNF3A, and XBP1, three genes co-

expressed with the oestrogen receptor-alpha gene (ESR1) in breast cancer. Molecular and

Cellular Endocrinology, 219(1-2):1–7.

Lee, J. D. and Hastie, T. J. (2014). Learning the structure of mixed graphical models. To
appear in Journal of Computational and Graphical Statistics.

Licata, L. A., Hostetter, C. L., Crismale, J., Sheth, A., and Keen, J. C. (2010). The RNA-

binding protein HuR regulates GATA3 mRNA stability in human breast cancer cell lines.

Breast Cancer Research and Treatment, 122(1):55–63.

Maxwell, C. A., Benítez, J., Gómez-Baldó, L., Osorio, A., Bonifaci, N., Fernández-Ramires, R.,

Costes, S. V., Guinó, E., Chen, H., Evans, G. J. R., Mohan, P., Català, I., Petit, A., Aguilar,

H., Villanueva, A., Aytes, A., Serra-Musach, J., Rennert, G., Lejbkowicz, F., Peterlongo,

P., Manoukian, S., Peissel, B., Ripamonti, C. B., Bonanni, B., Viel, A., Allavena, A.,

Bernard, L., Radice, P., Friedman, E., Kaufman, B., Laitman, Y., Dubrovsky, M., Milgrom,

R., Jakubowska, A., Cybulski, C., Górski, B., Jaworska, K., Durda, K., Sukiennicki, G.,
Lubiński, J., Shugart, Y. Y., Domchek, S. M., Letrero, R., Weber, B. L., Hogervorst, F.

B. L., Rookus, M. A., Collee, J. M., Devilee, P., Ligtenberg, M. J., van der Luijt, R. B.,

Aalfs, C. M., Waisfisz, Q., Wijnen, J., van Roozendaal, C. E. P., Easton, D. F., Peock, S.,

Cook, M., Oliver, C., Frost, D., Harrington, P., Evans, D. G., Lalloo, F., Eeles, R., Izatt,

L., Chu, C., Eccles, D., Douglas, F., Brewer, C., Nevanlinna, H., Heikkinen, T., Couch,

F. J., Lindor, N. M., Wang, X., Godwin, A. K., Caligo, M. A., Lombardi, G., Loman, N.,

Karlsson, P., Ehrencrona, H., von Wachenfeldt, A., Bjork Barkardottir, R., Hamann, U.,

Rashid, M. U., Lasa, A., Caldés, T., Andrés, R., Schmitt, M., Assmann, V., Stevens, K.,
Offit, K., Curado, J., Tilgner, H., Guigó, R., Aiza, G., Brunet, J., Castellsagué, J., Martrat,

G., Urruticoechea, A., Blanco, I., Tihomirova, L., Goldgar, D. E., Buys, S., John, E. M.,


Miron, A., Southey, M., Daly, M. B., Schmutzler, R. K., Wappenschmidt, B., Meindl,

A., Arnold, N., Deissler, H., Varon-Mateeva, R., Sutter, C., Niederacher, D., Imyanitov,
E., Sinilnikova, O. M., Stoppa-Lyonnet, D., Mazoyer, S., Verny-Pierre, C., Castera, L.,

de Pauw, A., Bignon, Y.-J., Uhrhammer, N., Peyrat, J.-P., Vennin, P., Fert Ferrer, S.,

Collonge-Rame, M.-A., Mortemousque, I., Spurdle, A. B., Beesley, J., Chen, X., Healey,

S., Barcellos-Hoff, M. H., Vidal, M., Gruber, S. B., Lázaro, C., Capellá, G., McGuffog, L.,

Nathanson, K. L., Antoniou, A. C., Chenevix-Trench, G., Fleisch, M. C., Moreno, V.,

Pujana, M. A., HEBON, EMBRACE, SWE-BRCA, BCFR, GEMO Study Collaborators,

and kConFab (2011). Interplay between BRCA1 and RHAMM regulates epithelial apicobasal
polarization and may influence risk of breast cancer. PLoS Biology, 9(11):e1001199.

Mohajeri, A., Zarghami, N., Pourhasan Moghadam, M., Alani, B., Montazeri, V., Baiat, A.,

and Fekhrjou, A. (2011). Prostate-specific antigen gene expression and telomerase activity

in breast cancer patients: possible relationship to steroid hormone receptors. Oncology

Research, 19(8-9):375–80.

Pellegrino, M. B., Asch, B. B., Connolly, J. L., and Asch, H. L. (1988). Differential expression
of keratins 13 and 16 in normal epithelium, benign lesions, and ductal carcinomas

of the human breast determined by the monoclonal antibody Ks8.12. Cancer Research,

48(20):5831–6.

Ray, P. S., Bagaria, S. P., Wang, J., Shamonki, J. M., Ye, X., Sim, M.-S., Steen, S., Qu, Y.,

Cui, X., and Giuliano, A. E. (2011). Basal-like breast cancer defined by FOXC1 expression

offers superior prognostic value: a retrospective immunohistochemical study. Annals of

Surgical Oncology, 18(13):3839–47.

Rønneberg, J. A., Fleischer, T., Solvang, H. K., Nordgard, S. H., Edvardsen, H., Potapenko,
I., Nebdal, D., Daviaud, C., Gut, I., Bukholm, I., Naume, B., Børresen-Dale, A.-L.,

Tost, J., and Kristensen, V. (2011). Methylation profiling with a panel of cancer related

genes: association with estrogen receptor, TP53 mutation status and expression subtypes

in sporadic breast cancer. Molecular Oncology, 5(1):61–76.

Robinson, J. L. L., Macarthur, S., Ross-Innes, C. S., Tilley, W. D., Neal, D. E., Mills, I. G.,

and Carroll, J. S. (2011). Androgen receptor driven transcription in molecular apocrine

breast cancer is mediated by FoxA1. The EMBO Journal, 30(15):3019–27.

Sauter, E. R., Lininger, J., Magklara, A., Hewett, J. E., and Diamandis, E. P. (2004). Association
of kallikrein expression in nipple aspirate fluid with breast cancer risk. International

Journal of Cancer, 108(4):588–91.


Shimo, A., Nishidate, T., Ohta, T., Fukuda, M., Nakamura, Y., and Katagiri, T. (2007).

Elevated expression of protein regulator of cytokinesis 1, involved in the growth of breast

cancer cells. Cancer Science, 98(2):174–81.

Shimo, A., Tanikawa, C., Nishidate, T., Lin, M.-L., Matsuda, K., Park, J.-H., Ueki, T.,

Ohta, T., Hirata, K., Fukuda, M., Nakamura, Y., and Katagiri, T. (2008). Involvement of

kinesin family member 2C/mitotic centromere-associated kinesin overexpression in mammary
carcinogenesis. Cancer Science, 99(1):62–70.

Sizemore, S. T. and Keri, R. A. (2012). The Forkhead Box Transcription Factor FOXC1 Promotes
Breast Cancer Invasion by Inducing Matrix Metalloprotease 7 (MMP7) Expression.

The Journal of Biological Chemistry, 287(29):24631–40.

Tkocz, D., Crawford, N. T., Buckley, N. E., Berry, F. B., Kennedy, R. D., Gorski, J. J.,

Harkin, D. P., and Mullan, P. B. (2012). BRCA1 and GATA3 corepress FOXC1 to inhibit

the pathogenesis of basal-like breast cancers. Oncogene, 31(32):3667–3678.

Wang, J., Ray, P. S., Sim, M.-S., Zhou, X. Z., Lu, K. P., Lee, A. V., Lin, X., Bagaria, S. P.,

Giuliano, A. E., and Cui, X. (2012). FOXC1 regulates the functions of human basal-like

breast cancer cells by activating NF-κB signaling. Oncogene.

Yan, W., Cao, Q. J., Arenas, R. B., Bentley, B., and Shao, R. (2010). GATA3 inhibits breast

cancer metastasis through the reversal of epithelial-mesenchymal transition. The Journal

of Biological Chemistry, 285(18):14042–14051.

Yang, Q., Nakamura, M., Nakamura, Y., Yoshimura, G., Suzuma, T., Umemura, T., Tamaki,

T., Mori, I., Sakurai, T., and Kakudo, K. (2002). Correlation of prostate-specific antigen

promoter polymorphisms with clinicopathological characteristics in breast cancer. Anticancer
Research, 22(3):1825–8.

Zheng, Y., Huo, D., Zhang, J., Yoshimatsu, T. F., Niu, Q., and Olopade, O. I. (2012).

Microsatellites in the Estrogen Receptor (ESR1, ESR2) and Androgen Receptor (AR) Genes

and Breast Cancer Risk in African American and Nigerian Women. PLoS ONE, 7(7):e40494.
