proceedings.mlr.press/v28/ross13a.pdf

Nonparametric Mixture of Gaussian Processes with Constraints

James C. Ross1,2 [email protected]
Jennifer G. Dy2 [email protected]

1: Brigham and Women's Hospital, Boston, MA 02115 USA
2: Department of Electrical and Computer Engineering, Northeastern University, Boston, MA 02115 USA


Abstract

Motivated by the need to identify new and clinically relevant categories of lung disease, we propose a novel clustering with constraints method using a Dirichlet process mixture of Gaussian processes in a variational Bayesian nonparametric framework. We claim that individuals should be grouped according to biological and/or genetic similarity regardless of their level of disease severity; therefore, we introduce a new way of looking at subtyping/clustering by recasting it in terms of discovering associations of individuals to disease trajectories (i.e., grouping individuals based on their similarity in response to environmental and/or disease-causing variables). The nonparametric nature of our algorithm allows for learning the unknown number of meaningful trajectories. Additionally, we acknowledge the usefulness of expert guidance by providing for their input using must-link and cannot-link constraints. These constraints are encoded with Markov random fields. We also provide an efficient variational approach for performing inference on our model.

1. Introduction

Personalized medicine holds the promise of providing individuals tailored medical care optimally suited to their needs. In recent years, there has been an explosion of clinical, biological, and genetic data, the analysis of which will hopefully bring us closer to realizing this goal. Understanding distinct mechanisms of disease – unique biological pathways and their genetic determinants – is at the core of this endeavor and is

Proceedings of the 30th International Conference on Machine Learning, Atlanta, Georgia, USA, 2013. JMLR: W&CP volume 28. Copyright 2013 by the author(s).

often referred to as “disease sub-typing”.

In this paper, we are specifically motivated by the task of identifying novel and clinically relevant categories of Chronic Obstructive Pulmonary Disease (COPD), a smoking-related lung disease with a significant health burden worldwide. Alpha1-antitrypsin deficiency is one known form of genetic disorder leading to COPD (Silverman & Sandhaus, 2009); experts hypothesize that there are other distinct, as yet unknown, categories of this disease determined by genetic predisposition (Cho, 2010; 2012; Barker & Brightling, 2013). The challenge is to identify these subgroups given large amounts of data obtained from clinical studies. The key difficulty is grouping individuals with similar genetic make-up in spite of significantly different levels of disease severity. For example, a younger person with little exposure to smoke and relatively healthy lungs should be placed in the same category as an older, life-long smoker with advanced lung disease, provided they have the same genetic or biological predisposition.

The manner in which lung health changes as a function of age and smoke exposure can be used to identify meaningful subgroups. Some people are genetically resistant to the effects of smoke exposure and have preserved lung health even after years of smoking. On the other hand, others are highly sensitive to smoke and experience rapid health decline given similar levels of exposure. This leads to the notion of "disease trajectories", and indeed there is an analogy to the trajectories of projectiles moving through space. We seek meaningful disease trajectories with the hypothesis that those individuals associated with the same trajectory have similar genetic predispositions to lung health decline. The problem is that we do not know how many such trajectories (disease subgroups) exist, nor do we know the functional forms of those trajectories.

The traditional way to discover unknown subgroups given data is by clustering (Jain et al., 1999). Clustering algorithms group data based on some notion


of similarity. Standard clustering algorithms typically define similarity in the form of some metric or a probability model. Most standard methods do not take the structure of the problem into account and treat all the features/variables in the same way; however, in our COPD sub-typing problem, we have variables such as age and smoking that are causative agents of variables that indicate lung function and disease severity. The type of grouping we are interested in discovering relates to how different groups of individuals respond to exposure. This led us to the design of a mixture of Gaussian process (GP) regression model.

There are algorithms for clustering time series data (Li & Prakash, 2011). These methods assume that each sample has a time-sampled measurement. In our case, it is not always possible to work with longitudinal data (data in which a given individual is studied at multiple time points); many studies are cross-sectional. Our approach is flexible in the sense that input variables can represent any entity that directly affects measurable lung health or disease severity, including age and smoke exposure. Our model is also able to learn component "trajectory" functions even when we only have one sample per patient. This is possible because complete clinical datasets typically have multiple representatives of the same trajectory captured at different stages of the disease process.

We use a mixture of GPs rather than the standard mixture of regression models (Grun & Leisch, 2007) because we do not know what the regression model is. A Gaussian process (Rasmussen & Williams, 2006) provides a nonparametric distribution over functions. There has been work combining GPs with mixture models (Rasmussen & Ghahramani, 2002; Meeds & Osindero, 2006; Yuan & Neubauer, 2009). These works address modeling data where there are local discontinuities: in a local region of the input space, a gating function determines which GP component the data are generated from. Our work addresses GP components at a global scale.

Lazaro-Gredilla et al. (2012) introduced a mixture of Gaussian processes to address the data association problem, which arises in multi-target tracking scenarios. As alluded to in the earlier paragraphs, this scenario is similar to the one we are interested in, with one important difference: whereas they assumed the number of trajectories is known, we do not. To address this issue, we recast their formulation in a Bayesian nonparametric framework using the stick-breaking Dirichlet process model (Blei & Jordan, 2006).

The added flexibility provided by the nonparametric model makes finding local minima more likely. We steer inference towards meaningful solutions by incorporating must-link and cannot-link constraints (Wagstaff & Cardie, 2000; Zhu, 2008) between data instances. This is an important feature of our model, as it provides a mechanism to include expert input (doctors, biologists, geneticists, etc.). Basu et al. (2006) demonstrated the use of Hidden Markov Random Fields (HMRFs) to apply such constraints for semi-supervised clustering. Orbanz & Buhmann (2008) used MRFs to impose constraints in a nonparametric setting for spatial smoothing in image segmentation; they performed inference using Gibbs sampling. Inspired by these approaches, we also use MRFs to encode must-link and cannot-link constraints, and we further demonstrate a variational approach for performing approximate inference.

In this paper, we introduce a novel variational Dirichlet process mixture of Gaussian processes that can also learn from must-link and cannot-link constraints. The contributions of this work are: 1) our model is able to learn the number of clusters (trajectories) automatically for a mixture of GPs; 2) we provide a model allowing a mixture of GPs to learn from constraints; 3) we derive a variational inference approach to clustering with constraints encoded using MRFs; and 4) we present a transformative way of looking at sub-typing COPD: instead of applying traditional clustering algorithms, we utilize our domain knowledge regarding the disease mechanism and cast it as a problem of discovering multiple "disease trajectories".

The rest of the paper is organized as follows. In Section 2 we give a brief overview of the theory behind our model. In Section 3 we describe our probabilistic model; we define both the structure and the constituent probability distributions. The update equations used for variational inference are given in Section 4, where we also describe the conditions under which efficient computation is possible. We demonstrate algorithm performance on both synthetic and real-world datasets in Section 5, and we conclude in Section 6.

2. Background

In this section we briefly review the theory on which our model builds: Gaussian processes, Markov random fields, and Dirichlet process mixtures.

2.1. Gaussian Processes

Gaussian Processes (GPs) have been used extensively for Bayesian nonlinear regression. We cover the key concepts here as they pertain to our framework and refer the reader to Rasmussen & Williams (2006) for a more thorough treatment.



Gaussian Processes can be interpreted as a nonparametric prior over functions. They have the property that given a finite sampling of the domain, the corresponding vector of function values, f, is distributed according to a multivariate Gaussian with mean 0 (arbitrary but used in standard practice) and covariance matrix K:

f ∼ N(f | 0, K)   (1)

The elements of K are determined by the kernel function, k: [K]_{n,n'} = k(x_n, x_{n'}). The choice of kernel function and selection of its parameter values control the behavior of the GP. One popular kernel function (and the one used throughout our experiments) is the exponential of a quadratic form given by

k(x_n, x_{n'}) = θ0 exp( −(θ1/2) ‖x_n − x_{n'}‖² )   (2)


In order to perform GP regression, we assume an observed dataset of inputs and corresponding (noisy) targets, D ≡ {x_n, y_n}_{n=1}^N, where we model the targets as p(y | f) = N(y | f, σ²I_N). Here, σ² is the variance on the target variables. It can then be shown that the predicted mean and variance of target value y∗ at some new input x∗ are given by

μ∗ = k∗ᵀ (K + σ²I_N)⁻¹ y   (3)

σ²∗ = σ² + k∗∗ − k∗ᵀ (K + σ²I_N)⁻¹ k∗   (4)

where k∗∗ = k(x∗, x∗) and [k∗]_n = k(x_n, x∗).
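To make the predictive equations concrete, here is a minimal NumPy sketch of GP regression using the kernel of Equation 2 (our own illustration; function and variable names are not from the paper):

```python
import numpy as np

def sq_exp_kernel(Xa, Xb, theta0=1.0, theta1=1.0):
    # Eq. (2): k(x, x') = theta0 * exp(-(theta1 / 2) * ||x - x'||^2)
    d2 = ((Xa[:, None, :] - Xb[None, :, :]) ** 2).sum(axis=-1)
    return theta0 * np.exp(-0.5 * theta1 * d2)

def gp_predict(X, y, x_star, sigma2, theta0=1.0, theta1=1.0):
    # Predictive mean (Eq. 3) and variance (Eq. 4) at a new input x_star.
    K = sq_exp_kernel(X, X, theta0, theta1)
    k_star = sq_exp_kernel(X, x_star[None, :], theta0, theta1)[:, 0]
    k_ss = theta0  # k(x*, x*) for this kernel
    A = K + sigma2 * np.eye(len(X))
    mu = k_star @ np.linalg.solve(A, y)
    var = sigma2 + k_ss - k_star @ np.linalg.solve(A, k_star)
    return mu, var
```

Using `np.linalg.solve` rather than forming the inverse of (K + σ²I_N) explicitly is the standard numerically stable choice.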

2.2. Markov Random Fields

A Markov random field (MRF) is represented by an undirected graphical model in which the nodes represent variables or groups of variables and the edges indicate dependence relationships. An important property of MRFs is that a collection of variables is conditionally independent of all others in the field given the variables in its Markov blanket. The Hammersley-Clifford theorem states that the distribution, p(Z), over the variables in an MRF factorizes according to

p(Z) = (1/Z) ∏_{c∈C} exp( −H_c(z_c) )   (5)

where Z is a normalization constant called the partition function, C is the set of all cliques in the MRF, z_c are the variables in clique c, and H_c is the energy function over clique c (Geman & Geman, 1984; Besag, 1974). The energy function captures the desired configuration of local variables.








Figure 1. Probabilistic graphical model for constrained, nonparametric Gaussian process regression.

2.3. Dirichlet Process Mixtures

Ferguson (1973) first introduced the Dirichlet process (DP) as a measure on measures. It is parameterized by a base measure, G0, and a positive scaling parameter α:

G | {G0, α} ∼ DP(G0, α)   (6)

The notion of a Dirichlet process mixture (DPM) arises if we treat the kth draw from G as a parameter of the distribution over some observation (Antoniak, 1974). DPMs can be interpreted as mixture models with an infinite number of mixture components.

More recently, Blei & Jordan (2006) described a variational inference algorithm for DPMs using the stick-breaking construction introduced by Sethuraman (1991). The stick-breaking construction represents G as

π_k(v) = v_k ∏_{j=1}^{k−1} (1 − v_j)   (7)

G = ∑_{i=1}^∞ π_i(v) δ_{η∗_i}   (8)

where δ_{η∗_i} is the Kronecker delta, the v_i are distributed according to a beta distribution, v_i ∼ Beta(1, α), and η∗_i ∼ G0. The use of the stick-breaking construction in our formulation will be discussed in Section 3.
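The stick-breaking weights of Equation 7 are easy to sample; the following sketch (ours, truncated at level K as the variational approximation in Section 4 also does) draws one weight vector:

```python
import numpy as np

def stick_breaking_weights(alpha, K, rng):
    # Eq. (7): pi_k(v) = v_k * prod_{j<k} (1 - v_j), with v_k ~ Beta(1, alpha).
    v = rng.beta(1.0, alpha, size=K)
    v[-1] = 1.0  # truncation: the last break takes all of the remaining stick
    stick_left = np.concatenate(([1.0], np.cumprod(1.0 - v[:-1])))
    return v * stick_left
```

Smaller α concentrates mass on the first few components; larger α spreads it over more of them.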

3. Our Formulation

In this section we describe our formulation, including a definition of the variables in our model.

Let X = [x1 · · · xQ] be the N × Q matrix of observed inputs, where N is the number of instances and Q is the


dimension of the inputs. Let Y = [y1 · · · yD] be the N × D matrix of corresponding target values, where D represents the dimension of the target variables. We introduce the N × ∞ binary indicator matrix, Z, to represent the association between the data instances and the latent regression functions. Following the notation in Lazaro-Gredilla et al. (2012), we designate the set of latent functions as {f_d^(k)(x)} for d = 1, …, D and k = 1, …, ∞. We collect all latent functions of trajectory k in the matrix F^(k) = [f_1^(k) · · · f_D^(k)], and we designate the complete set of latent functions as {F^(k)}.
The probabilistic graphical model describing our formulation can be seen in Figure 1. The set C is a collection of data instance pairs representing given must-link and cannot-link constraints. With these quantities defined, we give the joint distribution of our model:

p(Y, Z, v, {F^(k)} | X, C, α) = p({F^(k)} | X) p(Y | {F^(k)}, Z) p(Z | v, C) p(v | α)   (9)

p({F^(k)} | X) = ∏_{k=1}^∞ ∏_{d=1}^D N(f_d^(k) | 0, K^(k))   (10)

p(Y | {F^(k)}, Z) = ∏_{n=1}^N ∏_{k=1}^∞ ∏_{d=1}^D N(Y_{n,d} | F_{n,d}^(k), σ²)^{Z_{n,k}}   (11)

p(Z | v, C) = (1/Z_C) ∏_{(i,j)∈C} exp( −H(z_i, z_j) ) ∏_{n=1}^N ∏_{k=1}^∞ ( v_k ∏_{j=1}^{k−1} (1 − v_j) )^{Z_{n,k}}   (12)

p(v | α) = ∏_{k=1}^∞ Beta(v_k | 1, α)   (13)

Equation 10 represents the prior distribution over the infinite collection of Gaussian processes. The likelihood in our model is given in Equation 11; note that this distribution factorizes over the target dimensions, but the same Gaussian process covariance matrix for a given regressor is used for all dimensions. We also assume that the variances for each target variable dimension, σ², are known and constant. This is a realistic assumption for our disease sub-typing use case: devices that measure disease severity can have their measurement variance characterized. For applications where σ² is not known, this and other hyperparameters can be automatically learned via empirical Bayes.

Equation 12 describes the distribution over Z and consists of two terms: the first is an MRF that captures the pairwise constraints, and the second is a multinomial distribution with parameters drawn from a Dirichlet process using the stick-breaking construction. Equation 13 expresses the distribution over the variable, v, used for the stick-breaking process; here α is the concentration parameter.

The energy function used in our experiments is given by

H(z_i, z_j) = −w_{i,j}  if ⟨z_i, z_j⟩ = 1 and (i, j) is ML,
              −w_{i,j}  if ⟨z_i, z_j⟩ = 0 and (i, j) is CL,
              0          otherwise,   (14)

where ⟨z_i, z_j⟩ represents the inner product between z_i and z_j, ML stands for must-link and CL for cannot-link, and w_{i,j} is in the interval [0, 1], with lower values expressing less confidence in the constraint and vice versa.
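A direct transcription of this energy function, as a small sketch of our own (the indicator vectors are one-hot rows of Z):

```python
def energy(z_i, z_j, w_ij, kind):
    # Eq. (14): low energy (-w) when a constraint is satisfied, 0 otherwise.
    # z_i, z_j are one-hot cluster indicator vectors; kind is 'ML' or 'CL'.
    same = sum(a * b for a, b in zip(z_i, z_j))  # inner product <z_i, z_j>
    if kind == 'ML' and same == 1:
        return -w_ij
    if kind == 'CL' and same == 0:
        return -w_ij
    return 0.0
```

Satisfied constraints receive negative energy, so exp(−H) in Equation 12 up-weights configurations that honor expert input.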

While our formulation has similarities to Lazaro-Gredilla et al. (2012), we emphasize that our algorithm is both nonparametric in the number of mixture components and semi-supervised, important features for our intended application. Additionally, while Orbanz & Buhmann (2008) showed that MRFs can be incorporated with DPMs, they performed inference using Gibbs sampling. In Section 4 we will show that variational inference can be applied provided certain conditions are satisfied by the constraints.

4. Inference

In this section we give the variational inference update equations used in our model. Variational inference is a method of approximate inference that makes assumptions (typically a factorization) about the distribution of interest, turning an inference problem into an optimization problem (Jordan et al., 1999; Jaakkola, 2001). Additionally, whereas approximate inference methods based on sampling (such as Markov chain Monte Carlo) can be slow to converge, variational inference typically enjoys a computational advantage in this regard.

For our application, we are interested in the distribution over the latent variables in our model given our


observations: p(Z, v, {F^(k)} | X, Y). The posterior probability is approximated by optimizing the variational lower bound. The standard variational inference approach is to assume a factorized approximation of this distribution, in our case p∗(Z) p∗({F^(k)}) p∗(v).

In order to derive the expression for one of these factors, the expectation with respect to the other factors is considered. Derivation of the variational distributions begins with the following expressions:

ln p∗(Z) = E_{{F^(k)}, v}{ ln p(Y, Z, v, {F^(k)}) } + const   (15)

ln p∗({F^(k)}) = E_{Z, v}{ ln p(Y, Z, v, {F^(k)}) } + const   (16)

ln p∗(v) = E_{Z, {F^(k)}}{ ln p(Y, Z, v, {F^(k)}) } + const   (17)

Given space limitations, we provide the expressions for each factor without derivation.

The variational distribution over {F^(k)} is given as

p∗({F^(k)}) = ∏_{k=1}^K ∏_{d=1}^D N(f_d^(k) | μ_d^(k), C^(k))   (18)

with

C^(k) = ( (K^(k))⁻¹ + R^(k) )⁻¹   (19)

μ_d^(k) = C^(k) R^(k) y_d   (20)

R^(k) = (1/σ²) diag( E_Z{Z}_{1,k}, …, E_Z{Z}_{N,k} )   (21)
Note that, as in Blei & Jordan (2006), our approximate distribution truncates the stick-breaking construction, so that k ranges from 1 to K (set to 20 in all our experiments).
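The updates in Equations 19, 20, and 21 can be sketched in a few lines (our own illustration; `EZ_col` holds the kth column of E_Z{Z}, and all D target dimensions are updated together since they share C^(k)):

```python
import numpy as np

def update_F(K_k, Y, EZ_col, sigma2):
    # Eqs. (19)-(21): C = (K^{-1} + R)^{-1}, mu_d = C R y_d,
    # with R = diag(E_Z{Z}_{1,k}, ..., E_Z{Z}_{N,k}) / sigma2.
    R = np.diag(EZ_col / sigma2)
    C = np.linalg.inv(np.linalg.inv(K_k) + R)
    Mu = C @ R @ Y  # stacks mu_d^{(k)} for all D target dimensions
    return Mu, C
```

Intuitively, instances with large responsibility E_Z{Z}_{n,k} pull component k's posterior mean toward their targets, while instances with responsibility near zero are effectively ignored.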

The expression for p∗(v) is given by

p∗(v) = ∏_{k=1}^K Beta( v_k | 1 + ∑_{n=1}^N E_Z{Z}_{n,k}, α + ∑_{n=1}^N ∑_{j=k+1}^K E_Z{Z}_{n,j} )   (22)

Finally, the distribution for p∗(Z) is given by

p∗(Z) = ∏_{V∈𝒱} (1/Z_V) [ ∏_{(i,j)∈C : i,j∈V} exp( −H(z_i, z_j) ) ] ∏_{n∈V} ∏_{k=1}^K r_{n,k}^{Z_{n,k}}   (23)

r_{n,k} = ρ_{n,k} / ∑_{k'=1}^K ρ_{n,k'}   (24)

ln ρ_{n,k} = ∑_{d=1}^D [ −(1/(2σ²)) ( Y_{n,d}² − 2 Y_{n,d} E{F_{n,d}^(k)} + E{(F_{n,d}^(k))²} ) ] + E_v{ln v_k} + ∑_{j=1}^{k−1} E_v{ln(1 − v_j)}   (25)

In Equation 23, 𝒱 represents a set of sets. Each element V of 𝒱 is a set of data indices belonging to a connected subgraph of the constraint MRF. Because the set of constraints is generally sparse, the MRF can be characterized by a collection of disconnected subgraphs. If the constraint set is dense, we can approximate the distribution by truncating the neighborhood to enforce low cardinality. It is important to note that the distribution factorizes over the resultant subgraphs. Given that each subgraph cardinality is small, it is feasible to compute the corresponding partition function, Z_V. This in turn enables efficient computation of E_Z{Z}.

As an example, consider the MRF shown in Fig. 2. Here, 𝒱 = {{1, 4}, {2}, {3, 5, 6, 8}, {7}, {9}}. Note that each subgraph cardinality is low (with a maximum of four in this example), so that their corresponding partition functions are easily computed.
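To make the per-subgraph computation concrete, the following sketch (our own illustration, not code from the paper) computes E_Z{Z} exactly for one small connected subgraph by enumerating all K^|V| joint labelings, weighting each by the per-instance responsibilities and by exp(−H) for each constraint it satisfies:

```python
import itertools
import numpy as np

def subgraph_EZ(r, constraints, K):
    # r: (|V|, K) responsibilities r_{n,k} for the instances in subgraph V.
    # constraints: list of (i, j, w_ij, 'ML' or 'CL') with local indices into V.
    # Enumerates all K^|V| labelings -- feasible because |V| is small.
    n = r.shape[0]
    EZ = np.zeros((n, K))
    Z_V = 0.0  # partition function for this subgraph
    for labels in itertools.product(range(K), repeat=n):
        p = np.prod([r[i, labels[i]] for i in range(n)])
        for i, j, w_ij, kind in constraints:
            satisfied = (labels[i] == labels[j]) == (kind == 'ML')
            if satisfied:
                p *= np.exp(w_ij)  # exp(-H), with H = -w for a satisfied constraint
        Z_V += p
        for i, k in enumerate(labels):
            EZ[i, k] += p
    return EZ / Z_V
```

For a must-linked pair where one instance has confident responsibilities, the expectation pulls the other instance toward the same component, which is exactly the coupling the MRF term is meant to introduce.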

Inference begins by randomly initializing the matrix Z such that each element is greater than or equal to zero and each row sums to one. We then iteratively update Equations 18, 22, and 23 until we observe no change in E_Z{Z} or until a pre-specified number of iterations is reached.

5. Experiments

In this section we demonstrate algorithm performance on both synthetic and real-world datasets. For all our experiments, the cardinality of the constraint subgraphs was kept below 5. No special attention was


z1 z2 z3

z4 z5 z6

z7 z8 z9

Figure 2. Example MRF illustrating disconnected subgraphs. Each graph edge represents either a must-link or cannot-link constraint.

given to the reported parameter settings for α, θ0, or θ1. Rather, a coarse parameter selection of reasonable values was used.

5.1. Experiments on Synthetic Data

We tested algorithm performance on two synthetic datasets. The first consists of noisy samples taken from two curves: a sinusoid and a sinusoid with a modest linear offset. The second dataset is made up of noisy samples taken from two interlaced helices in 3D. For both cases, the algorithm was run for 50 iterations. α was set to 1.0, and θ0 was set to 1.0 in both cases. For the sinusoids experiment, σ² = 0.02 and θ1 = 0.005. For the helices experiment, σ² = 0.1 and θ1 = 0.0005. Must-link constraints were generated by randomly choosing pairs of points from a given function, preferring pairs that are spaced farther apart. Cannot-link constraints were generated by randomly choosing pairs of points from different functions, preferring pairs in regions where the functions tend to be closer to one another.

We used normalized mutual information (NMI) (Strehl & Ghosh, 2003) to investigate algorithm performance for a number of different constraint counts. Letting A represent the cluster assignments determined by the algorithm and B represent the ground-truth cluster assignments, the NMI is given by

NMI = ( H(A) − H(A|B) ) / √( H(A) H(B) ),

where H(·) is the entropy. Higher NMI values mean that the clustering results are more similar to ground truth; the criterion reaches its maximum value of one when there is perfect agreement.
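This criterion is straightforward to compute from label arrays; a minimal sketch (ours) using empirical entropies:

```python
import numpy as np

def nmi(A, B):
    # NMI = (H(A) - H(A|B)) / sqrt(H(A) * H(B))
    A, B = np.asarray(A), np.asarray(B)
    def H(labels):
        _, counts = np.unique(labels, return_counts=True)
        p = counts / len(labels)
        return -np.sum(p * np.log(p))
    H_A_given_B = sum((B == b).mean() * H(A[B == b]) for b in np.unique(B))
    return (H(A) - H_A_given_B) / np.sqrt(H(A) * H(B))
```

Note that the score is invariant to a relabeling of the clusters, which is why it is suitable for comparing an unsupervised partition to ground truth.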

Figure 3 gives the results of the synthetic experiments. Each entry in the rightmost plot of this figure represents the average NMI score across fifty randomly initialized runs. There is a clear increase in performance

Figure 4. Illustration of an algorithm output for the unconstrained case. While the curves provide a reasonable explanation of the data, it may not be the solution of interest.

Figure 5. Left: example of learned regressors during training for the unconstrained case. Right: learned regressors using constraints. Plots are taken from different folds.

with added constraints in both cases. The center plots illustrate the regression curves found by the algorithm, and the data instances are color-coded according to their association to each curve.

Without constraints, the algorithm has a greater tendency to converge on solutions that may not be of interest. This is illustrated in Figure 4. While the solution shown does a reasonably good job of explaining the data, this particular solution might not be "optimal". By adding constraints, the optimization landscape is modified to one more favorable for finding interesting solutions.

5.2. Experiments on Real-World Data

As stated in the introduction, the motivation for our model stems from the need to identify clinically meaningful subtypes of lung disease. Here we show results on data from the Normative Aging Study (NAS) (Bell, 1972), a longitudinal study designed to investigate the role of aging in various health issues, including lung function. The complete dataset includes a large number of features; here we focus on the effects of age on a widely used measure of lung function, FEV1 (forced


Figure 3. Illustration of algorithm performance on two synthetic datasets. Left: original data. Middle: a correct result found by the algorithm, color-coded by association to regression curves. Right: average NMI score as a function of the total number of ML and CL constraints used.

expiratory volume in one second). We randomly chose a subset of forty subjects such that each subject was represented at a minimum of five time points and everyone had approximately the same height.

Since our goal is to identify regression curves that associate subjects according to genetics, longitudinal studies like NAS provide a good arena in which to test and implicitly provide constraints: all data instances belonging to a given subject are must-linked together (i.e., there are "built-in" constraints).

It is known that lung function decline (as measured by a decrease in FEV1) is a natural part of the aging process, even in healthy individuals. However, some individuals are thought to experience a more rapid decline while others a more modest decline. This effect is thought to be even more pronounced as a function of smoke exposure. For our initial analysis we focus on age as the input variable. (Accurately capturing exposure to smoke is nontrivial and will be the focus of our future work.) We investigate our algorithm's performance by performing a five-fold cross-validation. For each fold the test set consists of a randomly selected time point for each individual, and the training set consists of the remaining data. No data point is repeated as a test instance across the different folds. For each of the five training sessions, we learn regression curves both with and without constraints, identifying solutions with the lowest variational bound in each case.

We want to identify curves that are genetically/biologically meaningful despite various levels of measured lung function, so we desire solutions such that the data points for an individual are associated to the same curve during the training phase. We report the percentage of times this occurs for each of the five folds, both with and without constraints.

Figure 6. Detected individuals in a frame from the EU CAVIAR video sequence.

We are also interested in the predictive power of the learned regressors. For each instance in the test set we identify the curve most often associated with that individual in the training set and use that regressor to predict the FEV1 value associated with the test instance; we do this for both the constrained and unconstrained cases. Additionally, we compare our predictions to those made by the currently accepted prediction equation used in clinical practice (Hankinson, 1999), given by

FEV1 = 0.5536 − 0.01303 × age − 0.000172 × age² + 0.00011607 × height²   (26)
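Equation 26 is straightforward to evaluate directly; a sketch of our own follows (we assume age and height are expressed in the units used by Hankinson (1999), which the text above does not restate):

```python
def predicted_fev1(age, height):
    # Eq. (26): reference FEV1 prediction equation (Hankinson, 1999).
    return (0.5536 - 0.01303 * age - 0.000172 * age**2
            + 0.00011607 * height**2)
```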

We ran 50 iterations for all experiments and set σ² = 0.0225, α = 1.0, θ0 = 1.0, and θ1 = 0.002. The results are summarized in Table 1.

We also highlight examples of learned regressors for both the constrained and unconstrained cases in Figure 5. The constrained case depicts trends that agree well with clinical expectation, while the unconstrained case shows an unexpected increase in lung health for one of the sub-populations, clearly contrary to what is known about lung physiology and the aging process.

Although our algorithm was designed specifically for application to lung disease sub-typing, our last experiment shows that it is potentially useful for related tracking scenarios. We demonstrate this by considering a video sequence taken from the EU CAVIAR


Table 1. Five-fold cross-validation results on clinical data taken from the Normative Aging Study. The first three columns show the average mean squared error between actual and predicted FEV1 measures using the standard clinical prediction equation ("Clin. Pred.") and predictions using our algorithm with ("Alg. Pred. (Constr.)") and without ("Alg. Pred. (Unconstr.)") constraints. The last two columns show the matching percentages for the constrained and unconstrained cases; see text for details.

Fold | Clin. Pred. | Alg. Pred. (Constr.) | Alg. Pred. (Unconstr.) | Match Perc. (Constr.) | Match Perc. (Unconstr.)
  1  |    0.57     |        0.24          |         0.38           |         0.88          |          0.57
  2  |    0.36     |        0.06          |         0.18           |         0.84          |          0.63
  3  |    0.58     |        0.20          |         0.37           |         0.84          |          0.60
  4  |    0.46     |        0.28          |         0.40           |         0.84          |          0.56
  5  |    0.49     |        0.24          |         0.31           |         0.83          |          0.59

dataset1. This is a human-labeled benchmark sequence featuring four individuals walking through a scene. Each of the 1,164 data instances used here consists of the sequence's frame number, and the target values are the centroids of each detected bounding box. Each of the four individuals in the scene is assigned a unique ID in the available ground truth, and we use that information to impose ML and CL constraints on our algorithm. We again run for 50 iterations and set σ² = 2.0, α = 0.03, θ0 = 100.0, and θ1 = 0.0005. Results are shown in Figure 7.

Figure 7. Algorithm performance on the EU CAVIAR video sequence. Left: original data. Middle: a correct result found by the algorithm, color-coded by association to regression curves. Right: average NMI score as a function of the total number of ML and CL constraints used.

For all experiments described in this section, our algorithm was able to identify meaningful results both in terms of the number of regressors as well as their functional forms. As the number of constraints increases, the solutions converged upon are more likely to represent the solution of interest. The flexibility to automatically identify both the number of regressors and their forms while honoring valuable expert input is the key advantage of our approach.

1 http://homepages.inf.ed.ac.uk/rbf/CAVIAR/

6. Conclusion

We have introduced a nonparametric mixture of Gaussian process regression framework that uses must-link and cannot-link constraints to identify solutions of interest. Our motivation for building this model is to assist with lung disease sub-type identification; we have provided a new way of looking at this problem by recasting it in terms of discovering associations of individuals to disease trajectories, and we have demonstrated the efficacy of our approach on real-world clinical data. In the process of designing an appropriate learning model for solving this clinical problem, we have developed a novel Dirichlet process mixture of Gaussian processes with constraints. It is applicable to other applications requiring clustering/data association to trajectories or nonparametric functions. We have also successfully shown its effectiveness on synthetic and tracking data.


Acknowledgments

This work was supported by US NIH grants R01 HL089856 and R01 HL089897. We thank Michael H. Cho and Peter J. Castaldi for their guidance on clinical aspects of our model, and we thank Augusto Litonjua for supplying Normative Aging Study data.



References

Antoniak, Charles E. Mixtures of Dirichlet processes with applications to Bayesian nonparametric problems. The Annals of Statistics, pp. 1152–1174, 1974.

Barker, Bethan L and Brightling, Christopher E. Phenotyping the heterogeneity of chronic obstructive pulmonary disease. Clinical Science, 124(6):371–387, 2013.

Basu, S., Bilenko, M., Banerjee, A., and Mooney, R.J. Probabilistic semi-supervised clustering with constraints. Semi-Supervised Learning, pp. 71–98, 2006.

Bell, Benjamin et al. The Normative Aging Study: an interdisciplinary and longitudinal study of health and aging. The International Journal of Aging and Human Development, 3(1):5–17, 1972.

Besag, Julian. Spatial interaction and the statistical analysis of lattice systems. Journal of the Royal Statistical Society, Series B (Methodological), pp. 192–236, 1974.

Blei, David M and Jordan, Michael I. Variational inference for Dirichlet process mixtures. Bayesian Analysis, 1(1):121–143, 2006.

Cho, Michael H et al. Variants in FAM13A are associated with chronic obstructive pulmonary disease. Nature Genetics, 42(3):200–202, 2010.

Cho, Michael H et al. A genome-wide association study of COPD identifies a susceptibility locus on chromosome 19q13. Human Molecular Genetics, 21(4):947–957, 2012.

Ferguson, Thomas S. A Bayesian analysis of some nonparametric problems. The Annals of Statistics, pp. 209–230, 1973.

Geman, Stuart and Geman, Donald. Stochastic relaxation, Gibbs distributions, and the Bayesian restoration of images. IEEE Transactions on Pattern Analysis and Machine Intelligence, (6):721–741, 1984.

Grun, Bettina and Leisch, Friedrich. Fitting finite mixtures of generalized linear regressions in R. Computational Statistics & Data Analysis, 51(11):5247–5252, 2007.

Hankinson, John L et al. Spirometric reference values from a sample of the general US population. American Journal of Respiratory and Critical Care Medicine, 159(1):179–187, 1999.

Jaakkola, Tommi S. Tutorial on variational approximation methods. In Advanced Mean Field Methods: Theory and Practice, pp. 129, 2001.

Jain, Anil K, Murty, M Narasimha, and Flynn, Patrick J. Data clustering: a review. ACM Computing Surveys (CSUR), 31(3):264–323, 1999.

Jordan, M.I., Ghahramani, Z., Jaakkola, T.S., and Saul, L.K. An introduction to variational methods for graphical models. Machine Learning, 37(2):183–233, 1999.

Lazaro-Gredilla, Miguel, Vaerenbergh, Steven Van, and Lawrence, Neil D. Overlapping mixtures of Gaussian processes for the data association problem. Pattern Recognition, 45(4):1386–1395, 2012.

Li, Lei and Prakash, B Aditya. Time series clustering: Complex is simpler! In Proceedings of the 28th International Conference on Machine Learning (ICML-11), pp. 185–192, 2011.

Meeds, Edward and Osindero, Simon. An alternative infinite mixture of Gaussian process experts. Advances in Neural Information Processing Systems, 18:883, 2006.

Orbanz, P. and Buhmann, J.M. Nonparametric Bayesian image segmentation. International Journal of Computer Vision, 77(1):25–45, 2008.

Rasmussen, C.E. and Ghahramani, Z. Infinite mixtures of Gaussian process experts. Advances in Neural Information Processing Systems, 2:881–888, 2002.

Rasmussen, C.E. and Williams, C.K.I. Gaussian Processes for Machine Learning. MIT Press, Cambridge, MA, 2006.

Sethuraman, Jayaram. A constructive definition of Dirichlet priors. Technical report, DTIC Document, 1991.

Silverman, Edwin K and Sandhaus, Robert A. Alpha1-antitrypsin deficiency. New England Journal of Medicine, 360(26):2749–2757, 2009.

Strehl, Alexander and Ghosh, Joydeep. Cluster ensembles—a knowledge reuse framework for combining multiple partitions. The Journal of Machine Learning Research, 3:583–617, 2003.

Wagstaff, Kiri and Cardie, Claire. Clustering with instance-level constraints. In Proceedings of the Seventeenth International Conference on Machine Learning, pp. 1103–1110, 2000.

Yuan, Chao and Neubauer, Claus. Variational mixture of Gaussian process experts. Advances in Neural Information Processing Systems, 21:1897–1904, 2009.

Zhu, Xiaojin. Semi-supervised learning literature survey. 2008.
