Probability density imputation of missing data with Gaussian Mixture Models

Adrià Garriga Alonso

Hertford College

Supervised by

Prof. Mihaela van der Schaar

Co-supervised by

Prof. Edith Elkind

Trinity Term 2017

A dissertation submitted in partial fulfillment of the requirements for the degree of Master of Science in Computer Science


Acknowledgements

First of all, I thank my supervisor, Prof. Mihaela van der Schaar, for her invaluable

guidance and her keen eye for interesting problems. I also thank my supervising student,

Jinsung Yoon, for his many correct intuitions that saved me hours of exploration, and

good explanations so that I could learn.

I also thank my co-supervisor, Prof. Edith Elkind, for her excellent actionable advice, given

in the direst of moments. On the same note, I thank Sarah Retz-Jones, the coordinator of

the MSc in CS, for easing my biggest worries, and ensuring everything runs smoothly so

that I don’t have more to worry about on top.

I thank my co-students Alex, Eleonora, Michael and Qixuan, for being excellent people, and for their solidarity in our common struggle to do excellent work. I thank my friend Kholood, who is diligent like no other, and always had a solution to my writing troubles; and my

friend Paco, for his effective encouragement and motivation. I thank my friends from

Hertford College, Alex, Cecile, David, Devesh, Maalik and Rayner, for listening to my

anxious rants, and preventing a breakdown or two.

Finally, I thank my partner Tanja for always being there, and my parents Miquel and Isabel, and my sister Nuria, for their continuous support. Thank you all!


Abstract

Multiple imputation (Rubin, 2004) is a procedure for conducting unbiased data analysis

of an incomplete data set X, using complete-data analysis techniques. It consists in

generating m imputed data sets in which the missing values Xmis have been replaced by

plausible values, analysing the imputed data sets, and combining the results. For this approach to be sound, the plausible values are assumed to be drawn from the

posterior distribution of the missing values given the data, p(Xmis |X). Typically, m ≤ 10

samples are used.

In this thesis we go one step further, and set m→∞. That is, we attempt to estimate the

distribution p(Xmis |X) with a closed-form probability distribution, performing probability

density imputation. To this end, we provide the Partially Observed Infinite Gaussian

Mixture Model (POIGMM), an algorithm for (1) density estimation from incomplete data

sets, and (2) density imputation; based on the Infinite GMM by Blei and Jordan (2006).

A density-imputed data set can easily be reduced to a multiple-imputed data set, by

taking m samples from the estimated posterior distribution. Accordingly, we compare the

POIGMM with the state of the art in multiple imputation: MissForest (Stekhoven and

Buhlmann, 2011) and MICE (Van Buuren and Oudshoorn, 1999).


Contents

Acknowledgements
Abstract
Contents
Abbreviations

1 Introduction
1.1 Goals and contributions
1.2 Intuition behind the POIGMM for imputation
1.3 Document structure

2 Background
2.1 Preliminaries
2.1.1 Notational conventions
2.1.2 Basic probability theory
2.1.3 Supervised machine learning
2.2 Probabilistic Models
2.2.1 Maximum likelihood estimation
2.2.2 Bayesian inference
2.2.3 Bayesian networks
2.2.4 Mixture model
2.2.5 Variational inference
2.2.6 Exponential and conjugate families of distributions
2.3 Important probability distributions
2.3.1 Multivariate Gaussian
2.3.2 Wishart
2.4 Missing data
2.4.1 Types of missing data: MCAR, MAR, NMAR
2.4.2 Handling missing data

3 Partially Observed Infinite GMM
3.1 The Dirichlet Process Mixture Model
3.1.1 The Dirichlet Process Prior
3.1.2 Joint likelihood of a DP mixture
3.1.3 The variational distribution
3.2 Inference in the partially observed infinite GMM
3.2.1 Optimal cluster assignment (τ) update
3.2.2 Optimising the weight distribution parameters (γ)
3.2.3 Optimising the component parameters (m, β, W, ν)
3.3 Imputation
3.3.1 Semi-Bayesian imputation
3.4 Summary

4 Experiments
4.1 Data sets
4.2 Methodology
4.2.1 Creating missing data
4.2.2 Algorithms
4.2.3 Metrics
4.3 Results and discussion
4.3.1 How to read the graphs
4.3.2 Discussion

5 Related Work
5.1 Multiple Imputation by Chained Equations (MICE)
5.2 MissForest: Random Forests for imputation

6 Conclusions
6.1 Lessons Learned
6.2 Future Directions

A Experiments

B Mathematical definitions
B.1 Probability distributions
B.1.1 Gaussian-Wishart
B.1.2 The Beta distribution
B.1.3 Multinomial Distribution
B.2 Functions
B.3 Linear Algebra

C Extra derivations
C.1 Expectation of the logarithm of the normal

Bibliography


Abbreviations

CI Conditional Independence

DP Dirichlet Process

ELBO Evidence Lower Bound

GMM Gaussian Mixture Model

i.i.d. independently identically distributed

KL Kullback Leibler

MAP Maximum a posteriori

MAR Missing at random

MCAR Missing completely at random

MICE Multiple Imputation by Chained Equations

ML Machine Learning

MLE Maximum Likelihood Estimation

MSE Mean Squared Error

NMAR Not missing at random

POIGMM Partially Observed Infinite Gaussian Mixture Model

PSD positive semi-definite

RF Random Forest

RV Random Variable

SGD Stochastic Gradient Descent



Chapter 1

Introduction

Data collected from the real world is often messy. For example:

• A hospital has 10 possible tests that can be administered to patients. Each of them is

designed to identify one specific disease, and is given when the patient shows possible

symptoms of the disease. None of them, however, is perfectly accurate: each fails to

correctly diagnose patients as sick, or healthy, 5% of the time. The hospital hired

an analyst to improve the accuracy of the tests, by pooling the results of several of

them together, using the records of past patients. However, understandably, not all

the patients received all the tests.

• The UK Met Office has more than 200 weather measurement stations, and also accepts voluntary weather data submissions (Met Office Weather Stations).

Invariably, some of the sensors in the stations eventually fail until they are replaced.

Additionally, the specific types of measurements collected in private submissions

may be different for each volunteer, and for the same volunteer on each subsequent

submission.

• A municipal library is trying to understand who uses their facilities and how. They

conduct a very detailed survey asking questions about level of satisfaction, frequency

of facility usage, and demographics. Of the users that take the survey, many do not

finish the survey or skip questions.

Each of these cases can be seen as a set of multi-dimensional data points. In the hospital’s

case, each point represents a patient, each dimension containing information about one

test, or whether or not they had a certain disease. In the weather stations’ case, each

point is a time instant and contains temperature, humidity, pressure, et cetera. In the

library survey, each point is a respondent, the dimensions their answers.


These examples have another trait in common. They contain missing data: in some points,

the values of some of the dimensions are unknown.

This poses a problem to most machine learning methods and statistical tests, since they

require fully observed data. One option is to discard all points that have at least one

missing value. However, there is still much useful information in the observed values of

those points. How can we use this information?

The traditional answer lies in imputation (Little and Rubin, 2002). Imputation consists in

replacing each of the missing values in the data set by values that are concordant with the

other values of the point, and the rest of the data set. Then, the data set is analysed with

a standard algorithm that needs fully-observed data.

For example, consider a data set of pine trees, listing their ages and heights. If the height

of one pine tree is missing, a plausible replacement is the average height of pine trees that

are the same age.

However, most likely the imputed tree does not have exactly the mean of the other trees’

heights. If further analysis on the imputed data takes the pine’s height as exact, the

conclusions extracted from the data set will be biased and likely incorrect. A solution,

proposed by Rubin (2004), is multiple imputation. Multiple imputation consists in creating

m > 1 imputed data sets. For each data set, the missing values are replaced by a sample

from the probability distribution of that missing value, given the original data set. Larger

values of m imply less biased results. Commonly m ≤ 10 is used.

In this thesis we go one step further, and set m → ∞. That is: we aim to estimate

a closed-form probability distribution, that approximates as well as possible the true

distribution of the missing values given the data set. We call this kind of imputation

probability density imputation, or just density imputation. The closed-form family of

distributions we consider is that of mixtures of Gaussians.

Density imputation has several potential advantages:

• In some machine learning methods, it is possible to analytically include the estimated

closed-form distribution in computations that ordinarily use fully observed variables.

In essence these methods can learn directly from the m→∞ imputations.

– Linear models, in which the output is a linear transformation of the input (at

least for the popular MSE cost function, see section 2.1.3).

– Probabilistic models (section 2.2). There, the distribution of the missing data

can be seen as one or several latent (that is, unobserved) variables with a known

distribution.


– Kernel machine learning methods use a kernel function to determine the sim-

ilarity between two points. For partially observed points, we can compute

the expected value of the kernel instead (Nebot-Troyano and Belanche-Munoz,

2010).

• Many machine learning algorithms define a parametrised family of functions. Then,

they optimise those parameters to minimise the loss function (section 2.1.3). Some

of those (notably, nearly all modern neural networks) use stochastic gradient descent

(SGD) to perform optimisation. SGD is based on incrementally changing the values

of the parameters towards stochastic samples of the negative gradient of the loss.

Here, the estimated distribution can be used to sample new values for the missing

variables at each gradient descent step. The effective number of imputations in this

case would be finite, but growing linearly with the computation time spent in SGD.

Furthermore, neural networks are famous for needing a large amount of data to

be useful (otherwise they over-fit, section 2.1.3). Dropout, a popular technique

that alleviates the over-fitting problem, also consists in adding stochastic noise to

the network (Srivastava et al., 2014). Thus, density imputation may address the

over-fitting and the biased imputation problems simultaneously.

For situations that do not benefit from density imputation, we can recover the multiple

imputation case simply by drawing m samples from the estimated distribution for each

data point.

The idea of using a closed-form distribution family to approximate the true distribution of

missing values is not strictly new. The aforementioned Nebot-Troyano and Belanche-Munoz

(2010) use univariate Gaussians centred on each observed value, ignoring correlations be-

tween variables in the same point, to compute the expected kernel and perform Support

Vector Machine classification. Damianou, Titsias, and Lawrence (2016) estimate a uni-

variate Gaussian for each missing value instead, based on the observed values in the

same point. However, their method needs to be bootstrapped with a part of the data

set that has no missing values, thus becoming biased. Furthermore, they do not propose

propagating the closed-form distribution to further analysis.

From another angle, Gaussian Mixture Models have also been used for multiple imputation.

Zio, Guarnera, and Luzi (2007) provided a maximum likelihood estimation algorithm for

finite mixtures of Gaussians from an incomplete data set, and used it to perform multiple

imputation.


1.1 Goals and contributions

The goals of this project are:

• To create a density imputation algorithm that takes into account correlations between

variables in the same point. We restrict our attention to data sets where each data

point is assumed to be independently identically distributed (i.i.d.). This implies, for

example, no temporal correlation between the data points, so our method would not

be suited to analyse the Met Office’s data.

• To improve upon the state of the art of single and multiple imputation methods.

Our contributions are:

• A Bayesian inference algorithm for estimating the probability density in the presence

of missing values, that accounts for relationships between variables. This is the

Partially Observed Infinite Gaussian Mixture Model (POIGMM).

• An imputation algorithm based on expected conditional distributions of the

POIGMM.

1.2 Intuition behind the POIGMM for imputation

Suppose we have an i.i.d. dataset X, where each point xn belongs to R2. Suppose we know

the probability distribution that the points are drawn from, for example the one on the

left-hand-side of Figure 1.1. Then, we obtain a point x∗ with a missing value: in this case,

the y coordinate is unknown. The vertical red line represents the possible values of the

point. Then the imputation probability distribution, $p(\mathbf{x}_*^{\text{mis}} \mid \mathbf{x}_*^{\text{obs}}, \mathbf{X})$, is the "slice" of the

original probability distribution that falls on the line.

Thus, a way to obtain a density imputation algorithm is to:

1. Estimate the probability distribution of the data, a density model, from a data set

with missing values.

2. Compute conditional distributions on that model, given a partially observed data

point.

GMMs are widely held to be excellent for density estimation. They are also used for

clustering, which is finding separated subsets of the data set that have similar values.

[Figure 1.1: Left: the data density, and a point with missing y value in red. Right: the correct $p(\mathbf{x}_*^{\text{mis}} \mid \mathbf{x}_*^{\text{obs}}, \mathbf{X})$.]

In recent years, significantly better probabilistic clustering models have emerged (Iwata,

Duvenaud, and Ghahramani, 2012; Dilokthanakul et al., 2016; Johnson et al., 2016).

However, for none of these models is the conditional distribution of a partially observed point tractable to compute. Furthermore, their performance at density estimation is

not much better than that of GMMs.

1.3 Document structure

Chapter 2 explains concepts that the main contributions of the thesis rely on. Chapter

3 contains the derivation of the density estimation and imputation algorithms of the

POIGMM. Chapter 4 is an account of the performed experiments and their results. Chapter

5 reviews the related state of the art in imputation. Finally, Chapter 6 summarises the

contributions, explains things that could have been done better and shows directions for

future work.


Chapter 2

Background

This chapter contains concepts necessary for understanding, and following the derivation

of, the contributions of this thesis. Section 2.1 briefly reviews notational conventions,

basic probability theory, and supervised machine learning. Section 2.2 reviews general

probabilistic models, inference, and Bayesian networks. Section 2.3 contains some proba-

bility distributions and theorems that are required to develop the imputation algorithm in

Chapter 3. Finally, section 2.4 explains the mechanisms by which missing data appears,

the reason it is a problem, and several classes of strategies for handling it.

2.1 Preliminaries

This section gives informal definitions of basic probability concepts, to ensure the reader

uses the same terminology. The section also establishes notational conventions, and gives

a short account of supervised machine learning (ML). This work is not strictly about

supervised learning. However it uses terms from that area, and some of the related work

uses ML algorithms.

2.1.1 Notational conventions

Boldface lowercase (x, z) and uppercase (X,Z) letters denote vectors and matrices, respec-

tively. The ith row of X, as a column vector, is xi, and the jth element of xi is xi,j. The

jth column of X is X:,j. Indices may also be sets rather than scalars, for example if misn

is the set of indices of the missing dimensions of $\mathbf{x}_n$, then $\mathbf{x}_n^{\text{mis}_n}$, abbreviated as $\mathbf{x}_n^{\text{mis}}$, is the vector containing the missing dimensions of $\mathbf{x}_n$. When applying two subsets of indices to a matrix's dimensions, such as $\mathbf{M}_{\text{mis}_n,\text{obs}_n}$, the resulting matrix contains elements $m_{i,j}$ such

that i ∈ misn and j ∈ obsn. Figure 2.3 is a visualisation of this.

Some special functions, such as the Gamma function (Γ) or the Dirac delta (δ), are used

throughout the document. Their definitions are compiled in section B.2. They are also

named in the text whenever they appear in an expression.

2.1.2 Basic probability theory

Let x be a random variable (RV). We write x ∼ p(x | θ) or x ∼ p(θ) to denote that,

given parameters θ, x is distributed according to the probability (or density) function p(x | θ). Probability densities (for continuous RVs) and probabilities (for discrete RVs, or regions of continuous RVs) are not distinguished throughout this work; we write both as p(·) or q(·).

Let X and Y be sets, and x ∈ X and y ∈ Y be random variables. Then p(x, y) is said to

be a joint probability distribution if it is a probability distribution over X × Y .

We write p(x | y) to denote the conditional probability distribution, of x given y.

The fundamental rules of probability are (Bishop, 2006):

$$\text{Sum rule:} \quad p(x) = \sum_{y \in Y} p(x, y) \quad \text{or} \quad p(x) = \int_Y p(x, y)\,dy$$
$$\text{Product rule:} \quad p(x, y) = p(x \mid y)\,p(y) \tag{2.1}$$

The probability distribution $p(x) = \int_Y p(x, y)\,dy$ is the marginal distribution of $x$ with respect to $p(x, y)$.

From these rules, Bayes’ rule can be derived (several forms shown):

$$p(y \mid x) = \frac{p(x \mid y)\,p(y)}{p(x)} = \frac{p(x \mid y)\,p(y)}{\int_Y p(x, y)\,dy} = \frac{p(x, y)}{p(x)} \tag{2.2}$$

2.1.3 Supervised machine learning

Let X be a set of N points, x1,x2, . . . ,xN . Each xn belongs to the input space X , and

has an associated label yn ∈ Y .

Assume the data points X are drawn independently from a probability distribution D (i.i.d.

assumption). Assume there exists a mapping $f : \mathcal{X} \to \mathcal{Y}$ such that, for all $n = 1, \ldots, N$, $f(\mathbf{x}_n) = y_n$. Then, the supervised learning problem is to estimate a model. That is: from a


given family of functions G, specific to the algorithm, find g ∈ G such that g(x) is “close”

to f(x) under the distribution D.

Usually, X is the Cartesian product of several feature sets, F1, . . . ,FD. Each of the features

Fi and Y are often one of the following sets:

• The set of categories C = {0, 1, . . . ,M}, for some M . When Y = C, the supervised

learning problem is said to be multiclass classification or just classification. When

Fi = C, it is a categorical feature.

• The set of real numbers R. When Y = R, supervised learning is called regression.

When Fi = R, it is a numerical feature.

Alternative names for features are variables or dimensions; the latter being especially

common when talking about numerical features.

The notion of "closeness" is formalised with a loss or cost function $L : \mathcal{X}^N \times \mathcal{Y}^N \times G \to \mathbb{R}$.

Intuitively, the loss function takes a data set (points and their labels) and the model, and

outputs how well the model fits the data set. Popular loss functions are:

• MSE : suitable for regression. Given a data set of points x1, . . . , xN and labels

y1, . . . , yN ∈ R, it is defined:

$$L_{\text{MSE}}(\mathbf{X}, \mathbf{y}; g) = \frac{1}{N} \sum_{n=1}^{N} \left(y_n - g(\mathbf{x}_n)\right)^2 \tag{2.3}$$

• Cross-entropy : suitable for classification. When yn ∈ {1, . . . ,M}, for all m set:

yn,m = 1 if yn = m and 0 otherwise. Then:

$$L_H(\mathbf{X}, \mathbf{y}; g) = -\frac{1}{N} \sum_{n=1}^{N} \sum_{m=1}^{M} y_{n,m} \log \hat{y}_{n,m} \tag{2.4}$$

where $\hat{y}_{n,m}$ is the probability that the model $g$ assigns to class $m$ for point $\mathbf{x}_n$.
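To make the two losses concrete, here is a minimal NumPy sketch (not part of the original text; the array values and names are invented for illustration) that evaluates equations (2.3) and (2.4) on a toy batch.

import numpy as np

def mse_loss(y_true, y_pred):
    # Equation (2.3): mean squared error over N points.
    return np.mean((y_true - y_pred) ** 2)

def cross_entropy_loss(y_true_idx, class_probs):
    # Equation (2.4): y_true_idx[n] is the class index of point n,
    # class_probs[n, m] is the probability the model assigns to class m.
    n = len(y_true_idx)
    picked = class_probs[np.arange(n), y_true_idx]  # probability of the true class
    return -np.mean(np.log(picked))

y = np.array([1.0, 2.5, 0.3])
g_x = np.array([0.9, 2.4, 0.5])
print(mse_loss(y, g_x))                  # regression loss

labels = np.array([0, 2, 1])
probs = np.array([[0.7, 0.2, 0.1],
                  [0.1, 0.3, 0.6],
                  [0.2, 0.5, 0.3]])
print(cross_entropy_loss(labels, probs))  # classification loss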

Machine learning algorithms always restrict their search for a model g to a family of

functions. Most commonly this family is characterised by parameters θ, and the model

search problem reduces to a parameter search, or optimisation, problem.

Often, an algorithm requires the specification of hyperparameters λ. In this case, the

family of functions taken into account is characterised by both θ and λ. However, the

hyperparameters are not included in the “main” optimisation.

Most commonly, a data set is partitioned into three parts: the training set for optimising the

parameters, the validation set for performing hyperparameter search, and the test set to


measure the algorithm’s performance. The reason mutually exclusive sets are used for

these tasks is that some algorithms will over-fit : they will find a model g that minimises

the loss function very well in the training data points; so much so, that they assume

idiosyncratic patterns of the sample X hold for the entire distribution D.

Usually, the larger the family of possible models G is (that is, the larger the number of

parameters θ to optimise), the more likely the algorithm is to over-fit.

2.2 Probabilistic Models

Consider a data set X of N points. As in the supervised learning setting, we assume they

all come from the same distribution D, which is unknown (i.i.d. assumption). In this case,

however, we wish to estimate the distribution D.

One approach to this is to define a probabilistic model, that is, a parametrised family of

probability distributions. We denote our parameters as θ, and the family is defined by

P = {p(x | θ) : θ ∈ Θ}. We shall leave the functional form of p(x | θ) unspecified for now.

We now want to estimate the probability distribution from P that is closest to D, for some

metric of closeness. As in supervised learning (section 2.1.3), this reduces to optimising the

parameters θ.

2.2.1 Maximum likelihood estimation

Because of the i.i.d. assumption, the probability of the data set X being drawn from a distribution with parameters θ factorises over the points. This quantity is known as the likelihood:

$$p(\mathbf{X} \mid \boldsymbol\theta) = \prod_{i=1}^{N} p(\mathbf{x}_i \mid \boldsymbol\theta) \tag{2.5}$$

From here, the first inference method, known as Maximum Likelihood Estimation (MLE),

is straightforward. We wish to find the parameters θ under which the probability of

obtaining the data set X is the highest. This is the optimisation problem:

$$\boldsymbol\theta_{\text{MLE}} = \operatorname*{arg\,max}_{\boldsymbol\theta \in \Theta} \; p(\mathbf{X} \mid \boldsymbol\theta) \tag{2.6}$$

In actual computer hardware, if the data set is not small, the calculation of the above

expression is likely to underflow (Bishop, 2006). The minimum positive value that 64-bit

IEEE 754 floating point can represent is about $2.225 \cdot 10^{-308}$.¹ Thus: if the points have a

geometric mean likelihood of 0.1 (very likely, especially at the beginning of optimisation),

and there are more than 308 points (overwhelmingly likely), calculating the likelihood will

output 0.

Thus it is more convenient to maximise the logarithm of the likelihood, known as the

"log-likelihood". This greatly extends the range of representable values in the interval [0, 1]: the minimum representable likelihood becomes approximately $e^{-10^{308}}$:

$$\mathcal{L}(\boldsymbol\theta) = \log p(\mathbf{X} \mid \boldsymbol\theta) = \sum_{i=1}^{N} \log p(\mathbf{x}_i \mid \boldsymbol\theta) \tag{2.7}$$

Since the logarithm is a monotonically increasing function, maximising the log-likelihood is

equivalent to maximising the likelihood. The most popular losses (MSE, cross-entropy; see

section 2.1.3) used in supervised learning algorithms are (up to sign negation) log-likelihood

functions.

Once the parameters have been estimated, we can compute the probability of a test point

p(x∗) by evaluating p(x∗ | θ).
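As an aside, the underflow argument above is easy to reproduce numerically. The following sketch (illustrative only; the per-point likelihoods are made up) multiplies a few hundred likelihoods of around 0.1 and compares the result with the summed log-likelihood.

import numpy as np

rng = np.random.default_rng(0)
# Per-point likelihoods, each around 0.1, as in the text's example.
likelihoods = rng.uniform(0.05, 0.15, size=400)

naive_likelihood = np.prod(likelihoods)        # underflows to exactly 0.0
log_likelihood = np.sum(np.log(likelihoods))   # a perfectly ordinary float

print(naive_likelihood)   # 0.0
print(log_likelihood)     # roughly -930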

The MLE approach is simple, but it is unfortunately also prone to over-fitting. Suppose

we are performing inference for a model of throws of a six-sided die, to measure how biased

it is. The parameters are 6 real numbers, corresponding to the unnormalised probability

of numbers 1–6. The data set consists of 12 samples, of which none is the number 4. This

is not an unlikely occurrence: assuming a fair die, it happens $(5/6)^{12} \approx 0.11$ of the time.

However, a MLE inference algorithm would give probability 0 to the number 4 coming up

in the future. How can we prevent our algorithms from making such a rash judgement?

2.2.2 Bayesian inference

One answer lies in Bayesian inference. This class of algorithms expresses its knowledge

about the parameters as a probability distribution, rather than a single value. This can

prevent making unjustifiably drastic judgements from little data. Because the parameters

θ always remain a random variable rather than a value, these methods are often called

Bayesian nonparametrics.

Bayesian inference starts by giving a probability distribution to the parameters, p(θ),

known as the prior distribution or just prior. Then, it calculates the posterior distribution

¹ Value obtained by running the command: python -c "import sys; print(sys.float_info.min)".


of the parameters given the data, p(θ |X). Using Bayes’ rule (equation 2.2):

$$p(\boldsymbol\theta \mid \mathbf{X}) = \frac{p(\mathbf{X} \mid \boldsymbol\theta)\,p(\boldsymbol\theta)}{p(\mathbf{X})} = \frac{p(\mathbf{X} \mid \boldsymbol\theta)\,p(\boldsymbol\theta)}{\int_\Theta p(\mathbf{X} \mid \boldsymbol\theta)\,p(\boldsymbol\theta)\,d\boldsymbol\theta} \tag{2.8}$$

The denominator p(X) is independent of θ, and is often referred to as a normalisation

constant.

Once the posterior is calculated, the algorithm can answer queries about the probability

(or probability density) of a test point x∗, taking into account the uncertainty about the

parameters:

$$p(\mathbf{x}_* \mid \mathbf{X}) = \int_\Theta p(\mathbf{x}_* \mid \boldsymbol\theta)\,p(\boldsymbol\theta \mid \mathbf{X})\,d\boldsymbol\theta \tag{2.9}$$

These inference and test procedures are jointly known as fully Bayesian inference.

The prior distribution is a way for an expert analyst to incorporate their knowledge about

the application domain into the algorithm. This is sometimes criticised, because ideally

machine learning methods would learn everything from the data. The critique is addressed

somewhat by using noninformative priors, which encode few assumptions. In our dice

example, this would be for instance a uniform distribution over the 5-dimensional parameter

space. These help regularise the resulting model preventing overly rash judgements, and

make it possible to have uncertainty about the parameters, while minimising required user

intervention.
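To make the die example concrete, here is a small sketch of fully Bayesian inference with a conjugate prior. The symmetric Dirichlet prior and the specific counts are assumptions made for illustration, not something prescribed by the text; with a uniform Dirichlet(1, ..., 1) prior and 12 observed rolls containing no 4, the posterior predictive probability of rolling a 4 stays well above zero, unlike the MLE.

import numpy as np

# Counts of faces 1..6 over 12 rolls; face 4 was never observed.
counts = np.array([3, 2, 2, 0, 3, 2])

# Maximum likelihood estimate (equation 2.6): empirical frequencies.
theta_mle = counts / counts.sum()
print(theta_mle[3])          # 0.0 -- the rash judgement

# Bayesian inference with a uniform Dirichlet(1,...,1) prior.
# The posterior is Dirichlet(counts + 1), and the posterior predictive
# probability of each face (equation 2.9) is its posterior mean.
alpha_posterior = counts + 1.0
predictive = alpha_posterior / alpha_posterior.sum()
print(predictive[3])         # 1/18, about 0.056 -- still possible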

Another criticism of Bayesian inference is that the prior is often chosen not because it

matches the analyst’s beliefs, but rather because it is mathematically convenient (Bishop,

2006). See section 2.2.6 about conjugate families.

Maximum a posteriori inference

In some cases, it is intractable to calculate the integrals in Equations (2.8) or (2.9).

However, the regularisation or domain knowledge that a prior can provide is still desirable.

In that case, a Maximum a posteriori (MAP) estimate of the parameters can be performed:

$$\boldsymbol\theta_{\text{MAP}} = \operatorname*{arg\,max}_{\boldsymbol\theta \in \Theta} \frac{p(\mathbf{X} \mid \boldsymbol\theta)\,p(\boldsymbol\theta)}{p(\mathbf{X})} = \operatorname*{arg\,max}_{\boldsymbol\theta \in \Theta} \; p(\mathbf{X} \mid \boldsymbol\theta)\,p(\boldsymbol\theta)$$

since the denominator does not depend on θ.


2.2.3 Bayesian networks

Consider a joint distribution over variables and its decomposition into conditional proba-

bilities:

$$p(w, x, y, z) = p(w \mid x, y, z)\,p(x \mid y, z)\,p(y \mid z)\,p(z)$$

By the product rule, this holds for any joint distribution.

We say that a random variable $a$ is conditionally independent (CI) of the RVs $B = b_1, \ldots, b_N$ given the RVs $C = c_1, \ldots, c_N$, written $(a \perp B \mid C)$, if $p(a \mid B, C) = p(a \mid C)$.

If we make some conditional independence assumptions, we can simplify the above

distribution. For example, if we assume (w ⊥ x, y | z), it follows that p(w |x, y, z) = p(w | z)

and the distribution above can be written as:

$$p(w, x, y, z) = p(w \mid z)\,p(x \mid y, z)\,p(y \mid z)\,p(z)$$

A Bayesian Network (BN) is a joint distribution over several variables that imposes some

CI assumptions. The CI assumptions are characterized by a directed acyclic graph. Each

variable corresponds to a node, and need only be conditioned on its preceding nodes. For

example, the distribution above corresponds to the left-hand side on Figure 2.1.

We introduce three more concepts.

• Plate notation: the rectangles with quantities on the right-hand side of Figure 2.1 are

an example. They represent several repetitions of the same structure, conditionally

independent between them. Graph edges between elements of the same plate

represent a 1-to-1 relationship between each of them, whereas edges that cross the

plate boundary are many-to-1 or 1-to-many relationships.

• Latent and observed variables : variables that are grey represent observed variables,

that we know the value of. White variables are latent: we do not know their value,

only their probability distribution. Conditioning on the value of an observed variable

changes the distribution of latent variables in the graph.

• Parameters : quantities without a circle around them are parameters of the overall

joint distribution.

$$p(\mathbf{v}, \mathbf{w}, \mathbf{x}, \mathbf{y}, \mathbf{z} \mid \theta) = p(\mathbf{v}) \prod_{n=1}^{N} p(z_n)\,p(y_n \mid z_n, \theta) \prod_{m=1}^{M} p(w_m \mid \mathbf{z})\,p(x_m \mid \mathbf{z}, \mathbf{y}) \tag{2.10}$$

[Figure 2.1: Left: graphical representation of $p(w \mid z)\,p(x \mid y, z)\,p(y \mid z)\,p(z)$. Right: representation of equation (2.10).]

2.2.4 Mixture model

Let z be a discrete random variable, which is a K-dimensional one-hot vector. A one-hot

vector has value 1 in one dimension and 0 in all the others. The RV z follows a multinomial

distribution (equation B.7), with probabilities w = w1, . . . , wK .

Given K probability distributions p1(x | θ1), . . . , pK(x | θK), a mixture model is defined as:

$$p(\mathbf{x} \mid \boldsymbol\theta) = \sum_{k=1}^{K} p(z_k = 1)\,p_k(\mathbf{x} \mid \boldsymbol\theta_k) = \sum_{k=1}^{K} w_k\,p_k(\mathbf{x} \mid \boldsymbol\theta_k) \tag{2.11}$$

which corresponds to the graphical model in Figure 2.2.

Each individual pk is known as a component. The latent variable z controls which

component a sample x is drawn from.

[Figure 2.2: Graphical representation of a mixture model.]

A commonly used mixture model is the Gaussian mixture model (GMM). It consists in

having all the components pk be a Gaussian distribution. The probability density induced

by a GMM is also known as a Mixture of Gaussians (MoG).
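For concreteness, the following sketch (illustrative parameter values; SciPy is assumed to be available) evaluates the density of a two-component Mixture of Gaussians in two dimensions by applying equation (2.11) directly.

import numpy as np
from scipy.stats import multivariate_normal

# Mixture weights w_k and component parameters (mu_k, Sigma_k).
weights = np.array([0.3, 0.7])
means = [np.array([0.0, 0.0]), np.array([3.0, 1.0])]
covs = [np.eye(2), np.array([[1.0, 0.5], [0.5, 2.0]])]

def mog_density(x):
    # Equation (2.11): weighted sum of the component densities.
    return sum(w * multivariate_normal.pdf(x, mean=m, cov=c)
               for w, m, c in zip(weights, means, covs))

print(mog_density(np.array([1.0, 0.5])))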

2.2.5 Variational inference

Suppose we have a probabilistic model p, with latent variables Z = z1, z2, . . . , zM , and

observed variables X = x1, . . . ,xN . In many interesting cases, computing the exact


posterior p(Z |X) is intractable.

A solution to this is variational inference (VI). It consists in defining a family of distributions

qφ(Z), parametrised by φ. The family q is often called the variational distribution.

The Kullback Leibler (KL) divergence is a measure of the difference between two probability

distributions, q(y) and p(y). Its definition is:

$$D_{\mathrm{KL}}(q \,\|\, p) = \mathbb{E}_{y \sim q}\left[\log q(y) - \log p(y)\right] \tag{2.12}$$

In variational inference, we seek to minimise the KL divergence between q(Z) and p(Z |X).

Thus Bayesian inference has been reduced to an optimisation problem, which can be solved

with standard methods like block coordinate descent or gradient descent. Gradient descent

consists in computing the negative gradient of the objective and then adjusting the

parameters towards it. Block coordinate descent is explained in Algorithm 1, and is always

guaranteed to converge to a local minimum (Grippo and Sciandrone, 2000).

Algorithm 1 Block coordinate descent

Input: a function to minimise, f(φ), and a tolerance parameter ε.
Initialise φ. Partition φ into blocks φ1, φ2, . . . , φB.
Initialise the value of the last iteration, v ← ∞.
while v − f(φ) > ε, that is, while f(φ) has not yet converged to a minimum, do
    v ← f(φ)
    for each block i = 1, . . . , B do
        Analytically set φi ← arg min over φi of f(φ1, . . . , φi, . . . , φB).
    end for
end while
Output φ.
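As a minimal illustration of Algorithm 1 (the quadratic objective and the block split are invented for the example, not taken from the thesis), the sketch below alternates exact minimisation over two one-dimensional blocks of a two-dimensional quadratic.

import numpy as np

def f(phi):
    # A simple convex objective with coupled blocks: minimum at (1, 2).
    x, y = phi
    return (x - 1.0) ** 2 + (y - 2.0) ** 2 + 0.5 * (x - 1.0) * (y - 2.0)

def block_coordinate_descent(phi, eps=1e-10):
    v = np.inf
    while v - f(phi) > eps:           # stop when the decrease falls below eps
        v = f(phi)
        x, y = phi
        # Block 1: minimise over x with y fixed (set df/dx = 0 analytically).
        x = 1.0 - 0.25 * (y - 2.0)
        # Block 2: minimise over y with x fixed (set df/dy = 0 analytically).
        y = 2.0 - 0.25 * (x - 1.0)
        phi = np.array([x, y])
    return phi

print(block_coordinate_descent(np.array([5.0, -3.0])))   # close to [1, 2]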

The KL divergence between qφ and the true posterior is:

$$D_{\mathrm{KL}}(q \,\|\, p) = \mathbb{E}_{\mathbf{Z} \sim q_\phi}\!\left[\log q_\phi(\mathbf{Z}) - \log \frac{p(\mathbf{X}, \mathbf{Z})}{p(\mathbf{X})}\right] = \mathbb{E}_{\mathbf{Z} \sim q_\phi}\!\left[\log q_\phi(\mathbf{Z}) - \log p(\mathbf{X}, \mathbf{Z})\right] + \log p(\mathbf{X})$$

Since log p(X) does not depend on φ, we can ignore it during our optimisation. In fact,

since the KL divergence is never negative, we have:

$$\log p(\mathbf{X}) \geq \mathbb{E}_{q_\phi}\!\left[\log p(\mathbf{X}, \mathbf{Z})\right] - \mathbb{E}_{q_\phi}\!\left[\log q_\phi(\mathbf{Z})\right] \tag{2.13}$$


This is known as the evidence lower bound (ELBO), and maximising it is equivalent

to minimising the KL divergence. The gap between the ELBO and log p(X) is the KL

divergence between qφ and p.

Usually, q is chosen such that it factorises. That is, the estimated distribution of each

latent variable is independent of all the others:

$$q(\mathbf{Z}) = \prod_{i=1}^{M} q_{\phi_i}(\mathbf{z}_i)$$

This makes it possible to compute optimal updates for each φi analytically, and run block

coordinate descent. According to Bishop (2006), at each step, the optimal qφi is:

$$\log q^*_{\phi_i}(\mathbf{z}_i) = \mathbb{E}_{j \neq i}\!\left[\log p(\mathbf{X}, \mathbf{Z})\right] + C \tag{2.14}$$

where C is the normalising constant, which we can recover by inspection when deriving

updates. If the probability distribution of the right-hand side is not in the family of $q_{\phi_i}$, it is sufficient to minimise the KL divergence between $q_{\phi_i}$ and the distribution defined by $\mathbb{E}_{j \neq i}[\log p(\mathbf{X}, \mathbf{Z})]$.

2.2.6 Exponential and conjugate families of distributions

Let x be a random variable, and θ be some parameters. An exponential family is a family

of probability distributions over x that can be written as (Bishop, 2006):

$$p(\mathbf{x} \mid \boldsymbol\theta) = a(\mathbf{x}) \exp\!\left(f(\boldsymbol\theta)^\top g(\mathbf{x}) - h(\boldsymbol\theta)\right) \tag{2.15}$$

for some fixed vector-valued functions f and g, and scalar-valued functions a and h. The

function f must have the same number of outputs as θ.

Essentially, in an exponential family the parameters and the RV over which the density is

defined can only interact via the dot product $f(\boldsymbol\theta)^\top g(\mathbf{x})$. The transformed parameters $f(\boldsymbol\theta)$

are often called the natural parameters, and the transformed variables g(x) the sufficient

statistics.
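As a worked example (a standard identity, not taken from the original text), the univariate Gaussian with parameters θ = (µ, σ²) can be put in the form of equation (2.15):

\[
\mathcal{N}(x \mid \mu, \sigma^2)
  = \frac{1}{\sqrt{2\pi}} \exp\!\left(
      \begin{pmatrix} \mu/\sigma^2 \\ -1/(2\sigma^2) \end{pmatrix}^{\!\top}
      \begin{pmatrix} x \\ x^2 \end{pmatrix}
      - \left( \frac{\mu^2}{2\sigma^2} + \frac{1}{2}\log \sigma^2 \right)
  \right),
\]

so that $a(x) = 1/\sqrt{2\pi}$, $g(x) = (x,\, x^2)^\top$, $f(\mu, \sigma^2) = (\mu/\sigma^2,\, -1/(2\sigma^2))^\top$ and $h(\mu, \sigma^2) = \mu^2/(2\sigma^2) + \tfrac{1}{2}\log\sigma^2$.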

Exponential families have several interesting properties. An important one is that they

have a conjugate family. Let P and Q be parametrised families of distributions over

domain x. Then Q is conjugate to P if, for any p ∈ P and q ∈ Q, p(x)q(x) ∈ Q.

That is, if we multiply a distribution in P with one belonging to its conjugate family Q,

the result is also in the conjugate family. This simplifies Bayesian analysis: if the prior is


conjugate to the likelihood, then the posterior belongs to the same family as the prior.

2.3 Important probability distributions

This section contains the definitions and some less well-known observations about probabil-

ity distributions, which will be used in Chapter 3. Definitions of other used distributions

can be found in section B.1.

Each of the two distributions presented in this section is an exponential family.

2.3.1 Multivariate Gaussian

Let µ ∈ RD be a D-dimensional vector representing the mean, and Σ ∈ RD×D the

covariance matrix, which must be positive semi-definite (section B.3). Let x be a RV

belonging to RD. Then, x is Gaussian distributed if (Bishop, 2006):

$$\mathcal{N}(\mathbf{x} \mid \boldsymbol\mu, \boldsymbol\Sigma) = \frac{1}{\sqrt{(2\pi)^D |\boldsymbol\Sigma|}} \exp\!\left(-\frac{1}{2}(\mathbf{x} - \boldsymbol\mu)^\top \boldsymbol\Sigma^{-1} (\mathbf{x} - \boldsymbol\mu)\right) \tag{2.16}$$

The quantity $\Delta^2 = (\mathbf{x} - \boldsymbol\mu)^\top \boldsymbol\Sigma^{-1} (\mathbf{x} - \boldsymbol\mu)$ is known as the squared Mahalanobis distance. As an intuition, consider an uncountably infinite series of D-dimensional ellipsoids with the same shape and increasing "radius", all centered on the mean $\boldsymbol\mu$. The Mahalanobis distance of a point from the centre is the "number" of ellipsoids between them. The

Gaussian density decreases exponentially with the Mahalanobis distance.

An important property of the multivariate Gaussian is that any subset of its variables

is also Gaussian distributed, taking the appropriate elements of the covariance matrix.

Let obs ⊆ {1, . . . , D} denote a subset of the elements of x, a RV. If x ∼ N (µ,Σ), then

xobs ∼ N (µobs,Σobs,obs). (Bishop, 2006)

Sometimes it is easier to manipulate the precision matrix (Λ) of a Gaussian, Λ = Σ−1.

The marginal distribution property above can be restated:

$$\text{If } \mathbf{x} \sim \mathcal{N}(\boldsymbol\mu, \boldsymbol\Lambda^{-1}), \text{ then } \mathbf{x}^{\text{obs}} \sim \mathcal{N}\!\left(\boldsymbol\mu^{\text{obs}},\, \left(\left(\boldsymbol\Lambda^{-1}\right)_{\text{obs},\text{obs}}\right)^{-1}\right) \tag{2.17}$$

Finally, an intuitive explanation of the ·obs,obs style indexing in the covariance matrix, and

why it can be seen as a partition. Permuting the elements of a Gaussian RV yields a

Gaussian RV, where the same permutation has been applied to the elements of µ, and


to the rows and columns of Σ. Given a set of indices of observed variables (obs), we

can apply a permutation such that the observed variables are contiguous. The resulting

covariance matrix can then be partitioned into blocks, as in Figure 2.3.

The same indexing-as-partition view holds for the precision Λ, since it is also symmetric.

However, in that case the square matrices resulting from the partition do not describe the

marginal distributions of the original Gaussian.

[Figure 2.3: Partitioning a symmetric matrix into missing and observed variables. After permuting so that the observed dimensions come first, $\boldsymbol\Sigma$ splits into the blocks $\boldsymbol\Sigma_{\text{obs},\text{obs}}$, $\boldsymbol\Sigma_{\text{obs},\text{mis}}$, $\boldsymbol\Sigma_{\text{mis},\text{obs}} = (\boldsymbol\Sigma_{\text{obs},\text{mis}})^\top$ and $\boldsymbol\Sigma_{\text{mis},\text{mis}}$.]
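A small numerical sketch of the partition indexing and of equation (2.17) follows (the specific matrices are made up for illustration): the marginal over the observed dimensions can be read off either from the covariance blocks directly or from the precision matrix.

import numpy as np

# A 3-dimensional Gaussian; dimensions 0 and 2 are observed, 1 is missing.
mu = np.array([1.0, -2.0, 0.5])
Sigma = np.array([[2.0, 0.3, 0.5],
                  [0.3, 1.0, 0.2],
                  [0.5, 0.2, 1.5]])
obs = [0, 2]

# Marginal via the covariance blocks: x_obs ~ N(mu_obs, Sigma_{obs,obs}).
Sigma_obs = Sigma[np.ix_(obs, obs)]

# The same marginal via the precision matrix, as in equation (2.17):
# ((Lambda^{-1})_{obs,obs})^{-1} equals the inverse of Sigma_{obs,obs}.
Lambda = np.linalg.inv(Sigma)
precision_obs = np.linalg.inv(np.linalg.inv(Lambda)[np.ix_(obs, obs)])

print(np.allclose(np.linalg.inv(Sigma_obs), precision_obs))   # True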

2.3.2 Wishart

The Wishart distribution is a distribution over positive semi-definite matrices. In Bayesian

analysis, it is usually the distribution of the precision parameter of a Gaussian distribution.

Let $\boldsymbol\Lambda$ be a $D \times D$ random matrix, $\mathbf{W} \in \mathbb{R}^{D \times D}$ the covariance parameter and $\nu \in \mathbb{R}$ the degrees of freedom. Then $\boldsymbol\Lambda$ is Wishart distributed if (Bishop, 2006):

$$p(\boldsymbol\Lambda \mid \mathbf{W}, \nu) = K_{\mathbf{W},\nu}\, |\boldsymbol\Lambda|^{(\nu - D - 1)/2} \exp\!\left(-\frac{1}{2}\operatorname{tr}\!\left(\mathbf{W}^{-1}\boldsymbol\Lambda\right)\right) \tag{2.18}$$

With KW,ν the normalisation constant:

$$K_{\mathbf{W},\nu} = |\mathbf{W}|^{-\nu/2} \left( 2^{\nu D/2}\, \pi^{D(D-1)/4} \prod_{i=1}^{D} \Gamma\!\left(\frac{\nu + 1 - i}{2}\right) \right)^{-1} \tag{2.19}$$

The following expectations of the Wishart distribution are well-known and frequently used

(Bishop, 2006, Appendix B):

$$\mathbb{E}_{\mathcal{W}(\boldsymbol\Lambda \mid \mathbf{W}, \nu)}\!\left[\log |\boldsymbol\Lambda|\right] = \log |\mathbf{W}| + \sum_{i=1}^{D} \Psi\!\left(\frac{\nu + 1 - i}{2}\right) + D \log 2 \tag{2.20}$$
$$\mathbb{E}_{\mathcal{W}(\boldsymbol\Lambda \mid \mathbf{W}, \nu)}\!\left[\boldsymbol\Lambda\right] = \nu\,\mathbf{W} \tag{2.21}$$

where Ψ is the Digamma function (equation (B.9)).
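As a quick sanity check of equation (2.21), the empirical mean of many Wishart samples approaches νW. SciPy's wishart sampler is assumed here; it parametrises the distribution with df = ν and scale = W.

import numpy as np
from scipy.stats import wishart

nu = 7
W = np.array([[1.0, 0.4], [0.4, 2.0]])
samples = wishart.rvs(df=nu, scale=W, size=20000)  # shape (20000, 2, 2)

print(samples.mean(axis=0))   # approximately nu * W
print(nu * W)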


Suppose the precision of a Gaussian is Wishart-distributed. Then, what is the distribution

of the precision of a marginal distribution of the Gaussian? That is, Equation (2.17). The

following theorem will be useful in Chapter 3.

Theorem 2.3.1 (Precision of a marginal Gaussian is Wishart-distributed). Let $\mathbf{W}$ be a $D \times D$ positive semi-definite covariance matrix, and $\nu$ a real number, $\nu \geq D$. Let $\boldsymbol\Lambda \sim \mathcal{W}(\mathbf{W}, \nu)$ be a Wishart-distributed square matrix. Let $\text{obs} \subseteq \{1, \ldots, D\}$ be a set of indices of the columns/rows of $\boldsymbol\Lambda$, and let $\text{mis} = \{1, \ldots, D\} \setminus \text{obs}$ be its complement. Then:

$$\left(\left(\boldsymbol\Lambda^{-1}\right)_{\text{obs},\text{obs}}\right)^{-1} \sim \mathcal{W}\!\left(\mathbf{W}_{\text{obs},\text{obs}},\, \nu - D + |\text{obs}|\right).$$

Proof. Recall the view of double indexing as a partition (Figure 2.3). Using a well-known expression for the inverse of a partitioned matrix (Horn and Johnson, 2012, Section 0.7.1), we get:

$$\left(\boldsymbol\Lambda^{-1}\right)_{\text{obs},\text{obs}} = \left(\boldsymbol\Lambda_{\text{obs},\text{obs}} - \boldsymbol\Lambda_{\text{obs},\text{mis}} \left(\boldsymbol\Lambda_{\text{mis},\text{mis}}\right)^{-1} \boldsymbol\Lambda_{\text{mis},\text{obs}}\right)^{-1}$$

Inverting both sides, we obtain our expression of interest on the left:

$$\left(\left(\boldsymbol\Lambda^{-1}\right)_{\text{obs},\text{obs}}\right)^{-1} = \boldsymbol\Lambda_{\text{obs},\text{obs}} - \boldsymbol\Lambda_{\text{obs},\text{mis}} \left(\boldsymbol\Lambda_{\text{mis},\text{mis}}\right)^{-1} \boldsymbol\Lambda_{\text{mis},\text{obs}}$$

According to (Muirhead, 1982, Theorem 3.2.10), if $\boldsymbol\Lambda \sim \mathcal{W}(\mathbf{W}, \nu)$, then an expression of this form is distributed according to $\mathcal{W}(\mathbf{W}_{\text{obs},\text{obs}}, \nu - D + |\text{obs}|)$, which implies the theorem.

2.4 Missing data

Suppose a data analyst has collected a set of points X = x1,x2, . . . ,xN , of D features

each. The points may or may not have associated labels y = y1, . . . , yN . The analyst then

wishes to perform some task with the data, for example:

• Supervised learning (section 2.1.3).

• Clustering. Find a number K of clusters $c_1, \ldots, c_K$, $K \ll N$; and an assignment

of points to those clusters, such that the points assigned to the same cluster are

“similar” in some way. An example of a similarity measure is the Euclidean distance.

• Statistical tests. For example, a Student’s t-test to estimate whether the mean of the


variable xi = {xn,i : n = 1, . . . , N} is significantly different for points where yn = 1

than for points where yn = 0.

However, it could be that the analyst did not record a value for some points and variables

of the data set X: those values are missing. The input points xn now belong to the set

X ∪ {•}. Many analysis procedures require inputs and outputs in {0, . . . ,M} or R. What

can our analyst do?

2.4.1 Types of missing data: MCAR, MAR, NMAR

Little and Rubin (2002) distinguish three mechanisms that generate missing data. The

distinction is important because it dictates which procedures can be used to handle

the missing data: different methods are shown to work based on different mechanism

assumptions. Let mi,j be a random variable that is 1 if xi,j is missing and 0 if it is observed.

M is the matrix of all such variables, of the same size as X. We denote by Mobs the set of all entries of M that have value 0, and by Mmis the ones that have value 1. The same indices apply to X, giving Xobs and Xmis.

• Missing completely at random (MCAR). Whether a variable is missing or not

is independent of the value of any variable. That is, p(M |X) = p(M). This

assumption is almost always unrealistic; however, some methods that assume it (such

as MissForest; Stekhoven and Buhlmann, 2011) perform well in other settings in

practice.

• Missing at random (MAR). The probability distribution of missing values depends

on the observed values only. That is, p(M |X) = p(M |Xobs). This assumption is

often falsified as well, but it can be made closer to truth by measuring other variables

that are correlated with M.

• Not missing at random (NMAR). The missingness indicator M depends on the

values that are missing themselves. In theory, it is impossible to reconstruct data

that is NMAR from the data alone: for all an algorithm knows, the distribution of

missing values is completely different from the one of observed values.

Note that MCAR implies MAR, and MAR implies NMAR. This is because the conditioning

that each method allows to exist can be vacuous.
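To make the distinction concrete, the following sketch (an illustration with invented data, not the procedure used in the experiments of Chapter 4) generates an MCAR mask, where missingness ignores the data, and a MAR mask, where missingness depends only on an always-observed column.

import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 3))           # a fully observed data set

# MCAR: each entry of column 1 is missing with fixed probability 0.2,
# independently of X, so p(M | X) = p(M).
mcar_mask = rng.random(1000) < 0.2

# MAR: column 1 is more likely to be missing when the (always observed)
# column 0 is large, so p(M | X) = p(M | Xobs).
mar_mask = rng.random(1000) < 1 / (1 + np.exp(-2 * X[:, 0]))

X_mcar, X_mar = X.copy(), X.copy()
X_mcar[mcar_mask, 1] = np.nan
X_mar[mar_mask, 1] = np.nan
print(np.isnan(X_mcar[:, 1]).mean(), np.isnan(X_mar[:, 1]).mean())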


2.4.2 Handling missing data

What follows describes three overarching strategies for dealing with missing data (Little

and Rubin, 1989), and some variations of them.

Weighting, complete-case analysis

The first and simplest way to handle data sets with missing data is to ignore the rows

that have missing variables. This is known as complete-case analysis, and it easily yields a

complete data set that the analyst can use standard methods on. This is only appropriate

if the data is MCAR; otherwise, the newly created complete data set will be biased. A

more sophisticated version of this is weighting, which assigns numerical weights to the

complete rows that remain, to make the removal as unbiased as possible.

The strength of this approach is its simplicity, but it has a number of problems. First, it is

only possible to perform if there are some rows with complete data. Even when that is the

case, it will reduce the data set size considerably, discarding a lot of useful information,

thus reducing the significance of tests and the accuracy of supervised learning. Second,

calculating correct statistical tests while taking into account the weights is difficult.

Imputation: single, multiple and “density”

Another approach to analysis with missing data is imputation. Imputation consists in

filling the missing values in a data point, xmisn , with plausible values, given the observed

values of the data point xobsn . Once all the missing values have been filled, the analyst can

use standard methods.

A simple approach to imputation is to fill each variable’s missing values with the mean

(or mode, if it is categorical) of all observed values in the same variable. For a variable j,

we replace the values $\{x_{n,j} : j \in \text{mis}_n, \forall n\}$ with $\bar{x}_j = \operatorname{mean}(\{x_{n,j} : j \in \text{obs}_n, \forall n\})$. However, assuming that all missing $x_{n,j} = \bar{x}_j$ is very unrealistic. Most obviously, $\bar{x}_j$ is a constant and thus uncorrelated with the observed variables $\mathbf{x}_n^{\text{obs}}$, which will bias statistical correlation

tests downwards.
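A minimal sketch of mean imputation (illustrative only; encoding the missing entries as NaN is an assumption about the data representation):

import numpy as np

def mean_impute(X):
    # Replace each NaN in column j with the mean of the observed values of column j.
    X = X.copy()
    col_means = np.nanmean(X, axis=0)
    rows, cols = np.where(np.isnan(X))
    X[rows, cols] = col_means[cols]
    return X

X = np.array([[1.0, 2.0],
              [np.nan, 4.0],
              [3.0, np.nan]])
print(mean_impute(X))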

To impute correctly, a procedure needs to estimate the correlations between each of the

variables. One popular way to do this is regression imputation (Little and Rubin, 2002;

Raghunathan et al., 2001). It consists in estimating the missing values for a dimension

j using a supervised learning algorithm, with the observed values of X:,j as labels and

the values of $\mathbf{X}_{:,i}$, $i \neq j$, as inputs. The current state of the art in imputation algorithms


(Van Buuren and Oudshoorn, 1999; Stekhoven and Buhlmann, 2011) are all variations of

regression imputation that use off-the-shelf supervised learning algorithms. Algorithm 2

describes regression imputation in detail.

The most prominent drawback of imputation is that any analysis performed on imputed

data will be unduly overconfident. Imputation algorithms cannot in general recover the

exact value that would be observed; they will make stochastic errors, but the subsequent

analysis will be unaware of this and take the data as if it was real. Rubin (2004) proposed

Multiple Imputation as a solution to this problem. It consists in creating m > 1 imputed

data sets. In each data set, each missing value xn,i is replaced by a sample from the

estimated distribution p(xn,i |X). The data sets are then analysed using complete-data

techniques and the results combined.

In standard Multiple Imputation, the m samples that are used to impute $x_{n,i}$ are drawn from the conditional distribution $p(x_{n,i} \mid \mathbf{x}_n^{\text{obs}})$. This is the assumption Rubin (2004) uses

to show that the inferences from the multiple imputed data set are valid. Rubin also

argues that modest imputation sizes (m ∈ [2, 10]) are sufficient for analysing surveys.

This dissertation proposes probability density imputation, which attempts to estimate the

conditional distribution p(xmisn |X) in closed form, rather than just draw a few samples

from it. We argued in the introduction that some supervised learning methods can benefit

from that.

Algorithm 2 Regression imputation

Input: data set with missing data X of size N × D, and its missingness indicator M.
Optionally: impute X with a simple procedure, such as mean imputation.
for each j ∈ 1, . . . , D do
    mis:,j ← {i ∈ 1, . . . , N | mi,j = 1}    ▷ The row indices where X:,j is missing.
    obs:,j ← {1, . . . , N} \ mis:,j    ▷ The row indices where X:,j is observed.
    y′obs ← {xi,j | i ∈ obs:,j}    ▷ These labels are all observed.
    X′obs ← {xi,k | i ∈ obs:,j ∧ k ≠ j}    ▷ Some of the values here may be unobserved.
    "Regression" step: using a supervised learning algorithm, estimate the mapping g from X′obs to y′obs.
    X′mis ← {xi,k | i ∈ mis:,j ∧ k ≠ j}
    Estimate the values y′mis = g(X′mis).
    Update Xmis,j with the values of y′mis.
end for
Optionally: repeat the previous loop until X converges, or starts to diverge.
Output X.
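A compact sketch of Algorithm 2 using plain least-squares regression as the supervised learner (the learner choice, the NaN encoding of missing values, and running a single pass rather than iterating until convergence are all simplifying assumptions made for illustration):

import numpy as np

def regression_impute(X):
    X = X.copy()
    missing = np.isnan(X)
    # Start from mean imputation so that the regression inputs are complete.
    X[missing] = np.take(np.nanmean(X, axis=0), np.where(missing)[1])
    for j in range(X.shape[1]):
        mis_rows = np.where(missing[:, j])[0]
        obs_rows = np.where(~missing[:, j])[0]
        if len(mis_rows) == 0 or len(obs_rows) == 0:
            continue
        other = [k for k in range(X.shape[1]) if k != j]
        # "Regression" step: fit column j from the other columns by least squares.
        A = np.column_stack([X[obs_rows][:, other], np.ones(len(obs_rows))])
        w, *_ = np.linalg.lstsq(A, X[obs_rows, j], rcond=None)
        # Predict the missing entries of column j and write them back.
        A_mis = np.column_stack([X[mis_rows][:, other], np.ones(len(mis_rows))])
        X[mis_rows, j] = A_mis @ w
    return X

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
X[:, 2] += 0.8 * X[:, 0]
X[rng.random(200) < 0.3, 2] = np.nan       # make column 2 partially missing
print(regression_impute(X)[:3])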


Model estimation from incomplete data

It is also possible to design a model that can be inferred and used in the presence of

missing data. Arguably this is the most desirable approach, as it directly uses all the

available information for the task the analyst cares about, which is modelling or prediction.

However, in practice most algorithms that do this use an “internal” imputation step:

the aforementioned work by Damianou, Titsias, and Lawrence (2016) first estimates an uncorrelated Gaussian distribution for each missing value by using the fully observed part of the data set.

If a supervised learning model is robust enough to noise, it may be able to learn with the missing values replaced by a special value (for example the mean), especially if $\mathbf{M}$ is also

given as input to the method. This is the approach taken by Che et al. (2016) and Cinar

et al. (2017) in the context of predicting the label of a time series with missing data; and

by MissForest (Stekhoven and Buhlmann, 2011) and MICE (Van Buuren and Oudshoorn,

1999) when performing supervised learning for imputation (Chapter 5).


Chapter 3

Partially Observed Infinite GMM

This chapter presents an algorithm for fully Bayesian inference of an infinite Gaussian

Mixture Model (GMM) from data that includes missing values. The algorithm defines

a stochastic process prior on the number and density of clusters. Then, given data, it

approximates the posterior distribution using a finite variational distribution.

The inferred variational distribution is chosen such that a reasonable approximation to

the posterior distribution $p(\mathbf{x}^{\text{mis}} \mid \mathbf{x}^{\text{obs}}, \mathbf{X})$ is available analytically. Thus, this algorithm

can perform Bayesian imputation of a MAR data set.

The algorithm extends the work by Blei and Jordan (2006), who give a variational

inference algorithm for completely observed data, for an infinite mixture of exponential

family distributions. In fact, the prior and variational distributions of our algorithm are

the same as theirs, only the likelihood function is different. However, this makes one of

their coordinate descent updates analytically intractable. To sidestep this, we approximate

the marginal likelihood with the maximum likelihood of a partially observed point. This

is not the same as performing maximum likelihood estimation (Section 2.2.1).

This chapter is structured as follows. Section 3.1 is an exposition of the probabilistic model

by Blei and Jordan (2006), followed by some modifications made necessary by partially

observed points. Section 3.2 contains the derivation of our inference algorithm, which

heavily borrows from the finite Bayesian GMM derivation by Bishop (2006, Section 10.2).

However, necessarily, the steps that relate to partially observed data are original. Finally,

Section 3.3 describes the algorithm for imputation using an inferred variational distribution,

and Algorithm 3 summarises the POIGMM.


3.1 The Dirichlet Process Mixture Model

3.1.1 The Dirichlet Process Prior

Let $\alpha \in \mathbb{R}_{\geq 0}$ be a nonnegative scaling parameter. Let $G_0$ be a probability distribution over

continuous space Θ. Let v = v1, v2, . . . be an infinite collection of random variables drawn

independently from a Beta distribution, vk ∼ Beta(1, α). Similarly, let θ = θ1, θ2, . . . be

an infinite collection of random variables drawn from the distribution G0, θk ∼ G0.

Definition 3.1.1 (Dirichlet Process). A random variable $\theta^*$ is distributed according to a Dirichlet Process (DP) if:
\[
p(\theta^* \mid \boldsymbol{\theta}, \mathbf{v}) = \sum_{k=1}^{\infty} \pi_k(\mathbf{v})\, \delta(\theta^* - \theta_k) \tag{3.1}
\]
\[
\pi_k(\mathbf{v}) = v_k \prod_{j=1}^{k-1} (1 - v_j) \tag{3.2}
\]

where δ(·) is a Dirac delta (equation B.10). Note that πk is not a probability distribution,

only a function that outputs probabilities: it doesn’t output the probability of its argument.

From (3.1) it is clear that the DP is a discrete stochastic process over the continuous

space Θ, with countably infinite possible values. The possible values (θ) are drawn

independently from $G_0$. The probability of drawing each $\theta_k$ from the DP is determined by the stick-breaking process $\pi_k(\mathbf{v})$. Intuitively, the process starts with a stick of unit length. At each step it draws $v_k$ from a Beta distribution, and breaks off a $v_k$ fraction of what is left of the stick. The absolute length of the piece broken off at step $k$ is the probability of drawing

θk. This stick-breaking representation of the DP was originally formulated by Sethuraman

(1994).
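To make the stick-breaking construction concrete, the following is a small Python sketch that draws the first $K$ weights $\pi_k(\mathbf{v})$ of the prior; the truncation level $K$ and the function name are ours and not part of the model.

```python
import numpy as np

def stick_breaking_weights(alpha, K, rng=np.random.default_rng(0)):
    """Draw the first K mixture weights pi_k(v) of a DP via stick-breaking."""
    v = rng.beta(1.0, alpha, size=K)                      # v_k ~ Beta(1, alpha)
    remaining = np.concatenate([[1.0], np.cumprod(1.0 - v)[:-1]])
    return v * remaining                                  # pi_k = v_k * prod_{j<k} (1 - v_j)

pi = stick_breaking_weights(alpha=2.0, K=20)
print(pi.sum())   # < 1; the remaining mass belongs to the (infinitely many) later sticks
```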

Intuition behind the DP prior

The DP is ideally suited as a prior distribution for infinite mixture models. Let $p(\mathbf{x}_n \mid \theta^*)$ be a probability distribution over observations, parametrised by $\theta^*$. Combining it with the DP prior:
\[
p(\mathbf{x}_n \mid \boldsymbol{\theta}, \mathbf{v}) = \int_{\Theta} p(\mathbf{x}_n \mid \theta^*)\, p(\theta^* \mid \boldsymbol{\theta}, \mathbf{v})\, d\theta^* = \int_{\Theta} p(\mathbf{x}_n \mid \theta^*) \sum_{k=1}^{\infty} \pi_k(\mathbf{v})\, \delta(\theta^* - \theta_k)\, d\theta^*
\]


Since all the quantities involved are nonnegative, and $\pi_k(\mathbf{v})$ does not depend on $\theta^*$:
\[
p(\mathbf{x}_n \mid \boldsymbol{\theta}, \mathbf{v}) = \sum_{k=1}^{\infty} \pi_k(\mathbf{v}) \int_{\Theta} p(\mathbf{x}_n \mid \theta^*)\, \delta(\theta^* - \theta_k)\, d\theta^* = \sum_{k=1}^{\infty} \pi_k(\mathbf{v})\, p(\mathbf{x}_n \mid \theta_k) \tag{3.3}
\]
It is clear that equation (3.3) is analogous to the expression of a mixture model (equation 2.11), with an infinite number of components.

3.1.2 Joint likelihood of a DP mixture

Let $\mathbf{X}$ be an $N \times D$ data set of $N$ points. For each point, we introduce a latent variable $\mathbf{z}_n$ that represents a mixture component assignment. If point $\mathbf{x}_n$ was drawn from component $k$, which has parameters $\theta_k$, then $z_{n,k} = 1$ and $z_{n,j} = 0$ for $j \neq k$. The distribution of $\mathbf{z}_n$ is an (infinite) multinomial distribution (equation B.7) with parameters $\pi_k(\mathbf{v})$.

The joint likelihood over all observations, parameters and latent variables is illustrated on the left-hand side of Figure 3.1, and is:
\[
p(\mathbf{X}, \mathbf{Z}, \mathbf{v}, \boldsymbol{\theta} \mid \alpha, G_0) = \left[\prod_{n=1}^{N} p(\mathbf{x}_n \mid \mathbf{z}_n, \boldsymbol{\theta})\, p(\mathbf{z}_n \mid \mathbf{v})\right] \prod_{k=1}^{\infty} p(v_k \mid \alpha) \prod_{k=1}^{\infty} p(\theta_k) \tag{3.4}
\]

Which corresponds to the following process for generating the data X:

1. For each component k = 1, 2, . . . :

(a) Draw its stick-breaking fraction: vk |α ∼ Beta(1, α)

(b) Draw its parameters: θk |G0 ∼ G0

2. For each data point n = 1, 2, . . . , N :

(a) Draw its component assignment: zn |v ∼ Mult({πk(v) : k = 1, 2, . . . }).

(b) Draw the point from the component: xn | zn,θ ∼ p(xn | θzn).

The scaling parameter $\alpha$ and component parameter distribution $G_0$ are hyperparameters of the model. Note that the likelihood function $p(\mathbf{x}_n \mid \theta_{z_n})$ only depends on the $z_n$th cluster, so we can write:
\[
p(\mathbf{x}_n \mid \theta_{z_n}) = \prod_{k=1}^{\infty} p(\mathbf{x}_n \mid \theta_k)^{z_{n,k}} \tag{3.5}
\]

[Figure 3.1 — Left: the original DP mixture. Right: our DP mixture with marginal Gaussian likelihood.]

We have assumed the data points are fully observed, which is why they are shaded in the left side of Figure 3.1. However, this is not true: for some data points $\mathbf{x}_n$, some of the

values will not be observed. Accordingly, we decompose the joint likelihood of a point into

missing and observed values. By the product rule, we write:

\[
p(\mathbf{x}_n \mid \theta_k) = p(\mathbf{x}^{\text{mis}}_n, \mathbf{x}^{\text{obs}}_n \mid \theta_k) = p(\mathbf{x}^{\text{mis}}_n \mid \mathbf{x}^{\text{obs}}_n, \theta_k)\, p(\mathbf{x}^{\text{obs}}_n \mid \theta_k)
\]

and update the graphical model accordingly (Figure 3.1, right-hand-side).

The derivations above hold for any form of p. From now on, we fix the form of p. Recall

from section 2.3.1 that the marginal and conditional distributions of a Gaussian are easy

to compute. Because of that, we define the likelihood of $\mathbf{x}_n$ as a Gaussian. Then, for the observed values of the $k$th component:

\[
p(\mathbf{x}^{\text{obs}}_n \mid \theta_k) = \mathcal{N}\!\left(\mathbf{x}^{\text{obs}}_n \,\Big|\, \boldsymbol{\mu}^{\text{obs}_n}_k,\ \left(\left(\boldsymbol{\Lambda}_k^{-1}\right)^{\text{obs}_n,\text{obs}_n}\right)^{-1}\right) \tag{3.6}
\]

The missing values also follow a Gaussian. We write it explicitly in section 3.3.1.

We would like to compute the posterior distribution over our latent variables, $\mathbf{X}^{\text{mis}}$, $\mathbf{Z}$, $\mathbf{v}$ and $\boldsymbol{\theta}$, after seeing some data $\mathbf{X}^{\text{obs}}$. Following equation (2.2):
\[
p(\mathbf{X}^{\text{mis}}, \mathbf{Z}, \mathbf{v}, \boldsymbol{\theta} \mid \mathbf{X}^{\text{obs}}) = \frac{p(\mathbf{X}^{\text{mis}}, \mathbf{X}^{\text{obs}}, \mathbf{Z}, \mathbf{v}, \boldsymbol{\theta})}{p(\mathbf{X}^{\text{obs}})}
\]
We can separate the distribution over $\mathbf{X}^{\text{mis}}$ and its conditional dependencies, following the modified graphical model in Figure 3.1:
\[
p(\mathbf{X}^{\text{mis}}, \mathbf{Z}, \mathbf{v}, \boldsymbol{\theta} \mid \mathbf{X}^{\text{obs}}) = \frac{p(\mathbf{X}^{\text{mis}} \mid \mathbf{Z}, \boldsymbol{\theta}, \mathbf{X}^{\text{obs}})\, p(\mathbf{X}^{\text{obs}}, \mathbf{Z}, \mathbf{v}, \boldsymbol{\theta})}{p(\mathbf{X}^{\text{obs}})} \tag{3.7}
\]
Thus, to compute $p(\mathbf{X}^{\text{mis}} \mid \mathbf{X}^{\text{obs}})$, it suffices to compute the partial posterior $p(\mathbf{Z}, \mathbf{v}, \boldsymbol{\theta} \mid \mathbf{X}^{\text{obs}})$, then multiply it by $p(\mathbf{X}^{\text{mis}} \mid \mathbf{Z}, \boldsymbol{\theta}, \mathbf{X}^{\text{obs}})$ and integrate the latent variables out.


This partial true posterior is intractable to compute, since the joint distribution has infinitely many terms. Thus, we approximate it using a variational distribution, $q_{\boldsymbol{\phi}}$ (section 2.2.5).

3.1.3 The variational distribution

Blei and Jordan (2006) choose the following variational distribution family, for a Gaussian likelihood:
\[
q_{\boldsymbol{\phi}}(\mathbf{Z}, \mathbf{v}, \boldsymbol{\theta}) = \prod_{k=1}^{K-1} q_{\gamma_k}(v_k) \prod_{k=1}^{K} q_{\eta_k}(\boldsymbol{\mu}_k, \boldsymbol{\Lambda}_k) \prod_{n=1}^{N} q_{\tau_n}(\mathbf{z}_n) \tag{3.8}
\]
where:
\[
\begin{aligned}
q_{\gamma_k}(v_k) &= \operatorname{Beta}(v_k \mid \gamma_{k,1}, \gamma_{k,2}) \\
q_{\eta_k}(\boldsymbol{\mu}_k, \boldsymbol{\Lambda}_k) &= \mathcal{NW}(\boldsymbol{\mu}_k, \boldsymbol{\Lambda}_k \mid \mathbf{m}_k, \beta_k, \mathbf{W}_k, \nu_k) \\
q_{\tau_n}(\mathbf{z}_n) &= \operatorname{Multinomial}(\mathbf{z}_n \mid \boldsymbol{\tau}_n)
\end{aligned} \tag{3.9}
\]

We have set $\boldsymbol{\theta} = \{\boldsymbol{\mu}_k, \boldsymbol{\Lambda}_k : k \in 1, \dots, K\}$, and $\boldsymbol{\phi} = \{\gamma_k, \eta_k\}_k \cup \{\tau_n\}_n$ represents all the parameters that each of the distributions is conditioned on. $\mathcal{NW}$ is the Gaussian-Wishart

distribution, defined in equation (B.1).

Since we separated xmis from the rest of the model in equation (3.7), we can use this

variational distribution to approximate the posterior of the other latent variables.

The first two terms of equation (3.8) represent a truncated Dirichlet Process. In effect

we have set to 1 the probability of the Kth stick-breaking fraction being 1, that is,

q(vK = 1) = 1. This implies that, under the variational distribution, the probability πk′(v)

of drawing a cluster k′ > K is zero. It follows that the distributions for θk′ will never be

drawn from, so they can be removed from qφ. We emphasise that only the variational

distribution qφ is finite. The model’s prior and true posterior are still infinite.

We also set the component parameter prior $G_0$ to a Gaussian-Wishart:
\[
G_0(\boldsymbol{\mu}_k, \boldsymbol{\Lambda}_k) = \mathcal{NW}\!\left(\boldsymbol{\mu}_k, \boldsymbol{\Lambda}_k \mid \mathbf{m}_0, \beta_0, \mathbf{W}_0, \nu_0\right) \tag{3.10}
\]
The parameters of $G_0$ are also hyperparameters. Usually we set a noninformative prior, such as $\mathbf{m}_0 = \mathbf{0}$, $\beta_0 = 1$, $\mathbf{W}_0 = \mathbf{I}$, $\nu_0 = D$.

Recall that the Gaussian-Wishart distribution is the conjugate prior (section 2.2.6) of

the Gaussian distribution, which is our likelihood function (equation 3.6). In the fully

observed case, this makes the optimal partial coordinate-descent update also lie in the Gaussian-Wishart family. Thus, the KL-divergence of the update can be minimised in closed form. However, this is not true in the partially observed case, as we show in section 3.2.3.

3.2 Inference in the partially observed infinite GMM

To perform variational inference of the posterior, we split the parameters to be optimised into 3 blocks ($\gamma$, $\eta$ and $\tau$ in equation 3.8), and follow the block coordinate descent scheme exposed in Algorithm 1. We give closed-form optimal updates for $\tau$ and $\gamma$. We give a closed-form approximation to the optimum for $\eta$, which we argue eventually leads to a local optimum as well, if perhaps more slowly.

3.2.1 Optimal cluster assignment (τ ) update

First, we can optimise the factor relating to the latent cluster assignments Z. Using

equation (2.14):

\[
\begin{aligned}
\log q^*_{\tau}(\mathbf{Z}) &= \mathbb{E}_{\boldsymbol{\phi} \setminus \tau}\left[\log p(\mathbf{X}, \mathbf{Z}, \mathbf{v}, \boldsymbol{\theta})\right] \\
&= \sum_{n=1}^{N} \Big[ \mathbb{E}_{\eta}\left[\log p(\mathbf{x}_n \mid \mathbf{z}_n, \boldsymbol{\mu}_{z_n}, \boldsymbol{\Lambda}_{z_n})\right] + \mathbb{E}_{\gamma}\left[\log p(\mathbf{z}_n \mid \mathbf{v})\right] \Big] + C
\end{aligned} \tag{3.11}
\]

where we wrote the parameters η, γ, as a shorthand for qη, qγ , which are the distributions

the expectations are calculated over.

We also grouped all the terms of the joint distribution that do not depend on Z into the

catch-all constant C. Henceforth the value of C varies from expression to expression, and

includes quantities irrelevant to our purpose, which is usually minimisation.

Let us focus on the second term, $\mathbb{E}_{\gamma}[\log p(\mathbf{z}_n \mid \mathbf{v})]$. Inspecting the probability of the stick-breaking process (equation 3.2), we see that we can rewrite the multinomial distribution $p(\mathbf{z}_n \mid \mathbf{v})$ as follows:
\[
p(\mathbf{z}_n \mid \mathbf{v}) = \prod_{k=1}^{\infty} (1 - v_k)^{z_{n,>k}}\, v_k^{z_{n,k}} \tag{3.12}
\]

We slightly abused the notation by writing $z_{n,>k} = \sum_{j=k+1}^{\infty} z_{n,j}$, which is 1 if the one-hot vector $\mathbf{z}_n$ has its one in a position indexed by a number $> k$.

Thus the second term inside the summation of equation (3.11) is:
\[
\mathbb{E}_{\gamma}\left[\log p(\mathbf{z}_n \mid \mathbf{v})\right] = \sum_{k=1}^{\infty} \mathbb{E}_{\gamma_k}\left[z_{n,>k} \log(1 - v_k)\right] + \mathbb{E}_{\gamma_k}\left[z_{n,k} \log v_k\right] \tag{3.13}
\]


Recall from equation (3.8) that we have set $q(v_K = 1) = 1$. This does not depend on any parameters, so it is part of the expectation. In particular, this implies that, for all $j > K$, $z_{n,>j} = z_{n,j} = 0$ with probability 1. Thus they are independent of their $v_j$, and for all $j > K$:
\[
\mathbb{E}_{\gamma}\left[z_{n,j} \log v_j\right] = \mathbb{E}_{\gamma}\left[z_{n,j}\right] \mathbb{E}_{\gamma}\left[\log v_j\right] = 0 \cdot \mathbb{E}_{\gamma}\left[\log v_j\right] = 0 \tag{3.14}
\]

The same argument applies to the other term of the sum. This implies we can truncate the sum in equation (3.13) at $K$.¹ For the other terms of the sum, $z_{n,k}$ and $z_{n,>k}$ do not depend on $q_{\gamma}$, so they can be extracted from the expectations.

Thus we expand equation (3.11), using equation (3.6) and a truncated equation (3.13):
\[
\begin{aligned}
\log q^*_{\tau}(\mathbf{Z}) &= \sum_{n=1}^{N} \left( \sum_{k=1}^{K} \mathbb{E}_{\eta_k}\left[\log (\mathcal{N}_{n,k})^{z_{n,k}}\right] + \sum_{k=1}^{K} z_{n,>k}\, \mathbb{E}_{\gamma}\left[\log(1 - v_k)\right] + z_{n,k}\, \mathbb{E}_{\gamma}\left[\log v_k\right] \right) + C \\
\log q^*_{\tau}(\mathbf{Z}) &= \sum_{n=1}^{N} \sum_{k=1}^{K} z_{n,k} \log \rho_{n,k} + C
\end{aligned} \tag{3.15}
\]

where $\mathcal{N}_{n,k}$ is shorthand for $\mathcal{N}\!\left(\mathbf{x}^{\text{obs}}_n \,\Big|\, \boldsymbol{\mu}^{\text{obs}_n}_k,\ \left(\left(\boldsymbol{\Lambda}_k^{-1}\right)^{\text{obs}_n,\text{obs}_n}\right)^{-1}\right)$ and:
\[
\log \rho_{n,k} = \mathbb{E}_{\eta_k}\left[\log \mathcal{N}_{n,k}\right] + \mathbb{E}_{\gamma}\left[\log v_k\right] + \sum_{j=1}^{k-1} \mathbb{E}_{\gamma}\left[\log(1 - v_j)\right] \tag{3.16}
\]

Using Theorem 2.3.1, and the fact that $q_{\eta_k}$ is a Gaussian-Wishart distribution (equation 3.9), we know that the precision matrix of $\mathcal{N}_{n,k}$ follows a Wishart distribution. From there, we can compute $\mathbb{E}_{\eta_k}[\log \mathcal{N}_{n,k}]$ using Theorem 2.3.1 and some standard manipulations (section C.1), to give:
\[
\begin{aligned}
\mathbb{E}_{\eta_k}\left[\log \mathcal{N}_{n,k}\right] &= \tfrac{1}{2} \log\left|\mathbf{W}^{\text{obs}_n,\text{obs}_n}_k\right| + \tfrac{1}{2} \sum_{i=1}^{|\text{obs}_n|} \Psi\!\left(\frac{\nu_k - D + |\text{obs}_n| + 1 - i}{2}\right) \\
&\quad - \frac{\beta_k\, (\nu_k - D + |\text{obs}_n|)}{2} \left(\mathbf{x}^{\text{obs}}_n - \mathbf{m}^{\text{obs}_n}_k\right)^{\!\top} \mathbf{W}^{\text{obs}_n,\text{obs}_n}_k \left(\mathbf{x}^{\text{obs}}_n - \mathbf{m}^{\text{obs}_n}_k\right) + C
\end{aligned} \tag{3.17}
\]

¹ The original derivation by Blei and Jordan (2006) of the soundness of truncating the sum depends on $p(\mathbf{z}_n \mid \mathbf{X}, \mathbf{v}, \boldsymbol{\theta})$ being an exponential family. When the data set has missing data, this is not necessarily the case. We eventually prove it is, by deriving a closed-form update. However, to do that, we needed to prove the soundness of truncating the sum first; so we give an alternative proof.


The other expectations have value:
\[
\mathbb{E}_{\gamma}\left[\log v_k\right] = \Psi(\gamma_{k,1}) - \Psi(\gamma_{k,1} + \gamma_{k,2}) \qquad \mathbb{E}_{\gamma}\left[\log(1 - v_k)\right] = \Psi(\gamma_{k,2}) - \Psi(\gamma_{k,1} + \gamma_{k,2})
\]
where $\Psi$ is the Digamma function (equation B.9).

Finally we exponentiate equation (3.15):
\[
q^*_{\tau}(\mathbf{Z}) = \exp\left( \sum_{n=1}^{N} \sum_{k=1}^{K} z_{n,k} \log \rho_{n,k} + C \right) \propto \prod_{n=1}^{N} \prod_{k=1}^{K} \rho_{n,k}^{\,z_{n,k}} \tag{3.18}
\]
which is a multinomial distribution. Thus each $\tau_{n,k} \propto \rho_{n,k}$. They can be computed by using this formula, and then normalised such that $\sum_{k=1}^{K} \tau_{n,k} = 1$.
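This update can be written in a few lines of NumPy. The sketch below is an illustration only: it assumes the expected log-likelihoods $\mathbb{E}_{\eta_k}[\log \mathcal{N}_{n,k}]$ of equation (3.17) have already been computed into a hypothetical array `E_log_lik` of shape $(N, K)$, and the normalisation is done in log space for numerical stability. The function name is ours.

```python
import numpy as np
from scipy.special import digamma

def update_tau(E_log_lik, gamma1, gamma2):
    """Optimal q(Z) update. E_log_lik: (N, K) array of E_{eta_k}[log N_{n,k}];
    gamma1, gamma2: (K-1,) Beta parameters of q(v_k). Returns tau of shape (N, K)."""
    K = E_log_lik.shape[1]
    # E[log v_k] and E[log(1 - v_k)]; v_K is fixed to 1, so E[log v_K] = 0.
    E_log_v = np.zeros(K)
    E_log_1mv = np.zeros(K)
    E_log_v[:K - 1] = digamma(gamma1) - digamma(gamma1 + gamma2)
    E_log_1mv[:K - 1] = digamma(gamma2) - digamma(gamma1 + gamma2)
    # log rho_{n,k} = E[log N_{n,k}] + E[log v_k] + sum_{j<k} E[log(1 - v_j)]
    cum_1mv = np.concatenate([[0.0], np.cumsum(E_log_1mv)[:-1]])
    log_rho = E_log_lik + E_log_v + cum_1mv
    # Normalise each row in log space (log-sum-exp trick).
    log_rho -= log_rho.max(axis=1, keepdims=True)
    tau = np.exp(log_rho)
    return tau / tau.sum(axis=1, keepdims=True)
```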

3.2.2 Optimising the weight distribution parameters (γ)

We proceed in the same way as for the cluster assignments, computing the expectation of the prior over all variational factors except the ones involving $\gamma$.
\[
\log q^*_{\gamma}(\mathbf{v}) = \left[\sum_{n=1}^{N} \mathbb{E}_{\tau_n}\left[\log p(\mathbf{z}_n \mid \mathbf{v})\right]\right] + \mathbb{E}_{\tau_n}\left[\sum_{k=1}^{\infty} \log p(v_k \mid \alpha)\right] + C
\]

The data set X is not involved in this calculation. Thus, no modifications are made

necessary because of partial observability. After some manipulations, we arrive at the

updates given by Blei and Jordan (2006):

\[
\gamma_{k,1} = 1 + \sum_{n=1}^{N} \tau_{n,k} \qquad \gamma_{k,2} = \alpha + \sum_{n=1}^{N} \sum_{j=k+1}^{K} \tau_{n,j} \tag{3.19}
\]
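For illustration, the corresponding NumPy sketch of equation (3.19) is given below; the function name is ours.

```python
import numpy as np

def update_gamma(tau, alpha):
    """Optimal q(v) update (equation 3.19). tau: (N, K) responsibilities.
    Returns gamma1, gamma2 of shape (K-1,), one pair per free stick-break."""
    Nk = tau.sum(axis=0)                                   # soft counts per component
    K = Nk.shape[0]
    gamma1 = 1.0 + Nk[:K - 1]
    gamma2 = alpha + np.array([Nk[k + 1:].sum() for k in range(K - 1)])
    return gamma1, gamma2
```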

3.2.3 Optimising the component parameters (m, β,W, ν)

Again we use equation (2.14) to compute the optimal partial update:

\[
\begin{aligned}
\log q^*_{\eta}(\boldsymbol{\mu}, \boldsymbol{\Lambda}) &= \mathbb{E}_{\gamma,\tau}\left[\log p(\boldsymbol{\theta}) + \log p(\mathbf{x}^{\text{obs}}_n \mid \mathbf{z}_n, \boldsymbol{\theta})\right] + C \\
&= \sum_{k=1}^{K} \log p(\theta_k) + \sum_{n=1}^{N} \mathbb{E}_{\tau_n}\left[\log p(\mathbf{x}_n \mid \mathbf{z}_n, \boldsymbol{\theta})\right] + C
\end{aligned} \tag{3.20}
\]

Recall that the probability distribution over zn, parametrised by τ , is multinomial.


Additionally substituting the component prior by equation (3.10), we obtain:
\[
\log q^*_{\eta}(\boldsymbol{\mu}, \boldsymbol{\Lambda}) = \sum_{k=1}^{K} \log \mathcal{NW}\!\left(\boldsymbol{\mu}_k, \boldsymbol{\Lambda}_k \mid \mathbf{m}_0, \beta_0, \mathbf{W}_0, \nu_0\right) + \sum_{n=1}^{N} \sum_{k=1}^{K} \tau_{n,k} \log p(\mathbf{x}^{\text{obs}}_n \mid \theta_k) + C \tag{3.21}
\]

This expression factors across components $k$. Henceforth we write the expression for one component only. Expanding the Gaussian-Wishart and $p(\mathbf{x}^{\text{obs}} \mid \theta_k)$:
\[
\begin{aligned}
\log q^*_{\eta_k}(\boldsymbol{\mu}_k, \boldsymbol{\Lambda}_k) &= \log \mathcal{N}\!\left(\boldsymbol{\mu}_k \mid \mathbf{m}_0, (\beta_0 \boldsymbol{\Lambda}_k)^{-1}\right) + \log \mathcal{W}\!\left(\boldsymbol{\Lambda}_k \mid \mathbf{W}_0, \nu_0\right) \\
&\quad + \sum_{n=1}^{N} \tau_{n,k} \log \mathcal{N}\!\left(\mathbf{x}^{\text{obs}}_n \,\Big|\, \boldsymbol{\mu}^{\text{obs}}_k,\ \left(\left(\boldsymbol{\Lambda}_k^{-1}\right)^{\text{obs},\text{obs}}\right)^{-1}\right) + C
\end{aligned} \tag{3.22}
\]

Let us check if this is an exponential family, by looking at the sufficient statistics. The sum of each logarithm of the Gaussian distributions over the observed points is:
\[
\sum_{n=1}^{N} \tau_{n,k} \log \mathcal{N}_{n,k}(\cdot) = \sum_{n=1}^{N} -\tfrac{1}{2}\, \tau_{n,k} \log\left|\left(\boldsymbol{\Lambda}_k^{-1}\right)^{\text{obs}_n,\text{obs}_n}\right| + C
\]
(note: $C$ contains some terms with $\boldsymbol{\mu}$ and $\boldsymbol{\Lambda}$ in this expression)

For each point $\mathbf{x}_n$, there is the log-determinant of a different sub-matrix of $\boldsymbol{\Lambda}_k^{-1}$, that is interacting with parameter $\tau_{n,k}$. The quantities $\tau_{n,k}$ are not parameters of the Gaussian. However, if they are not included as parameters, then we will obtain a different exponential family at each step of the coordinate descent, which is also infeasible. Therefore, this cannot be an exponential family, unless we allow the number of parameters to be as large as the data.² Instead, we assume that all the log-determinants are $\tfrac{1}{2} \log |\boldsymbol{\Lambda}_k|$.

The next obstacle to viewing equation (3.22) as an exponential family is the fact that each

Mahalanobis distance is over a space with a different number of dimensions. To overcome

that, the following theorem will be useful.

Theorem 3.2.1. Let $\mathbf{x}$ be a data point, and let $\mathbf{m}$ be a $D$-dimensional vector. Its element $m_i$ is 1 if $x_i$ is observed (that is, $i \in \text{obs}$), and 0 if $x_i$ is missing (that is, $i \notin \text{obs}$). Then, for any $D \times D$ matrix $\mathbf{A}$ and $D \times 1$ vectors $\mathbf{b}$, $\mathbf{c}$ we have:
\[
(\mathbf{b} \odot \mathbf{m})^{\top} \mathbf{A} (\mathbf{c} \odot \mathbf{m}) = (\mathbf{b}^{\text{obs}})^{\top} \mathbf{A}^{\text{obs},\text{obs}}\, \mathbf{c}^{\text{obs}}
\]
where $\odot$ is the element-wise or Hadamard product.

² The number of parameters is also limited by $2^D$, where $D$ is the dimensionality of the data, but in practice we can assume the number of dimensions of a data set will increase with the number of points fast enough for this series to grow larger than $N$.


Proof. Expanding the left-hand side:
\[
(\mathbf{b} \odot \mathbf{m})^{\top} \mathbf{A} (\mathbf{c} \odot \mathbf{m}) = \sum_{i=1}^{D} (b_i \cdot m_i) \sum_{j=1}^{D} a_{i,j} \cdot (m_j \cdot c_j) = \sum_{i=1}^{D} b_i m_i \sum_{j=1}^{D} m_j a_{i,j} c_j = \sum_{i \in \text{obs}} b_i \sum_{j \in \text{obs}} a_{i,j}\, c_j = (\mathbf{b}^{\text{obs}})^{\top} \mathbf{A}^{\text{obs},\text{obs}}\, \mathbf{c}^{\text{obs}}
\]
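The identity is also easy to check numerically. The following is a small sanity check of Theorem 3.2.1 with NumPy; the variable names are ours.

```python
import numpy as np

rng = np.random.default_rng(0)
D = 6
A = rng.normal(size=(D, D))
b, c = rng.normal(size=D), rng.normal(size=D)
m = rng.integers(0, 2, size=D).astype(float)      # missingness indicator: 1 = observed
obs = np.flatnonzero(m)

lhs = (b * m) @ A @ (c * m)
rhs = b[obs] @ A[np.ix_(obs, obs)] @ c[obs]
assert np.allclose(lhs, rhs)
```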

Thus we can write the Mahalanobis distance term of the Gaussian in equation (3.22) as:
\[
\left(\mathbf{x}^{\text{obs}}_n - \boldsymbol{\mu}^{\text{obs}}_k\right)^{\top} \left(\left(\boldsymbol{\Lambda}_k^{-1}\right)^{\text{obs},\text{obs}}\right)^{-1} \left(\mathbf{x}^{\text{obs}}_n - \boldsymbol{\mu}^{\text{obs}}_k\right) = \left((\mathbf{x}_n - \boldsymbol{\mu}_k) \odot \mathbf{m}_n\right)^{\top} \boldsymbol{\Lambda}_k \left((\mathbf{x}_n - \boldsymbol{\mu}_k) \odot \mathbf{m}_n\right)
\]

Note that this expression, in which the missing dimensions contribute zero to the Mahalanobis distance, corresponds to the point of highest Gaussian likelihood that is compatible with the partially-observed data point.

We can remove the Hadamard product with $\mathbf{m}_n$ by setting $\mathbf{x}^{\text{mis}}_n := \boldsymbol{\mu}^{\text{mis}}_k$, so that the relevant terms are zero in the multiplication with the full precision matrix. However, this brings us back to where we started, with the density not being an exponential family.

What is possible to do, however, is to fill in the missing values of $\mathbf{x}_n$ with $\mathbb{E}[\boldsymbol{\mu}^{\text{mis}}_k] = \mathbf{m}^{\text{mis}}_k$ at the previous iteration of the coordinate descent. We write this previous value as $\mathbf{m}^{\text{mis},(t-1)}_k$.

Intuitively, this implies that, if we use the approximation just developed to compute the optimal parameters in closed form, the distance between this optimum and the true optimum will become smaller as $\mathbf{m}^{\text{mis},(t-1)}_k$ becomes closer to $\mathbf{m}^{\text{mis}}_k$. Essentially, the convergence of the other two coordinate-descent blocks will make the third block converge towards the optimum as well, even when the value it gets at the $t$th step is only an approximation to the optimum.

The form in equation (3.22) then reduces to a Gaussian-Wishart distribution, like equa-

tion (3.9). The update equations are then analogous to the ones given by Bishop (2006).


Defining $N_k = \sum_{n=1}^{N} \tau_{n,k}$ and $\bar{\mathbf{x}}_k = N_k^{-1} \sum_{n=1}^{N} \tau_{n,k} \left[\mathbf{x}^{\text{obs}}_n;\ \mathbf{m}^{\text{mis}_n,(t-1)}_k\right]$:
\[
\begin{aligned}
\beta_k &= \beta_0 + N_k \\
\nu_k &= \nu_0 + N_k \\
\mathbf{m}_k &= \frac{1}{\beta_k}\left(\beta_0 \mathbf{m}_0 + N_k \bar{\mathbf{x}}_k\right) \\
\mathbf{W}_k^{-1} &= \mathbf{W}_0^{-1} + \frac{1}{N_k} \sum_{n=1}^{N} \tau_{n,k}\left(\left[\mathbf{x}^{\text{obs}}_n;\ \mathbf{m}^{\text{mis}_n,(t-1)}_k\right] - \bar{\mathbf{x}}_k\right)\left(\left[\mathbf{x}^{\text{obs}}_n;\ \mathbf{m}^{\text{mis}_n,(t-1)}_k\right] - \bar{\mathbf{x}}_k\right)^{\top} \\
&\quad + \frac{\beta_0 N_k}{\beta_0 + N_k}\left(\bar{\mathbf{x}}_k - \mathbf{m}_0\right)\left(\bar{\mathbf{x}}_k - \mathbf{m}_0\right)^{\top}
\end{aligned} \tag{3.23}
\]
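The following NumPy sketch computes this approximate update. It is an illustration only: the missing entries of each point are filled with the previous mean $\mathbf{m}^{(t-1)}_k$, as described above, and the function signature is ours.

```python
import numpy as np

def update_eta(X, M, tau, m_prev, m0, beta0, W0_inv, nu0):
    """Approximate q(mu_k, Lambda_k) update (equation 3.23). X: (N, D) data with
    arbitrary values at missing entries; M: boolean mask, True where missing;
    tau: (N, K); m_prev: (K, D) means m_k^{(t-1)} used to fill missing values."""
    N, D = X.shape
    K = tau.shape[1]
    m_new = np.empty((K, D))
    beta = np.empty(K); nu = np.empty(K); W_inv = np.empty((K, D, D))
    for k in range(K):
        # Fill the missing values of each point with the previous mean m_k^{(t-1)}.
        X_fill = np.where(M, m_prev[k], X)
        Nk = tau[:, k].sum() + 1e-10
        xbar = (tau[:, k, None] * X_fill).sum(axis=0) / Nk
        beta[k] = beta0 + Nk
        nu[k] = nu0 + Nk
        m_new[k] = (beta0 * m0 + Nk * xbar) / beta[k]
        diff = X_fill - xbar
        S = (tau[:, k, None, None] * (diff[:, :, None] * diff[:, None, :])).sum(axis=0)
        d0 = xbar - m0
        W_inv[k] = W0_inv + S / Nk + (beta0 * Nk / (beta0 + Nk)) * np.outer(d0, d0)
    return m_new, beta, W_inv, nu
```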

3.3 Imputation

Recall the variational distribution from equation (3.8):

\[
q_{\boldsymbol{\phi}}(\mathbf{Z}, \mathbf{v}, \boldsymbol{\theta}) = \prod_{k=1}^{K-1} q_{\gamma_k}(v_k) \prod_{k=1}^{K} q_{\eta_k}(\boldsymbol{\mu}_k, \boldsymbol{\Lambda}_k) \prod_{n=1}^{N} q_{\tau_n}(\mathbf{z}_n) \tag{3.24}
\]
And the likelihood from equation (3.6):
\[
p(\mathbf{x}_n \mid \mathbf{z}_n, \boldsymbol{\theta}) = \prod_{k=1}^{\infty} \mathcal{N}\!\left(\mathbf{x}^{\text{obs}}_n \,\Big|\, \boldsymbol{\mu}^{\text{obs}_n}_k,\ \left(\boldsymbol{\Lambda}^{\text{obs}_n,\text{obs}_n}_k\right)^{-1}\right)^{z_{n,k}} \tag{3.25}
\]

We describe a semi-Bayesian procedure for performing imputation. In this case, we calculate the posterior expectation of the parameters $\boldsymbol{\theta}$, and we use the resulting graphical model to perform imputation.

We are interested in the posterior probability distribution $p(\mathbf{x}^{\text{mis}}_n \mid \mathbf{x}^{\text{obs}}_n, \mathbf{X})$. Only the likelihood $p(\mathbf{x}^{\text{mis}}_n \mid \mathbf{Z}, \boldsymbol{\theta}, \mathbf{x}^{\text{obs}}_n, \mathbf{X})$ and the probability distributions of its parameters are relevant to compute the posterior.

Observe that the likelihood does not depend directly on v. In the Dirichlet Process prior,

xn depends indirectly on v, because xn depends on zn which depends on v. However, in

the variational distribution, Z is independent of v.

Thus, v has no bearing on the likelihood function, and it is possible to discard the

parameters γ. They make the coordinate descent optimisation possible, but are in fact

useless for answering imputation queries once the model is optimised. We write qφ(Z,µ,Λ)

for the marginal distribution of the remaining RVs, which is illustrated in Figure 3.2.


Contrast this with an ordinary setting, where we receive test points x∗ that are not the

same as the ones the model was trained with. In that case, it is the variational distribution

over the per-point cluster assignments Z that becomes useless: the new point has to get a

new cluster assignment, z∗, the distribution of which depends on v.

[Figure 3.2: The posterior distribution of a POIGMM for imputation.]

3.3.1 Semi-Bayesian imputation

Given the variational posterior, we can compute the expected component parameters:
\[
(\bar{\boldsymbol{\mu}}, \bar{\boldsymbol{\Lambda}}) = \mathbb{E}_{q_{\boldsymbol{\phi}}}\left[(\boldsymbol{\mu}, \boldsymbol{\Lambda})\right] \tag{3.26}
\]
• The component means $\boldsymbol{\mu}_k$ follow a Gaussian distribution with mean $\mathbf{m}_k$. Thus $\bar{\boldsymbol{\mu}}_k = \mathbf{m}_k$.

• The component precisions $\boldsymbol{\Lambda}_k$ follow a Wishart distribution with covariance $\mathbf{W}_k$. Thus $\bar{\boldsymbol{\Lambda}}_k = \nu_k \mathbf{W}_k$.

Using these point estimates of the cluster parameters, we can compute the desired conditional distribution. For each point to impute $\mathbf{x}^{\text{mis}}_n$:
\[
\begin{aligned}
p(\mathbf{x}^{\text{mis}}_n \mid \mathbf{x}^{\text{obs}}_n, \mathbf{X}) &= \sum_{\mathbf{z}_n} p(\mathbf{x}^{\text{mis}}_n, \mathbf{z}_n \mid \bar{\boldsymbol{\Lambda}}^{-1}, \bar{\boldsymbol{\mu}}, \mathbf{x}^{\text{obs}}_n, \mathbf{X}) \\
&= \sum_{\mathbf{z}_n} p(\mathbf{x}^{\text{mis}}_n \mid \mathbf{z}_n, \mathbf{x}^{\text{obs}}_n, \bar{\boldsymbol{\mu}}, \bar{\boldsymbol{\Lambda}}^{-1})\, p(\mathbf{z}_n \mid \mathbf{x}^{\text{obs}}_n, \mathbf{X})
\end{aligned}
\]

Now we substitute the variational distribution $q_{\tau_n}(\mathbf{z}_n)$ for the true $p(\mathbf{z}_n \mid \mathbf{x}^{\text{obs}}_n, \mathbf{X})$. This is possible because, during optimisation, we already took into account the point $\mathbf{x}^{\text{obs}}_n$. Using also the likelihood from equation (3.25), we obtain:

\[
\begin{aligned}
p(\mathbf{x}^{\text{mis}}_n \mid \mathbf{x}^{\text{obs}}_n, \mathbf{X}) &= \sum_{k} \left[\prod_{j=1}^{\infty} \mathcal{N}\!\left(\mathbf{x}^{\text{mis}}_n \mid \bar{\boldsymbol{\mu}}_j, \bar{\boldsymbol{\Lambda}}^{-1}_j, \mathbf{x}^{\text{obs}}_n, \mathbf{X}\right)^{z_{n,j}}\right] q(\mathbf{z}_n = k) \\
&= \sum_{k} \tau_{n,k}\, \mathcal{N}\!\left(\mathbf{x}^{\text{mis}}_n \mid \bar{\boldsymbol{\mu}}_k, \bar{\boldsymbol{\Lambda}}^{-1}_k, \mathbf{x}^{\text{obs}}_n\right)
\end{aligned}
\]
where we implicitly removed the dependency on $\mathbf{X}$, since the conditional expression of the normal distributions does not use it.

The result is a mixture of Gaussians (Section 2.2.4). Each of the Gaussians is conditioned on $\mathbf{x}^{\text{obs}}_n$. Using the standard formula for conditional Gaussians, we get the final expression for the posterior probability of $\mathbf{x}^{\text{mis}}_n$:
\[
p(\mathbf{x}^{\text{mis}}_n \mid \mathbf{x}^{\text{obs}}_n, \mathbf{X}) = \sum_{k} \tau_{n,k}\, \mathcal{N}\!\left(\mathbf{x}^{\text{mis}}_n \,\Big|\, \mathbf{m}_{n,k},\ \left(\bar{\boldsymbol{\Lambda}}^{\text{mis},\text{mis}}_k\right)^{-1}\right) \tag{3.27}
\]
where:
\[
\mathbf{m}_{n,k} = \bar{\boldsymbol{\mu}}^{\text{mis}}_k - \left(\bar{\boldsymbol{\Lambda}}^{\text{mis},\text{mis}}_k\right)^{-1} \bar{\boldsymbol{\Lambda}}^{\text{mis},\text{obs}}_k \left(\mathbf{x}^{\text{obs}}_n - \bar{\boldsymbol{\mu}}^{\text{obs}}_k\right)
\]
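For illustration, a minimal NumPy sketch of equation (3.27) for a single point is given below; it assumes the expected parameters $\bar{\boldsymbol{\mu}}_k = \mathbf{m}_k$ and $\bar{\boldsymbol{\Lambda}}_k = \nu_k \mathbf{W}_k$ have already been formed, and the function name is ours.

```python
import numpy as np

def impute_density(x, miss, tau_n, mu_bar, Lam_bar):
    """Density imputation of one point (equation 3.27). x: (D,) with arbitrary
    values at missing entries; miss: boolean mask; tau_n: (K,) responsibilities;
    mu_bar: (K, D) expected means; Lam_bar: (K, D, D) expected precisions.
    Returns the mixture weights, means and covariances over the missing block."""
    obs = ~miss
    means, covs = [], []
    for mu, Lam in zip(mu_bar, Lam_bar):
        L_mm = Lam[np.ix_(miss, miss)]
        L_mo = Lam[np.ix_(miss, obs)]
        cov = np.linalg.inv(L_mm)
        mean = mu[miss] - cov @ L_mo @ (x[obs] - mu[obs])
        means.append(mean)
        covs.append(cov)
    return tau_n, np.array(means), np.array(covs)
```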

3.4 Summary

First we perform coordinate descent to optimise the variational parameters, and then we

compute the density imputation.

Algorithm 3 Inference and imputation with the POIGMM
Input: data set with missing data $\mathbf{X}$ of size $N \times D$, a tolerance $\varepsilon$.
Randomly initialise the parameters of the variational distribution: $\gamma^{(0)}, \eta^{(0)}, \tau^{(0)}$ (equation 3.8).
$t \leftarrow 1$
while $\lVert \tau^{(t)} - \tau^{(t-1)} \rVert + \lVert \gamma^{(t)} - \gamma^{(t-1)} \rVert + \lVert \eta^{(t)} - \eta^{(t-1)} \rVert > \varepsilon$ do
    Update $\tau^{(t)}$ (proportional to equation 3.18)
    Update $\gamma^{(t)}$ (equation 3.19)
    Update $\eta^{(t)}$ (equation 3.23)
    $t \leftarrow t + 1$
end while
Compute $\{p(\mathbf{x}^{\text{mis}}_n \mid \mathbf{x}^{\text{obs}}_n, \mathbf{X}) : \forall n\}$ using equation (3.27).
Output $\{p(\mathbf{x}^{\text{mis}}_n \mid \mathbf{x}^{\text{obs}}_n, \mathbf{X}) : \forall n\}$.
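A compact Python sketch of this loop is given below. The helpers `update_tau`, `update_gamma` and `update_eta` are the (hypothetical) routines sketched in the previous sections, and `expected_log_lik` is a hypothetical placeholder for equation (3.17); none of these are library functions, and the convergence test is simplified relative to Algorithm 3.

```python
import numpy as np

def fit_poigmm(X, M, alpha, prior, K=20, eps=1e-4, max_iter=200, seed=0):
    """Block coordinate descent for the POIGMM (Algorithm 3), as a sketch.
    `prior` holds (m0, beta0, W0_inv, nu0)."""
    rng = np.random.default_rng(seed)
    N, D = X.shape
    tau = rng.dirichlet(np.ones(K), size=N)              # random responsibilities
    gamma1, gamma2 = update_gamma(tau, alpha)
    m, beta, W_inv, nu = update_eta(X, M, tau, np.zeros((K, D)), *prior)
    for _ in range(max_iter):
        tau_old = tau
        E_log_lik = expected_log_lik(X, M, m, beta, W_inv, nu)   # equation (3.17)
        tau = update_tau(E_log_lik, gamma1, gamma2)
        gamma1, gamma2 = update_gamma(tau, alpha)
        m, beta, W_inv, nu = update_eta(X, M, tau, m, *prior)
        if np.abs(tau - tau_old).sum() < eps:            # simplified convergence test
            break
    return tau, m, beta, W_inv, nu
```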


Chapter 4

Experiments

4.1 Data sets

We employed two data sets, obtained from the mlbench (Leisch and Dimitriadou, 2010)

package for the R language. Both sets contain labelling information for each data point,

but this is not used in our experiments.

Our method expects continuous variables as inputs; however, each of the data sets contains one boolean variable. Accordingly, we encode the boolean as a floating-point 0 or 1.

BostonHousing (Harrison and Rubinfeld, 1978). Contains information about 506 tracts of land near Boston, Massachusetts, USA. The data was collected during the 1970 census.

Examples of features in it include the crime rate, the concentration of nitric oxides in the

atmosphere, and the property-tax rate. The boolean variable in it indicates whether the

tract is adjacent to the River Charles.

Ionosphere (Lichman, 2013). The data in this set represents the returning signal of

radars pointed at the ionosphere. One of the features in this data set has the same value

for all points, so it was discarded prior to the analysis.

Name                     Ionosphere   BostonHousing
n. data points           351          506
n. numerical features    32           12
n. boolean features      1            1

Table 4.1: Summary of the features of each used data set.


4.2 Methodology

None of the data sets have missing values. We created missing values according to different

mechanisms and parameters. Then, we computed each algorithm’s performance when

reconstructing the data, for several measures of performance.

Concretely, the experiments were conducted according to the following procedure. Starting

from a full data set X of N points with D dimensions:

1. Generate a data set with missing data $\mathbf{X}^{\text{mis}}$, using one of the mechanisms described in section 4.2.1. Each value that is deemed to be missing now has value $x_{i,j} = \bullet$.

2. Using only the remaining observed entries in $\mathbf{X}^{\text{mis}}$, we normalise the data set to have mean 0 and standard deviation 1. That is:

   (a) For each feature $j$, let $S_j = \{x_{i,j} : x^{\text{mis}}_{i,j} \neq \bullet,\ i = 1, \dots, N\}$.

   (b) Compute $\mu_j = \operatorname{mean}(S_j)$ and $\sigma_j = \operatorname{std\ deviation}(S_j)$.

   (c) Normalise: set $\mathbf{X}^{\text{mis}} = [(\mathbf{X}^{\text{mis}}_{:,j} - \mu_j)/\sigma_j : j = 1, \dots, D]$.

3. Using an imputation algorithm (from section 4.2.2), create $\mathbf{X}^{\text{imp}}$ by imputing $\mathbf{X}^{\text{mis}}$.

4. Recover the imputed data set by reversing the affine transform introduced to normalise: $\mathbf{X}^{\text{imp}} = [\mathbf{X}^{\text{imp}}_{:,j} \cdot \sigma_j + \mu_j : j = 1, \dots, D]$.

5. For each reconstruction metric $F$ in section 4.2.3, compute $F(\mathbf{X}^{\text{true}}, \mathbf{X}^{\text{imp}})$. Depending on the metric being called, $\mathbf{X}^{\text{imp}}$ may contain continuous values, boolean values, a list of points or probability distributions.

Each experiment was repeated several times, to calculate sample standard deviations of

the results.
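For illustration, a minimal sketch of steps 2-4 of this protocol is given below; `make_missing` and `impute` are hypothetical stand-ins for a missingness mechanism of section 4.2.1 and an imputation algorithm of section 4.2.2, and this is not the code used to produce the results.

```python
import numpy as np

def run_experiment(X_true, make_missing, impute, r=0.3):
    """One run of the protocol: create missingness, normalise on observed
    entries, impute, undo the normalisation, and return both matrices."""
    M = make_missing(X_true.shape, r)                 # boolean mask, True = missing
    X_mis = np.where(M, np.nan, X_true)
    mu = np.nanmean(X_mis, axis=0)                    # per-feature observed mean
    sigma = np.nanstd(X_mis, axis=0)                  # per-feature observed std
    X_norm = (X_mis - mu) / sigma
    X_imp = impute(X_norm, M)                         # step 3: imputation algorithm
    X_imp = X_imp * sigma + mu                        # step 4: reverse the transform
    return X_true, X_imp, M
```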

4.2.1 Creating missing data

We created 6 patterns of missing data, following different mechanisms (Section 2.4.1) and patterns. The function to create each pattern takes one argument, $r$, which is the desired overall proportion of missing data. A sketch of the first two mechanisms is given after this list.

• MCAR_total: for each entry of X, set it to missing with probability r. Recall that,

for a binomial distribution with N trials, the expected value is rN ; so in expectation

the proportion of missing data is indeed r.


• MCAR_rows: mark each row of X with probability√r. Then, for each marked row,

set a randomly selected√r fraction of the elements to missing.

• MAR_rows:

  1. For 1/5 of the features (named "controlling"), draw random coefficients $w_j$ from a uniform distribution over $[-1/2, 1/2]$.

  2. Next, for each point $i$, compute $\alpha_i = \sum_{j \in C} w_j x_{i,j}$, for the set $C$ of columns that are controlling. Order the rows by increasing $\alpha_i$, and mark the first $\frac{5}{4} N \sqrt{r}$.

  3. For each marked row, drop each non-controlling value with probability $\sqrt{r}$.

  The overall expected proportion of dropped values is $r$: 4/5 are non-controlling, of these $\frac{5}{4}\sqrt{r}$ are marked, then out of these $\sqrt{r}$ on average are dropped.

• NMAR: randomly shuffle the columns, and mark the first $\lfloor D\sqrt{r} \rfloor$ of them. From each of these columns, set the $N\sqrt{r}$ lowest values to missing.

• NMAR_random: same procedure as NMAR, except: for each column, merely mark the $\frac{7}{5} N \sqrt{r}$ lowest values. Then, randomly drop 5/7 of those.
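As an illustration of the simpler mechanisms above, here is a small NumPy sketch of MCAR_total and MCAR_rows; the function names are ours.

```python
import numpy as np

def mcar_total(shape, r, rng=np.random.default_rng(0)):
    """MCAR_total: each entry is missing independently with probability r."""
    return rng.random(shape) < r

def mcar_rows(shape, r, rng=np.random.default_rng(0)):
    """MCAR_rows: mark rows w.p. sqrt(r), then drop a sqrt(r) fraction of each."""
    N, D = shape
    M = np.zeros(shape, dtype=bool)
    marked = rng.random(N) < np.sqrt(r)
    n_drop = int(round(np.sqrt(r) * D))
    for i in np.flatnonzero(marked):
        cols = rng.choice(D, size=n_drop, replace=False)
        M[i, cols] = True
    return M
```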

4.2.2 Algorithms

The algorithms participating in the experiments are:

• mean. Impute the normalised data set with a Gaussian with mean 0 and standard

deviation 1. This one is merely a baseline, to check if using any of the algorithms is

better than doing nothing.

• MICE. The MICE algorithm, which is a kind of regression imputation (Algorithm 2)

that predicts a value using linear regression and then chooses at random from the k

closest values to the prediction. See Section 5.1.

• MissForest. The MissForest algorithm, which is also a kind of regression imputation

(Algorithm 2) that uses Random Forests. See Section 5.2.

• GMM. The Partially Observed Infinite Gaussian Mixture Model, see Chapter 3. The

prior used for the POIGMM is the non-informative one described in section 3.1.3.

4.2.3 Metrics

• Normalised Root Mean Squared Error (NRMSE; Oba et al., 2003). Based on MSE (section 2.1.3), NRMSE ranges from 0 to 1 in most cases. Its definition is:
\[
\operatorname{NRMSE}(\mathbf{v}^{\text{true}}, \mathbf{v}^{\text{imp}}) = \sqrt{\frac{\operatorname{mean}\left((\mathbf{v}^{\text{true}} - \mathbf{v}^{\text{imp}})^2\right)}{\operatorname{variance}(\mathbf{v}^{\text{true}})}} \tag{4.1}
\]

• Proportion of Falsely Classified values (PFC). The relative amount of boolean values that are wrongly classified.
\[
\operatorname{PFC}(\mathbf{v}^{\text{true}}, \mathbf{v}^{\text{imp}}) = \operatorname{mean}_i\left(\mathbb{1}\left[v^{\text{imp}}_i \neq v^{\text{true}}_i\right]\right) \tag{4.2}
\]
with $\mathbb{1}$ the indicator function (equation B.11).
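Both metrics are straightforward to compute. A minimal NumPy sketch, applied to the entries that were actually missing in one feature, could look as follows (function names ours).

```python
import numpy as np

def nrmse(v_true, v_imp):
    """Normalised RMSE (equation 4.1) over the missing entries of one feature."""
    return np.sqrt(np.mean((v_true - v_imp) ** 2) / np.var(v_true))

def pfc(v_true, v_imp):
    """Proportion of falsely classified boolean values (equation 4.2)."""
    return np.mean(v_true != v_imp)
```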

Both NRMSE and PFC were used only with single imputation, since they are biased

against multiple and density imputation. The squared error grows fast with distance, so

if a multiple imputation algorithm makes several similar guesses, and some of them are

slightly more off-track, it will be severely penalised.

In the case of MICE and MissForest, we took the average of the multiple imputations

as their answer. For POIGMMs, we computed the analytic mean: the weighted average

of the means of the Gaussians. This was found to improve the performance of all the

methods, except of course mean.

• Negative Log-likelihood (equation 2.7). To compute this metric with the POIGMM,

knowing vimp and vtrue is not sufficient: the original Xmis is needed, since the

log-likelihood of each point depends on how the values are correlated with each

other.

MICE and MissForest give only samples, so the log-likelihood cannot be directly computed.

Instead, the mean and variance of the imputations for each missing measurement are computed.

Based on that mean and variance, a Gaussian distribution with diagonal covariance matrix

is fit. Then, the log-likelihood of this Gaussian is computed.

4.3 Results and discussion

The results are all in the form of whole-page graphs. Thus, they have all been put into

Appendix A.


4.3.1 How to read the graphs

Each of the Figures A.3, A.2, A.1, A.4, A.5 has the same structure. Each figure represents

a missing data generation mechanism. Each column of plots represents a data set, and

each row a reconstruction metric. Finally, within each plot, there are several clusters

of vertical bars. The clusters indicate the proportion of missing data requested to the

missing-data-creation routine (Section 4.2.1). Each bar then represents an algorithm, the

height is their performance and the black bars are ±2σ, two standard deviations up, and

two standard deviations down. In all metrics, lower is better.

4.3.2 Discussion

We can clearly see that, most of the time, MissForest is the dominating algorithm, at least

on the NRMSE metric. Next usually comes our method, the POIGMM (here written just

GMM), then MICE, and finally the mean baseline.

The exceptions are the data sets with the NMAR_rows missing-data mechanism. This data

mechanism involves removing the N√r lowest values. Thus, frequentist methods (that

behave like MLE, Section 2.2.1), will always overestimate the mean of the missing values.

The GMM, instead, has a prior that specifies that values closer to 0 are more likely, and

thus does not incur so much error in this case. It is, however, an empty victory: all methods work roughly as badly as mean imputation here.

Where we see some improvement with respect to the other methods is, as expected,

in the negative log-likelihood of the missing data. Even when using the logarithmic

representation of numbers (Section 2.2.1), which greatly increases range between 0 and 1,

many algorithms exceeded the range. This is perhaps because the Gaussian distribution

assigns very, very low likelihood to elements that are several standard deviations away.

For future experiments, a more tolerant distribution like the Student t-distribution should

be used.

In any case, all the methods are using Gaussians. What is most surprising about the

negative log-likelihood results is that all methods (even the POIGMM) do not work very

well compared to mean imputation.

In the case of PFC, the differences between algorithms seem to be mostly noise. There is

only one boolean feature per data set and, in both data sets, the number of ones in the

feature is much larger than the number of zeroes. In fact, there is no PFC plot for any of

the NMAR data because in many experiments all the zeroes disappeared, so the problem became single-class classification.


Chapter 5

Related Work

5.1 Multiple Imputation by Chained Equations (MICE)

Multiple Imputation by Chained Equations (Van Buuren and Oudshoorn, 1999) is a

multiple imputation algorithm that is designed to work in the MAR setting. In the original

report, the algorithm is presented as a Gibbs sampling method. Gibbs sampling consists in

iteratively drawing samples the conditional distributions p(xi |X−i), for each i. The first

sample is drawn uniformly randomly from the values of the other samples in the data set.

In the limit, this is guaranteed to converge to a sample from the true posterior. However,

it is famously difficult to judge when the sequence of samples is close to converging. The

solution MICE uses is to run several Gibbs sampling procedures, round-robin style, and

stop when the variance between different sequences (which starts large) is not greater than

the variance across the past samples of the same sequence.

MICE resembles coordinate descent (algorithm 1) in the sense that it does partial condi-

tional updates that eventually converge to a solution. However, MICE is stochastic while

coordinate descent is deterministic. Furthermore, MICE does not have an optimisation

objective.

MICE is also an instance of regression imputation (Algorithm 2). The most common method used to draw from $p(x_i \mid \mathbf{X}_{-i})$ at each step consists in fitting a linear model of $x_i$ given $\mathbf{X}_{-i}$, using that linear model to predict a value $\hat{x}_i$, and then drawing a value from the $k$ observed values of $x_i$ in the data set that are closest to $\hat{x}_i$. This is known as predictive mean matching.
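A minimal sketch of predictive mean matching for one column, using a least-squares linear model in NumPy, is given below; it is an illustration only, not the MICE implementation, and the function name is ours.

```python
import numpy as np

def pmm_column(X_others, y, miss, k=5, rng=np.random.default_rng(0)):
    """Predictive mean matching for one column. X_others: (N, D-1) predictors,
    y: (N,) target column, miss: boolean mask of missing entries in y."""
    A = np.column_stack([X_others, np.ones(len(y))])
    w, *_ = np.linalg.lstsq(A[~miss], y[~miss], rcond=None)   # linear model fit
    y_hat = A @ w                                             # predictions for all rows
    y_new = y.copy()
    for i in np.flatnonzero(miss):
        # Donors: the k observed values whose predictions are closest to y_hat[i].
        donors = np.argsort(np.abs(y_hat[~miss] - y_hat[i]))[:k]
        y_new[i] = rng.choice(y[~miss][donors])
    return y_new
```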


Arguably, MICE could be called “stochastic regression imputation”, since the authors

do not specify that predictive mean matching is the partial sampling method to be used.

Instead, they postulate it as a general framework, with which any conditional sampling

procedure can be used.

5.2 MissForest: Random Forests for imputation

MissForest (Stekhoven and Buhlmann, 2011) is the current state of the art in imputation.

The algorithm is incredibly simple: regression imputation (Algorithm 2), but using Random

Forests.

Random Forests (Breiman, 2001) are a regression and classification algorithm that consists

in stochastically growing many decision trees (for example: “if x > 3 then y = 1 else if

. . . ”) and taking a majority vote of their predictions. RFs are very popular in machine

learning contests, because of their high robustness to noise, and their ability to learn with

a moderate amount of data.

Random forests, compared to our work, are best explained through the following analogy.

Gaussian mixture models (and thus the POIGMM) attempt to “envelope” the data with

one or more elliptical, Gaussian “domes”, and then derive the imputations based on the

shape and location of those domes.

In contrast, Random Forests (and thus MissForest) create many layers of partitions in the

observation space, assigning a label or correlated output to each partition. Then, hundreds

of these layers are combined via majority (in classification) or mean (in regression) votes.

Perhaps the extra flexibility of these hundreds of layers of space partitions is required to

bridge the performance gap between the POIGMM and MissForest.

MissForest does not have any kind of convergence guarantees. Indeed, the stopping

criterion for regression imputation with RFs is to stop when the estimate starts to diverge.

That is, MissForest stops the regression imputation procedure when the difference between

the current imputed matrix and the previous one is larger than the difference between the

previous imputed matrix and the one that came before it.

Additionally, Stekhoven and Buhlmann (2011) only show empirically that MissForest works with data that is MCAR. However, in this thesis we found that it also works better than the other methods for data that is MAR and NMAR.


Chapter 6

Conclusions

The goal of this project was, nearly from the beginning, to create a density imputation

algorithm that would improve the state of the art of standard multiple imputation methods.

In some ways this goal has been achieved; in many others it has not. Indeed, our approach

seems to be better than others at representing uncertainty. However, it is likely that, in

many cases, this does not compensate for worse prediction accuracy. As an example, in

our experiments, often the second best method for the log-likelihood metric was simply

mean imputation. On the other hand, we did improve on most metrics compared to the

MICE algorithm, which still sees wide use.

6.1 Lessons Learned

One of the main mistakes to learn from is not starting to write earlier: not because of

lack of time, but because committing ideas to paper, in a more involved way than quickly

scribbled research notes, forces you to think about their shortcomings and implications in

a detailed way. Thus by writing, many good ideas come to mind, and it is a waste to not

be able to test and use them.

Another mistake is that of not committing to a topic at the beginning of the project

and then going deeply into it. This project included several algorithm designs and

implementations that did not appear in this report. This is partly due to them being less effective, and partly due to restrictions on the number of pages and on time. At the beginning,

the goal was to study time-series data using neural networks, exactly like the Met Office

example from the introduction. Then, the goal became imputation in i.i.d. settings; we

attempted auto-encoders, multi-layer perceptrons (neural networks), mixtures of Gaussians

(the final subject of this thesis) and, in the end, Gaussian processes.


6.2 Future Directions

There are many ways to extend the research that is written in this document. The first

and most obvious one, and actually one that we have partially done already, is to derive a

correct variational inference algorithm that provably minimises the KL-divergence. This

is likely possible to do with gradient descent, or perhaps hybrid coordinate and gradient

descent. Gradient descent does not require analytically available partial globally optimal

updates. As a potential positive side effect, much larger data sets could be handled, by

using stochastic gradient descent and only accounting for a small subset of the points at

every update.

Components of different types, not just Gaussians, could be added. With every component

being a Gaussian multiplied by a Multinomial, for example, it would become possible to

perform imputation of mixed categorical and numerical data sets. If it proves possible to

have several types of components, a combination of Gaussians and Dirichlet distributions

would likely make it easier to model densities accurately.

Another line of extension would be to include the conditional distribution of the missing

values into variational inference. This would allow us to take into account all the uncertainty

we sidestepped in Section 3.3. It is possible that this has been done already, since it

would be the Bayesian counterpart to the Expectation Maximisation type of imputation

algorithms (Gold and Bentler, 2000).

Finally, and perhaps most importantly, it would be interesting to verify by extensive

experiments the potential benefits of density imputation described in the introduction. The

presented POIGMM algorithm, combined with the machine learning methods mentioned

there, would be enough to do that.


Appendix A

Experiments

[Figure A.1: NMAR_random missingness (Section 4.2.1). Panels: NRMSE and norm_log_l against the proportion of missing data (0.1, 0.3, 0.5) on BostonHousing and Ionosphere; bars compare GMM, MICE, Missforest_mult and mean. In all metrics, lower is better.]


[Figure A.2: NMAR missingness (Section 4.2.1). Panels: NRMSE and norm_log_l against the proportion of missing data (0.1, 0.3, 0.5) on BostonHousing and Ionosphere; bars compare GMM, MICE, Missforest_mult and mean. In all metrics, lower is better.]


[Figure A.3: MAR_rows missingness (Section 4.2.1). Panels: NRMSE, norm_log_l and PFC against the proportion of missing data (0.1, 0.3, 0.5) on BostonHousing and Ionosphere; bars compare GMM, MICE, Missforest_mult and mean. In all metrics, lower is better.]


[Figure A.4: MCAR_total missingness (Section 4.2.1). Panels: NRMSE, norm_log_l and PFC against the proportion of missing data (0.1, 0.3, 0.5) on BostonHousing and Ionosphere; bars compare GMM, MICE, Missforest_mult and mean. In all metrics, lower is better.]


[Figure A.5: MCAR_rows missingness (Section 4.2.1). Panels: NRMSE, norm_log_l and PFC against the proportion of missing data (0.1, 0.3, 0.5) on BostonHousing and Ionosphere; bars compare GMM, MICE, Missforest_mult and mean. In all metrics, lower is better.]


Appendix B

Mathematical definitions

B.1 Probability distributions

B.1.1 Gaussian-Wishart

It is a distribution over a $D$-dimensional vector and a $D \times D$ PSD matrix. Its expression is (Bishop, 2006):
\[
\mathcal{NW}(\boldsymbol{\mu}, \boldsymbol{\Lambda} \mid \mathbf{m}, \beta, \mathbf{W}, \nu) = \mathcal{N}\!\left(\boldsymbol{\mu} \mid \mathbf{m}, (\beta\boldsymbol{\Lambda})^{-1}\right) \mathcal{W}(\boldsymbol{\Lambda} \mid \mathbf{W}, \nu) \tag{B.1}
\]
with $\mathcal{N}$ as in equation (2.16) and Wishart distribution $\mathcal{W}$ as in equation (2.18). Note that the Gaussian's precision depends on the output of the Wishart.

The Gaussian-Wishart is the conjugate distribution (section 2.2.6) of the Gaussian. It is also an exponential family (section 2.2.6):
\[
\mathcal{NW}(\boldsymbol{\mu}, \boldsymbol{\Lambda} \mid \mathbf{m}, \beta, \mathbf{W}, \nu) = \exp\left(g(\mathbf{m}, \beta, \mathbf{W}, \nu) \cdot f(\boldsymbol{\mu}, \boldsymbol{\Lambda}) - h(\mathbf{m}, \beta, \mathbf{W}, \nu)\right)
\]
where $f$ and $g$ are vectors, $\cdot$ is their dot product, and:
\[
f(\boldsymbol{\mu}, \boldsymbol{\Lambda}) = \left[\log|\boldsymbol{\Lambda}|;\ \boldsymbol{\mu}^{\top}\boldsymbol{\Lambda}\boldsymbol{\mu};\ \boldsymbol{\Lambda}\boldsymbol{\mu};\ \operatorname{vec}(\boldsymbol{\Lambda})\right] \tag{B.2}
\]
\[
g(\mathbf{m}, \beta, \mathbf{W}, \nu) = \left[\tfrac{\nu - D}{2};\ \beta;\ -2\beta\mathbf{m};\ \operatorname{vec}\!\left(\beta\mathbf{m}\mathbf{m}^{\top} + \mathbf{W}^{-1}\right)\right] \tag{B.3}
\]
\[
h(\mathbf{m}, \beta, \mathbf{W}, \nu) = \log\left[2^{(\nu+1)D/2}\, \pi^{D(D+1)/4}\, \beta^{-D/2}\, |\mathbf{W}|^{\nu/2} \prod_{i=1}^{D} \Gamma\!\left(\frac{\nu + 1 - i}{2}\right)\right] \tag{B.4}
\]


Derivation of the exponential family form

According to Teh (2007), the Gaussian-Wishart distribution can be written as follows:
\[
\mathcal{NW}(\boldsymbol{\mu}, \boldsymbol{\Lambda} \mid \mathbf{m}, \beta, \mathbf{W}, \nu) = h(\mathbf{m}, \beta, \mathbf{W}, \nu) \exp\left(f(\boldsymbol{\mu}, \boldsymbol{\Lambda}, \mathbf{m}, \beta, \mathbf{W}, \nu)\right)
\]
with $h$ given in section B.1.1 and:
\[
f(\cdot) = \frac{\nu - D}{2} \log|\boldsymbol{\Lambda}| - \frac{1}{2} \operatorname{tr}\!\left(\boldsymbol{\Lambda}\left(\beta\boldsymbol{\mu}\boldsymbol{\mu}^{\top} - 2\boldsymbol{\mu}(\beta\mathbf{m})^{\top} + \beta\mathbf{m}\mathbf{m}^{\top} + \mathbf{W}^{-1}\right)\right)
\]
We take $(\log|\boldsymbol{\Lambda}|)/2$ as the first sufficient statistic. Observe that the trace on the right can be written as:
\[
\operatorname{tr}(\cdot) = \beta\boldsymbol{\mu}^{\top}\boldsymbol{\Lambda}\boldsymbol{\mu} - (2\beta\mathbf{m})^{\top}\boldsymbol{\Lambda}\boldsymbol{\mu} + \operatorname{tr}\!\left(\boldsymbol{\Lambda}\left(\beta\mathbf{m}\mathbf{m}^{\top} + \mathbf{W}^{-1}\right)\right)
\]
By inspection, we can separate the natural parameters from the sufficient statistics. The result is in section B.1.1.

B.1.2 The Beta distribution

The Beta distribution has support $0 \leq x \leq 1$, for $x \in \mathbb{R}$. A variable $x$ is Beta distributed with shape parameters $\alpha$ and $\beta \in \mathbb{R}$ if:
\[
p(x \mid \alpha, \beta) \propto x^{\alpha-1}(1-x)^{\beta-1} \tag{B.5}
\]
with normalisation constant, known as the Beta function:
\[
B(\alpha, \beta) = \frac{\Gamma(\alpha + \beta)}{\Gamma(\alpha)\Gamma(\beta)} \tag{B.6}
\]

B.1.3 Multinomial Distribution

Let $\mathbf{x}$ be a random $K$-dimensional one-hot vector. Then the multinomial distribution assigns probability $w_k$ to the vector having the $k$th dimension equal to 1:
\[
p(\mathbf{x}) = \prod_{k=1}^{K} w_k^{x_k} \tag{B.7}
\]


B.2 Functions

Gamma function

The $\Gamma$ function is defined as the integral:
\[
\Gamma(x) = \int_0^{\infty} z^{x-1} e^{-z}\, dz \tag{B.8}
\]

It can be thought of as a generalisation of factorials to all the real numbers.

Digamma function

The Digamma function is the first derivative of the logarithm of the Gamma function. It

is usually calculated using a series approximation.

\[
\Psi(x) = \frac{d}{dx} \log \Gamma(x) \tag{B.9}
\]

Dirac delta

The Dirac delta is an infinite impulse function at the coordinate origin. It has the following properties:
\[
\int_{\mathbb{R}^D} \delta(\mathbf{x})\, d\mathbf{x} = 1 \qquad \int_{\mathbb{R}^D} f(\mathbf{x})\, \delta(\mathbf{x} - \mathbf{x}_0)\, d\mathbf{x} = f(\mathbf{x}_0) \tag{B.10}
\]

Indicator function

Let $p$ be a predicate. The indicator function $\mathbb{1}[p]$ is:
\[
\mathbb{1}[p] = \begin{cases} 0 & \text{if } p \text{ is false} \\ 1 & \text{if } p \text{ is true} \end{cases} \tag{B.11}
\]

B.3 Linear Algebra

Positive semidefinite (PSD) matrix

A matrix $\mathbf{M}$ is PSD if and only if, for all vectors of the appropriate dimensionality $\mathbf{x} \neq \mathbf{0}$, $\mathbf{x}^{\top}\mathbf{M}\mathbf{x} \geq 0$. PSD matrices are always Hermitian, that is, symmetric and with real values.


Appendix C

Extra derivations

This appendix contains the derivation of some results that are long, or not relevant for the

experiments carried out.

C.1 Expectation of the logarithm of the normal

We wish to compute the value of $\mathbb{E}_{\mathcal{W}_k}\mathbb{E}_{\mathcal{N}_k}[\log \mathcal{N}_{n,k}]$. That is, the expected value under a Gaussian-Wishart distribution, as defined in equation (3.9), of the logarithm of the normal distribution $\mathcal{N}_{n,k}$, which is a distribution over a $|\text{obs}|$-dimensional subset of the mean of $\mathcal{NW}_k$, such as the one above Equation (3.16).

We set $\boldsymbol{\Sigma}_k = \boldsymbol{\Lambda}_k^{-1}$ and apply the logarithm:
\[
\begin{aligned}
&\mathbb{E}_{\mathcal{W}_k}\mathbb{E}_{\mathcal{N}_k}\!\left[-\log \mathcal{N}_n\!\left(\mathbf{x}^{\text{obs}}_n \mid \boldsymbol{\mu}^{\text{obs}_n}_k, \boldsymbol{\Sigma}^{\text{obs}_n,\text{obs}_n}_k\right)\right] \\
&\quad= \mathbb{E}_{\mathcal{W}_k(\boldsymbol{\Sigma}_k \mid \mathbf{W}_k, \nu_k)}\!\Bigg[\tfrac{1}{2}\log\left|\boldsymbol{\Sigma}^{\text{obs}_n,\text{obs}_n}_k\right| \\
&\qquad+ \mathbb{E}_{\mathcal{N}_k(\boldsymbol{\mu}^{\text{obs}_n}_k \mid \mathbf{m}^{\text{obs}_n}_k,\, \beta_k^{-1}\boldsymbol{\Sigma}^{\text{obs}_n,\text{obs}_n}_k)}\!\left[-\tfrac{1}{2}\left(\mathbf{x}^{\text{obs}}_n - \boldsymbol{\mu}^{\text{obs}_n}_k\right)^{\top} \beta_k \left(\boldsymbol{\Sigma}^{\text{obs}_n,\text{obs}_n}_k\right)^{-1} \left(\mathbf{x}^{\text{obs}}_n - \boldsymbol{\mu}^{\text{obs}_n}_k\right)\right]\Bigg]
\end{aligned} \tag{C.1}
\]
Focusing on the second expectation term, we change the random variable to $\boldsymbol{\mu}^{\text{obs}_n}_k = \mathbf{y} + \mathbf{m}^{\text{obs}_n}_k$:
\[
\begin{aligned}
&\mathbb{E}_{\mathcal{N}_k(\mathbf{y} \mid \mathbf{0},\, \beta_k^{-1}\boldsymbol{\Sigma}^{\text{obs}_n,\text{obs}_n}_k)}\!\left[-\tfrac{\beta_k}{2}\left(\mathbf{x}^{\text{obs}}_n - \mathbf{y} - \mathbf{m}^{\text{obs}_n}_k\right)^{\top} \left(\boldsymbol{\Sigma}^{\text{obs}_n,\text{obs}_n}_k\right)^{-1} \left(\mathbf{x}^{\text{obs}}_n - \mathbf{y} - \mathbf{m}^{\text{obs}_n}_k\right)\right] \\
&\quad= -\tfrac{\beta_k}{2}\left(\mathbf{x}^{\text{obs}}_n - \mathbf{m}^{\text{obs}_n}_k\right)^{\top} \left(\boldsymbol{\Sigma}^{\text{obs}_n,\text{obs}_n}_k\right)^{-1} \left(\mathbf{x}^{\text{obs}}_n - \mathbf{m}^{\text{obs}_n}_k\right) \\
&\qquad+ \mathbb{E}_{\mathcal{N}_k(\mathbf{y} \mid \mathbf{0},\, \beta_k^{-1}\boldsymbol{\Sigma}^{\text{obs}_n,\text{obs}_n}_k)}\!\left[-\tfrac{\beta_k}{2}\,\mathbf{y}^{\top}\!\left(\boldsymbol{\Sigma}^{\text{obs}_n,\text{obs}_n}_k\right)^{-1}\mathbf{y} + \beta_k\,\mathbf{y}^{\top}\!\left(\boldsymbol{\Sigma}^{\text{obs}_n,\text{obs}_n}_k\right)^{-1}\left(\mathbf{x}^{\text{obs}}_n - \mathbf{m}^{\text{obs}_n}_k\right)\right]
\end{aligned}
\]
By linearity of expectation and since $\mathbb{E}_{\mathcal{N}_k}[\mathbf{y}] = \mathbf{0}$, the expectation term is:
\[
\mathbb{E}_{\mathcal{N}_k(\mathbf{y} \mid \mathbf{0},\, \beta_k^{-1}\boldsymbol{\Sigma}^{\text{obs}_n,\text{obs}_n}_k)}\!\left[-\tfrac{\beta_k}{2}\,\mathbf{y}^{\top}\!\left(\boldsymbol{\Sigma}^{\text{obs}_n,\text{obs}_n}_k\right)^{-1}\mathbf{y}\right]
\]
Let $\mathbf{K} = (\beta_k^{-1/2}\mathbf{L})(\beta_k^{-1/2}\mathbf{L})^{\top} = \beta_k^{-1}\boldsymbol{\Sigma}^{\text{obs}_n,\text{obs}_n}_k$. $\mathbf{K}$ is the covariance matrix of the distribution $\mathbf{y}$ follows, so $\mathbf{y} = \beta_k^{-1/2}\mathbf{L}\mathbf{z}$ for $\mathbf{z} \sim \mathcal{N}(\mathbf{0}, \mathbf{I})$ a normally distributed vector. Also $\left(\boldsymbol{\Sigma}^{\text{obs}_n,\text{obs}_n}_k\right)^{-1} = (\mathbf{L}^{-1})^{\top}\mathbf{L}^{-1}$. We rewrite the above expectation:
\[
\mathbb{E}_{\mathcal{N}(\mathbf{z} \mid \mathbf{0}, \mathbf{I})}\!\left[-\tfrac{\beta_k}{2}\left(\beta_k^{-1/2}\mathbf{L}\mathbf{z}\right)^{\top}(\mathbf{L}^{-1})^{\top}\mathbf{L}^{-1}\left(\beta_k^{-1/2}\mathbf{L}\mathbf{z}\right)\right] = \mathbb{E}_{\mathcal{N}(\mathbf{z} \mid \mathbf{0}, \mathbf{I})}\!\left[-\tfrac{1}{2}\mathbf{z}^{\top}\mathbf{z}\right] = -\tfrac{1}{2}|\text{obs}|
\]
since $\mathbf{z}^{\top}\mathbf{z}$ follows a $\chi^2$ distribution with $|\text{obs}|$ degrees of freedom.

We substitute the terms we just derived into equation (C.1). Recall that we are deriving an expression for optimising a KL-divergence, so we can ignore the terms that do not depend on the variational parameters.
\[
\mathbb{E}_{\mathcal{W}_k}\mathbb{E}_{\mathcal{N}_k}\!\left[\log \mathcal{N}_n\right] = \mathbb{E}_{\mathcal{W}_k(\boldsymbol{\Sigma}_k \mid \mathbf{W}_k, \nu_k)}\!\left[-\tfrac{1}{2}\log\left|\boldsymbol{\Sigma}^{\text{obs}_n,\text{obs}_n}_k\right| - \tfrac{\beta_k}{2}\left(\mathbf{x}^{\text{obs}}_n - \mathbf{m}^{\text{obs}_n}_k\right)^{\top}\left(\boldsymbol{\Sigma}^{\text{obs}_n,\text{obs}_n}_k\right)^{-1}\left(\mathbf{x}^{\text{obs}}_n - \mathbf{m}^{\text{obs}_n}_k\right)\right] + C
\]
By Theorem 2.3.1, $\boldsymbol{\Sigma}^{\text{obs}_n,\text{obs}_n}_k$ follows a Wishart distribution with covariance $\mathbf{W}^{\text{obs}_n,\text{obs}_n}_k$ and degrees of freedom $\nu_k - D + |\text{obs}|$. Using equations (2.20 and 2.21) the resulting expectation is:
\[
\begin{aligned}
\mathbb{E}_{\mathcal{W}_k}\mathbb{E}_{\mathcal{N}_k}\!\left[\log \mathcal{N}_n\right] &= \tfrac{1}{2}\log\left|\mathbf{W}^{\text{obs}_n,\text{obs}_n}_k\right| + \tfrac{1}{2}\sum_{i=1}^{|\text{obs}_n|} \Psi\!\left(\frac{\nu_k - D + |\text{obs}| + 1 - i}{2}\right) \\
&\quad - \frac{\beta_k(\nu_k - D + |\text{obs}|)}{2}\left(\mathbf{x}^{\text{obs}}_n - \mathbf{m}^{\text{obs}_n}_k\right)^{\top}\mathbf{W}^{\text{obs}_n,\text{obs}_n}_k\left(\mathbf{x}^{\text{obs}}_n - \mathbf{m}^{\text{obs}_n}_k\right) + C
\end{aligned}
\]


Bibliography

Bishop, Christopher M. (2006). Pattern Recognition and Machine Learning. Information

Science and Statistics. Springer. isbn: 9780387310732.

Blei, David M, Michael I Jordan, et al. (2006). “Variational inference for Dirichlet process

mixtures”. In: Bayesian analysis 1.1, pp. 121–143.

Breiman, Leo (2001). “Random forests”. In: Machine learning 45.1, pp. 5–32.

Che, Zhengping et al. (2016). “Recurrent neural networks for multivariate time series with

missing values”. In: arXiv preprint arXiv:1606.01865.

Cinar, Yagmur G et al. (2017). “Time Series Forecasting using RNNs: an Extended

Attention Mechanism to Model Periods and Handle Missing Values”. In: arXiv preprint

arXiv:1703.10089.

Damianou, Andreas C, Michalis K Titsias, and Neil D Lawrence (2016). “Variational

inference for latent variables and uncertain inputs in Gaussian processes”. In: The

Journal of Machine Learning Research 17.1, pp. 1425–1486.

Dilokthanakul, Nat et al. (2016). “Deep unsupervised clustering with gaussian mixture

variational autoencoders”. In: arXiv preprint arXiv:1611.02648.

Gold, Michael Steven and Peter M Bentler (2000). “Treatments of missing data: A Monte

Carlo comparison of RBHDI, iterative stochastic regression imputation, and expectation-

maximization”. In: Structural Equation Modeling 7.3, pp. 319–355.

Grippo, L. and M. Sciandrone (2000). “On the convergence of the block nonlinear Gauss–

Seidel method under convex constraints”. In: Operations Research Letters 26.3, pp. 127

–136. issn: 0167-6377. doi: http://dx.doi.org/10.1016/S0167-6377(99)00074-7.

Harrison, David and Daniel L Rubinfeld (1978). “Hedonic housing prices and the demand

for clean air”. In: Journal of environmental economics and management 5.1, pp. 81–102.

Horn, Roger A. and Charles R. Johnson (2012). Matrix Analysis. 2nd ed. Cambridge

University Press. doi: 10.1017/9781139020411.

Iwata, Tomoharu, David Duvenaud, and Zoubin Ghahramani (2012). “Warped mixtures

for nonparametric cluster shapes”. In: arXiv preprint arXiv:1206.1846.


Johnson, Matthew et al. (2016). “Composing graphical models with neural networks for

structured representations and fast inference”. In: Advances in Neural Information

Processing Systems 29. Ed. by D. D. Lee et al. Curran Associates, Inc., pp. 2946–2954.

Leisch, Friedrich and Evgenia Dimitriadou (2010). mlbench: Machine Learning Benchmark

Problems. R package version 2.1-1.

Lichman, M. (2013). UCI Machine Learning Repository. url: http://archive.ics.uci.edu/ml.

Little, Roderick J. A. and Donald B. Rubin (1989). “The Analysis of Social Science Data

with Missing Values”. In: Sociological Methods & Research 18.2-3, pp. 292–326. doi:

10.1177/0049124189018002004.

Little, Roderick J.A. and Donald B Rubin (2002). Statistical analysis with missing data.

2nd ed. John Wiley & Sons.

Met Office Weather Stations. http://www.metoffice.gov.uk/learning/science/first-

steps/observations/weather-stations. Last retrieved: 26 August 2017. Link to Internet

Archive.

Muirhead, R.J. (1982). Aspects of Multivariate Statistical Theory. Wiley Series in Proba-

bility and Statistics. Wiley. isbn: 9780471094425.

Nebot-Troyano, Guillermo and Lluís A Belanche-Muñoz (2010). “A kernel extension to

handle missing data”. In: Research and Development in Intelligent Systems XXVI.

Springer, pp. 165–178.

Oba, Shigeyuki et al. (2003). “A Bayesian missing value estimation method for gene expres-

sion profile data”. In: Bioinformatics 19.16, pp. 2088–2096. doi: 10.1093/bioinformatics/

btg287.

Raghunathan, Trivellore E et al. (2001). “A multivariate technique for multiply imputing

missing values using a sequence of regression models”. In: Survey methodology 27.1,

pp. 85–96.

Rubin, Donald B (2004). Multiple imputation for nonresponse in surveys. Vol. 81. John

Wiley & Sons.

Sethuraman, Jayaram (1994). “A constructive definition of Dirichlet priors”. In: Statistica

sinica, pp. 639–650.

Srivastava, Nitish et al. (2014). “Dropout: a simple way to prevent neural networks from

overfitting.” In: Journal of machine learning research 15.1, pp. 1929–1958.

Stekhoven, Daniel J and Peter Buhlmann (2011). “MissForest—non-parametric missing

value imputation for mixed-type data”. In: Bioinformatics 28.1, pp. 112–118.

Teh, Yee Whye (2007). Exponential Families: Gaussian, Gaussian-Gamma, Gaussian-

Wishart, Multinomial.

Van Buuren, Stef and Karin Oudshoorn (1999). Flexible multivariate imputation by MICE.

Leiden: TNO.


Zio, Marco Di, Ugo Guarnera, and Orietta Luzi (2007). “Imputation through finite Gaussian

mixture models”. In: Computational Statistics & Data Analysis 51.11. Advances in

Mixture Models, pp. 5305 –5316. issn: 0167-9473. doi: http://dx.doi.org/10.1016/j.

csda.2006.10.002.

