
Project 11: Determining the Intrinsic Dimensionality of a

Distribution

Okke Formsma, Nicolas Roussis and Per Løwenborg

Outline

• About the project
• What is intrinsic dimensionality?
• How can we assess the ID?
  – PCA
  – Neural Network
  – Nearest Neighbour

• Experimental Results

Why did we choose this project?

• We wanted to learn more about developing and experimenting with algorithms for analyzing high-dimensional data

• We wanted to see how we could implement this in a program

Papers

N. Kambhatla, T. Leen, “Dimension Reduction by Local Principal Component Analysis”

J. Bruske and G. Sommer, “Intrinsic Dimensionality Estimation with Optimally Topology Preserving Maps”

P. Verveer, R. Duin, “An evaluation of intrinsic dimensionality estimators”

How does dimensionality reduction influence our lives?

• Compressing images, audio and video
• Reducing noise
• Editing
• Reconstruction

This is an image going through different steps of a reconstruction

Intrinsic Dimensionality

The number of ‘free’ parameters needed to generate a pattern

Ex:
• f(x) = -x² => 1-dimensional
• f(x, y) = -x² => 1-dimensional (y is not a free parameter)

PRINCIPAL COMPONENT ANALYSIS

Principal Component Analysis (PCA)

• The classic technique for linear dimension reduction.

• It is a vector space transformation which reduces multidimensional data sets to lower dimensions for analysis.

• It is a way of identifying patterns in data, and expressing the data in such a way as to highlight their similarities and differences.

Advantages of PCA

• Patterns can be hard to find in data of high dimension, where the luxury of graphical representation is not available, which makes PCA a powerful tool for analysing such data.

• Once you have found these patterns in the data, you can compress the data by reducing the number of dimensions without much loss of information.
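As a rough illustration, here is a minimal NumPy sketch of this idea (the function and variable names are ours, not from the papers): project the data onto its top principal components, then reconstruct it.

```python
import numpy as np

def pca_compress(X, n_components):
    """Project X onto its top principal components and reconstruct it.
    X: (n_samples, n_features). Names here are illustrative."""
    mean = X.mean(axis=0)
    Xc = X - mean
    cov = np.cov(Xc, rowvar=False)                 # second-order statistics
    eigvals, eigvecs = np.linalg.eigh(cov)         # ascending eigenvalues
    order = np.argsort(eigvals)[::-1]              # largest first
    components = eigvecs[:, order[:n_components]]  # principal directions
    scores = Xc @ components                       # compressed representation
    reconstruction = scores @ components.T + mean  # back in the original space
    return scores, reconstruction

# 500 noisy points lying close to a line in 3D: one component suffices.
rng = np.random.default_rng(0)
t = rng.normal(size=(500, 1))
X = t @ np.array([[1.0, 2.0, -1.0]]) + 0.05 * rng.normal(size=(500, 3))
scores, X_hat = pca_compress(X, n_components=1)
print(np.mean((X - X_hat) ** 2))   # small reconstruction error
```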

Example

Problems with PCA

• PCA relies on second-order statistics (correlation), so when the structure of the data is not captured by correlations it sometimes fails to find the most compact description of the data.

Problems with PCA

First eigenvector

Second eigenvector

A better solution?

Local eigenvector

Local eigenvectors

Local eigenvectors

Another problem

Is this the principal eigenvector?

Or do we need more than one?

Choose

The answer depends on your application

[Figure: low-resolution vs. high-resolution representation]

Challenges

• How to partition the space?• How many partitions should we use?• How many dimensions should we retain?

How to partition the space?

Vector Quantization

Lloyd Algorithm
Partition the space into k sets
Repeat until convergence:
  Calculate the centroid of each set
  Associate each point with the nearest centroid
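A minimal NumPy sketch of the Lloyd algorithm as outlined above (the names and the convergence test are illustrative choices):

```python
import numpy as np

def lloyd(points, k, n_iter=100, seed=0):
    """Lloyd's algorithm: returns centroids and the set index of each point.
    points: (n, d) array. Assumes no set ever becomes empty."""
    rng = np.random.default_rng(seed)
    # Step 1: randomly assign each point to one of the k sets.
    assignments = rng.integers(0, k, size=len(points))
    for _ in range(n_iter):
        # Step 2: calculate the centroid of each set.
        centroids = np.array([points[assignments == j].mean(axis=0)
                              for j in range(k)])
        # Step 3: associate each point with the nearest centroid.
        distances = np.linalg.norm(points[:, None, :] - centroids[None, :, :], axis=2)
        new_assignments = distances.argmin(axis=1)
        if np.array_equal(new_assignments, assignments):  # converged
            break
        assignments = new_assignments
    return centroids, assignments
```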

Lloyd Algorithm (illustrated on two sets)

[Figures: Step 1: randomly assign the points to Set 1 and Set 2. Step 2: calculate the centroids. Step 3: associate each point with the nearest centroid. Steps 2 and 3 repeat; result shown after 2 iterations.]

How many partitions should we use?

Bruske & Sommer: “just try them all”

For k = 1 to k ≤ dimension(set):
  Subdivide the space into k regions
  Perform PCA on each region
  Retain the significant eigenvalues per region
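A sketch of one pass of this loop for a single k, assuming a generic k-means partitioner (SciPy's kmeans2 here; the Lloyd sketch above would work as well) and that every region contains enough points:

```python
import numpy as np
from scipy.cluster.vq import kmeans2   # the Lloyd sketch above could be used instead

def local_eigenvalue_spectra(X, k):
    """Partition X into k regions, run PCA per region, and return each
    region's eigenvalue spectrum (largest first). Names are illustrative."""
    centroids, labels = kmeans2(X, k, minit='++')
    spectra = []
    for j in range(k):
        region = X[labels == j]
        cov = np.cov(region - region.mean(axis=0), rowvar=False)
        spectra.append(np.linalg.eigvalsh(cov)[::-1])  # descending eigenvalues
    return spectra

# "Just try them all": one list of spectra per region, for every k up to dim(X).
# all_spectra = {k: local_eigenvalue_spectra(X, k) for k in range(1, X.shape[1] + 1)}
```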

Which eigenvalues are significant?

Depends on:
• Intrinsic dimensionality
• Curvature of the surface
• Noise

Which eigenvalues are significant?

Discussed in class:
• Largest-n

In papers:
• Cutoff after normalization (Bruske & Sommer)
• Statistical method (Verveer & Duin)

Which eigenvalues are significant?

Cutoff after normalization: let µ_x be the xth eigenvalue, sorted in descending order. An eigenvalue µ_i is retained as significant if

µ_i / µ_max ≥ α%

with α = 5, 10 or 20.
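A small sketch applying this cutoff to an eigenvalue spectrum (the function name and default are illustrative, and the criterion is as we read it from the slide):

```python
import numpy as np

def significant_count(eigvals, alpha=10):
    """Count eigenvalues that are at least alpha% of the largest eigenvalue."""
    eigvals = np.sort(np.asarray(eigvals))[::-1]        # descending order
    return int(np.sum(eigvals / eigvals[0] >= alpha / 100.0))

print(significant_count([4.0, 1.2, 0.05, 0.01], alpha=10))  # -> 2
```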

Which eigenvalues are significant?

Statistical method (Verveer & Duin)

Calculate the reconstruction error on the data if the lowest eigenvalue is dropped

Decide whether this increase in error is significant
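A simplified sketch of the idea, not the exact test from Verveer & Duin: for PCA, the mean squared reconstruction error when keeping the top d components equals the sum of the discarded eigenvalues, so dropping the lowest retained eigenvalue increases the error by exactly that eigenvalue.

```python
import numpy as np

def reconstruction_errors(eigvals, d):
    """Reconstruction MSE with d components, and after dropping one more.
    eigvals: PCA eigenvalues; d: number of components currently retained."""
    eigvals = np.sort(np.asarray(eigvals))[::-1]
    err_keep_d = eigvals[d:].sum()            # error with d components
    err_keep_d_minus_1 = eigvals[d - 1:].sum()  # error after dropping the lowest
    return err_keep_d, err_keep_d_minus_1

err_d, err_dm1 = reconstruction_errors([4.0, 1.2, 0.05, 0.01], d=2)
# If the jump from err_d to err_dm1 is judged insignificant, reduce d and repeat.
```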

Results

• One dimensional space, embedded in 256*256 = 65,536 dimensions

• 180 images of a rotating cylinder

• ID = 1

Results

NEURAL NETWORK PCA

Basic Computational Element - Neuron

• Inputs/Outputs, Synaptic Weights, Activation Function
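In code, a single neuron reduces to a weighted sum of the inputs plus a bias, passed through an activation function (a sketch with illustrative numbers):

```python
import numpy as np

def neuron(x, w, b, activation=np.tanh):
    """One neuron: activation of the weighted input sum plus bias."""
    return activation(np.dot(w, x) + b)

y = neuron(x=np.array([0.5, -1.0, 2.0]), w=np.array([0.1, 0.4, -0.3]), b=0.2)
```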

3-Layer Autoassociators

• N input, N output and M < N hidden neurons.
• Drawback of this model: the optimal solution remains the (linear) PCA projection.

5-Layer Autoassociators

• Neural network approximators for principal surfaces, using 5 layers of neurons.

• A global, non-linear dimension reduction technique.

• Nonlinear PCA has been successfully implemented with these networks for image and speech dimension reduction and for obtaining concise representations of color.

• The third layer carries the dimension-reduced representation and has width M < N.

• Linear functions are used for the representation layer.

• The networks are trained to minimize an MSE training criterion.

• They act as approximators of principal surfaces.
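A minimal PyTorch sketch of such a 5-layer autoassociator (the layer widths, activation and optimiser are assumptions for illustration, not the configuration used in the papers):

```python
import torch
import torch.nn as nn

N, H, M = 64, 32, 2  # input width, hidden width, bottleneck width (illustrative)

# x -> nonlinear hidden -> linear representation layer (width M < N) -> nonlinear hidden -> x_hat
autoassociator = nn.Sequential(
    nn.Linear(N, H), nn.Tanh(),   # layer 2: nonlinear hidden layer
    nn.Linear(H, M),              # layer 3: linear representation layer
    nn.Linear(M, H), nn.Tanh(),   # layer 4: nonlinear hidden layer
    nn.Linear(H, N),              # layer 5: output layer
)

optimizer = torch.optim.Adam(autoassociator.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

x = torch.randn(512, N)           # placeholder data; replace with real samples
for _ in range(200):              # train to minimize reconstruction MSE
    optimizer.zero_grad()
    loss = loss_fn(autoassociator(x), x)
    loss.backward()
    optimizer.step()
```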

Locally Linear Approach to nonlinear dimension reduction (VQPCA Algorithm)

• Much faster to train than five-layer autoassociators, and provides superior solutions.

• The algorithm attempts to minimize the MSE (like 5-layer autoassociators) between the original data and its reconstruction from a low-dimensional representation (the reconstruction error).

• 2 steps in the algorithm:
  1) Partition the data space by VQ (clustering).
  2) Perform local PCA about each cluster center.

VQPCA

VQPCA is essentially a local PCA applied to each cluster.

We can use 2 kinds of distance measures in VQPCA:
1) Euclidean distance
2) Reconstruction distance

Example (1D local PCA): the reconstruction distance from a point to a cluster is the error that remains after projecting the point onto that cluster's single principal direction.
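A sketch of the reconstruction distance for this 1D case (the names and the orthonormal-rows convention are our choices):

```python
import numpy as np

def reconstruction_distance(x, center, components):
    """Squared error left after projecting the centered point onto the
    cluster's local principal directions. `components` has orthonormal rows."""
    centered = x - center
    projection = components.T @ (components @ centered)
    return float(np.sum((centered - projection) ** 2))

# Cluster whose single principal direction is the x-axis.
center = np.array([0.0, 0.0])
components = np.array([[1.0, 0.0]])
print(reconstruction_distance(np.array([3.0, 0.5]), center, components))  # 0.25
# The Euclidean distance to the center would be about 3.04 instead.
```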

5-layer Autoassociators vs VQPCA

• 5-layer autoassociators are difficult to train; training is faster with the VQPCA algorithm (VQ can be accelerated using tree-structured or multistage VQ).

• 5-layer autoassociators are prone to becoming trapped in poor local optima.

• VQPCA is slower for encoding new data, but much faster for decoding.