Page 1:

Variational Auto-Encoders

Stéphane d’Ascoli

Page 2:

Roadmap

1. A reminder on auto-encoders
   a. Basics
   b. Denoising and sparse encoders
   c. Why do we need VAEs?

2. Understanding variational auto-encoders
   a. Key ingredients
   b. The reparametrization trick
   c. The underlying math

3. Applications and perspectives
   a. Disentanglement
   b. Adding a discrete condition
   c. Applications
   d. Comparison with GANs

4. Do it yourself in PyTorch
   a. Build a basic denoising encoder
   b. Build a conditional VAE

Page 3:

Auto-Encoders

Page 4:

Basics

Page 5:

Denoising and Sparse Auto-Encoders

Denoising: the model is trained to reconstruct the clean input from a corrupted version of it.

Sparse: enforces specialization of the hidden units (e.g. via a sparsity penalty on the activations).

Contractive: enforces that close inputs give close representations.

Page 6:

Why do we need VAEs?

VAEs are used as generative models: sample a latent vector, decode it, and you have a new sample.

Q: Why can't we use normal auto-encoders?
A: An arbitrary latent vector isn't close to any point from the training set, so the reconstruction is garbage!

Q: How can we avoid this?
A: Compactify the latent space!

Q: How can we do this?
A: Two ingredients:
1. Encode into balls rather than points
2. Bring the balls closer together

Page 7:

Variational Auto-Encoders

Page 8:

Key Ingredients

Generative models: unsupervised learning; the aim is to learn the distribution underlying the input data.

VAEs: map the complicated data distribution to a simpler distribution we can sample from (the encoder), then generate images from those samples (the decoder) (Kingma & Welling 2014).

Page 9:

First Ingredient: Encode into Distributions

Q: Why encode into distributions rather than deterministic values?
A1: This creates balls in latent space.
A2: This ensures that close points in latent space lead to similar reconstructions. This gives "meaning" to the latent space.
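As a minimal sketch (the layer names and sizes are illustrative, not taken from the slides), a PyTorch encoder that outputs the parameters of a diagonal Gaussian instead of a single point might look like:

```python
import torch.nn as nn

class GaussianEncoder(nn.Module):
    """Encode x into the mean and log-variance of q(z|x),
    i.e. a "ball" in latent space rather than a point."""
    def __init__(self, in_dim=784, hidden_dim=400, latent_dim=20):
        super().__init__()
        self.hidden = nn.Sequential(nn.Linear(in_dim, hidden_dim), nn.ReLU())
        self.mu = nn.Linear(hidden_dim, latent_dim)      # center of the ball
        self.logvar = nn.Linear(hidden_dim, latent_dim)  # log of its squared radius

    def forward(self, x):
        h = self.hidden(x)
        return self.mu(h), self.logvar(h)
```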

Page 10:

Second Ingredient: Impose Structure

Q: How can I bring the balls together to compactify the latent space?
A: Make sure that Q(z|x) for different x's are close together!

Page 11:

Second Ingredient: Impose Structure

Q: How do we keep the balls close together?
A: By adding springs to the balls which pull them towards the center.

Q: How?
A: A KL divergence with a N(0,1) prior!
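For a diagonal Gaussian q(z|x) = N(μ, σ²) and the N(0, I) prior, this KL "spring" has a closed form (a standard result, not spelled out in the transcript):

```latex
D_{\mathrm{KL}}\!\left(\mathcal{N}(\mu,\sigma^2)\,\middle\|\,\mathcal{N}(0,I)\right)
  = \frac{1}{2}\sum_{j=1}^{d}\left(\mu_j^2 + \sigma_j^2 - \log\sigma_j^2 - 1\right)
```

It vanishes exactly when μ = 0 and σ = 1, i.e. when the ball sits at the center with unit radius.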

Page 12:

The Reparametrization Trick

Q: How can we backpropagate when one of the nodes is non-deterministic?
A: Put the random process outside the network!
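Concretely, sample ε ~ N(0, I) outside the computation graph and set z = μ + σ·ε, so the stochastic node becomes a deterministic function of μ, σ, and an external noise input. A minimal PyTorch sketch:

```python
import torch

def reparametrize(mu, logvar):
    """z = mu + sigma * eps, with eps ~ N(0, I) drawn outside the
    network, so gradients flow through mu and logvar."""
    std = torch.exp(0.5 * logvar)  # sigma = exp(logvar / 2)
    eps = torch.randn_like(std)    # the randomness lives here
    return mu + std * eps
```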

Pages 13-17:

The Underlying Information Theory
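The equations on these slides did not survive extraction. For reference, the variational bound they build up to (the ELBO of Kingma & Welling 2014) is standard and can be stated as:

```latex
\log p(x) \;\geq\;
  \underbrace{\mathbb{E}_{q(z|x)}\!\left[\log p(x|z)\right]}_{\text{reconstruction}}
  \;-\;
  \underbrace{D_{\mathrm{KL}}\!\left(q(z|x)\,\middle\|\,p(z)\right)}_{\text{regularization}}
```

Maximizing this lower bound on the log-likelihood yields exactly the two-part objective (reconstruction plus regularization) referred to on page 19.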

Page 18:

VAEs in Practice

Page 19:

Disentanglement: Beta-VAE

We saw that the objective function is made of a reconstruction part and a regularization part. By adding a tuning parameter β on the regularization term, we can control the tradeoff.

If we increase beta:
- The dimensions of the latent representation are more disentangled
- But the reconstruction quality degrades
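A minimal sketch of the resulting objective (the binary cross-entropy reconstruction term and the β value are illustrative assumptions, not fixed by the slides):

```python
import torch
import torch.nn.functional as F

def beta_vae_loss(x_recon, x, mu, logvar, beta=4.0):
    """Reconstruction + beta * KL(q(z|x) || N(0, I)).
    beta = 1 recovers the plain VAE objective."""
    recon = F.binary_cross_entropy(x_recon, x, reduction="sum")
    # Closed-form KL for a diagonal Gaussian against N(0, I) (see page 11)
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return recon + beta * kl
```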

Page 20:

Generating Conditionally: CVAEs

Add a one-hot encoded vector to the latent space and use it as a categorical variable, hoping that it will encode discrete features in the data (the digits in MNIST).

Q: The usual reparametrization trick doesn't work here, because we need to sample discrete values from the distribution! What can we do?
A: The Gumbel-Max trick!

Q: How do I balance the regularization terms for the continuous and discrete parts?
A: Control the two KL divergences independently.
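In practice, a differentiable relaxation of the Gumbel-Max trick (Gumbel-Softmax) is used so gradients can flow through the discrete sample; a minimal sketch, with an illustrative temperature:

```python
import torch
import torch.nn.functional as F

def gumbel_softmax_sample(logits, temperature=0.5):
    """Differentiable stand-in for sampling a one-hot category:
    perturb the logits with Gumbel noise, then take a soft argmax."""
    uniform = torch.rand_like(logits)
    gumbel = -torch.log(-torch.log(uniform + 1e-20) + 1e-20)
    return F.softmax((logits + gumbel) / temperature, dim=-1)
```

As the temperature goes to zero, the output approaches a true one-hot sample, at the cost of noisier gradients.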

Page 21:

Applications

Image generation: Dupont et al. 2018

Text generation: Bowman et al. 2016

Page 22:

Comparison with GANs

VAE                                          | GAN
---------------------------------------------|----------------------------------
Easy metric: reconstruction loss             | Metric is hard to interpret
Interpretable and disentangled latent space  | Low interpretability
Easy to train                                | Tedious hyperparameter searching
Noisy generation                             | Clean generation

Page 23:

Towards a Mix of the Two?

Page 24:

Do It Yourself in PyTorch

Page 25:

Auto-Encoder

1. Example: a simple fully-connected auto-encoder (sketched below)

2. DIY: implement a denoising convolutional auto-encoder for MNIST
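A minimal sketch of the example in item 1 (the layer sizes are illustrative; the transcript does not include the slides' exact architecture):

```python
import torch.nn as nn

class AutoEncoder(nn.Module):
    """Fully-connected auto-encoder for flattened 28x28 MNIST images."""
    def __init__(self, in_dim=784, latent_dim=32):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(in_dim, 128), nn.ReLU(),
            nn.Linear(128, latent_dim),
        )
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 128), nn.ReLU(),
            nn.Linear(128, in_dim), nn.Sigmoid(),  # pixel values in [0, 1]
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))
```

For the denoising variant in item 2, the same kind of model is trained to map a corrupted input (e.g. x plus Gaussian noise) back to the clean x.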

Page 26:

Variational Auto-Encoder

1. Example: a simple VAE (sketched below)

2. DIY: implement a conditional VAE for MNIST
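Putting the earlier pieces together, a minimal sketch for item 1 (sizes again illustrative):

```python
import torch
import torch.nn as nn

class VAE(nn.Module):
    """Minimal VAE: Gaussian encoder + reparametrization + decoder."""
    def __init__(self, in_dim=784, hidden_dim=400, latent_dim=20):
        super().__init__()
        self.hidden = nn.Sequential(nn.Linear(in_dim, hidden_dim), nn.ReLU())
        self.mu = nn.Linear(hidden_dim, latent_dim)
        self.logvar = nn.Linear(hidden_dim, latent_dim)
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, in_dim), nn.Sigmoid(),
        )

    def forward(self, x):
        h = self.hidden(x)
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)  # reparametrization
        return self.decoder(z), mu, logvar
```

Training it with the beta_vae_loss sketch from page 19 at beta=1 gives the plain VAE objective; a common construction for the conditional version in item 2 concatenates the one-hot label to both the encoder input and z.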

Page 27:

Questions

