
Subset-Conditioned Generation Using Variational Autoencoder With A Learnable Tensor-Train Induced Prior

Maksim Kuznetsov
Insilico Medicine
Rockville, MD 20850
[email protected]

Daniil Polykovskiy
Insilico Medicine
Rockville, MD 20850
[email protected]

Dmitry Vetrov
Higher School of Economics
Moscow, Russia
[email protected]

Alexander Zhebrak
Insilico Medicine
Rockville, MD 20850
[email protected]

1 Introduction

Generative models appear in many applications, from text [1, 2] and audio generation [3] to molecular design [4–10]. Variational Autoencoders [11] are powerful generative models that find applications in many domains [12, 13]. In this paper, we explore the problem of subset-conditioned generation [14]: training a generative model that can sample objects conditioned on a subset of available properties. Unspecified properties may either be unknown (missing) or irrelevant for the given generation round. A straightforward approach to conditional generation with Conditional Variational Autoencoders (CVAE) [15] does not support omitted conditions, but can be turned into a subset-conditioned model by predicting missing variables.

In this paper, we parametrize a joint distribution over the conditions and the latent codes of a VAE in a Tensor-Train format. The resulting model can efficiently sample from the conditional distribution for an arbitrary subset of conditions. The model also learns a flexible grid-structured prior distribution over the latent codes that can be used for downstream tasks such as classification or clustering.

2 Background

Tensor-Train (TT) decomposition [16] is a method for approximating high-dimensional tensors with a relatively small number of parameters. We use it to represent discrete distributions: consider a joint distribution $p(r_1, r_2, \dots, r_n)$ of $n$ discrete random variables $r_k$ taking values from $\{0, 1, \dots, N_k\}$. Let us denote this distribution as an $n$-dimensional tensor $P[r_1, r_2, \dots, r_n] = p(r_1, r_2, \dots, r_n)$. The number of elements in $P$ grows exponentially with the number of dimensions. The TT format reduces the number of parameters by approximating the tensor $P$ using low-rank matrices, called cores, $Q_k[r_k] \in \mathbb{R}^{m_k \times m_{k+1}}$:

$$P[r_1, r_2, \dots, r_n] \approx \mathbf{1}_{m_1}^T \cdot Q_1[r_1] \cdot Q_2[r_2] \cdots Q_n[r_n] \cdot \mathbf{1}_{m_{n+1}}, \qquad (1)$$

where $\mathbf{1}_m \in \mathbb{R}^{m \times 1}$ is a column-vector of ones. In this format, the number of parameters grows linearly with the number of dimensions. With larger cores, the TT format can represent more complex distributions.


In the TT format, we can compute marginal and conditional distributions and sample from them in polynomial time, without computing the whole tensor $P[r_1, r_2, \dots, r_n]$. For example, to marginalize out the random variable $r_k$, we sum the core $Q_k$ over its values:

$$P[r_1, \dots, r_{k-1}, r_{k+1}, \dots, r_n] = \mathbf{1}_{m_1}^T \cdot \prod_{j=1}^{k-1} Q_j[r_j] \cdot \left( \sum_{r_k=1}^{N_k} Q_k[r_k] \right) \cdot \prod_{j=k+1}^{n} Q_j[r_j] \cdot \mathbf{1}_{m_{n+1}} \qquad (2)$$

We can compute the normalizing constant as the marginal over all variables, and sample using the chain rule.
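To make these operations concrete, here is a minimal NumPy sketch (ours, not the authors' code) that evaluates the TT form of Eq. (1), computes the normalizing constant, and marginalizes out one variable as in Eq. (2). The core layout, a list of arrays of shape (N_k, m_k, m_{k+1}), is an implementation assumption.

```python
import numpy as np

def tt_eval(cores, values):
    """Unnormalized P[r_1, ..., r_n] as in Eq. (1).
    cores[k] has shape (N_k, m_k, m_{k+1}); values[k] is the index r_k."""
    v = np.ones(cores[0].shape[1])            # 1_{m_1}^T
    for Q, r in zip(cores, values):
        v = v @ Q[r]                          # multiply core slices left to right
    return v @ np.ones(cores[-1].shape[2])    # close with 1_{m_{n+1}}

def tt_normalizer(cores):
    """Normalizing constant: the marginal over all variables."""
    v = np.ones(cores[0].shape[1])
    for Q in cores:
        v = v @ Q.sum(axis=0)                 # sum each core over its variable
    return v @ np.ones(cores[-1].shape[2])

def tt_marginalize(cores, k):
    """Marginalize out r_k (Eq. 2): sum Q_k over r_k and absorb the
    resulting matrix into a neighboring core."""
    S = cores[k].sum(axis=0)                  # shape (m_k, m_{k+1})
    new = list(cores)
    if k + 1 < len(cores):
        new[k + 1] = np.einsum('ab,rbc->rac', S, cores[k + 1])
    else:
        new[k - 1] = np.einsum('rab,bc->rac', cores[k - 1], S)
    del new[k]
    return new
```

The cost of each call is linear in the number of cores, which is what makes conditionals over arbitrary subsets tractable.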

3 Variational Autoencoder with a Tensor-Train Induced Learnable Prior

In this section, we introduce the Variational Autoencoder with a Tensor-Train Induced Learnable Prior (VAE-TTLP) and apply it to subset-conditioned generation. As described in Section 2, a Tensor-Train approximation of a distribution can be used to efficiently compute conditionals and marginals. In the VAE-TTLP model, we estimate a joint distribution $p(z, y)$ of the VAE latent code $z$ and the conditions $y$ in a Tensor-Train format. With this distribution, we can compute the likelihood of partially labeled data by marginalizing out the unobserved conditions. During generation, we sample from the conditional distribution over latent codes given the observed conditions.

3.1 Tensor-Train induced distribution with continuous variables

Tensor-Train decomposition generally works only with discrete distributions, while generative autoencoders mostly use continuous variables in the latent space. We assume that the latent codes $z = [z_1, \dots, z_d]$ are continuous, while the conditions $y = [y_1, \dots, y_n]$ are discrete. Our goal is to build a Tensor-Train representation of a joint distribution $p_\psi(z, y)$ that can model dependencies between $z$ and $y$, as well as inner dependencies within $z$ and within $y$.

We parameterize the continuous distribution as a mixture model and define a joint distribution by introducing auxiliary categorical random variables $s_k \in \{1, \dots, N_k\}$ as the component indices:

$$p_\psi(z, y) = \sum_{s_1, \dots, s_d} p_\psi(z, s, y) = \sum_{s_1, \dots, s_d} p_\psi(s, y)\, p_\psi(z \mid s) \qquad (3)$$

We parameterize $p_\psi(z \mid s)$ as a fully factorized Gaussian. The distribution $p_\psi(s, y)$ is a distribution over discrete variables and can be seen as a tensor $p_\psi(s, y) = W[s_1, \dots, s_d, y_1, \dots, y_n]$. We represent $W$ in a Tensor-Train format. The resulting prior distribution becomes:

$$p_\psi(z, y) = \sum_{s_1, \dots, s_d} \underbrace{W[s_1, \dots, s_d, y_1, \dots, y_n]}_{p_\psi(s, y)} \underbrace{\prod_{k=1}^{d} \mathcal{N}(z_k \mid \mu_{k, s_k}, \sigma^2_{k, s_k})}_{p_\psi(z \mid s, y)} \qquad (4)$$
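For illustration, a sketch of how Eq. (4) can be evaluated without enumerating all component combinations: the sum over each $s_k$ is absorbed into the TT contraction by weighting the corresponding core slices with Gaussian densities. The core ordering (latent cores first, then condition cores) and the function interface are our assumptions, not the paper's implementation.

```python
import numpy as np
from scipy.stats import norm

def tt_prior_density(s_cores, y_cores, mus, sigmas, z, y):
    """p_psi(z, y) of Eq. (4), up to the normalization of W.
    s_cores[k]: (N_k, m, m') core for component index s_k;
    y_cores[j]: (M_j, m, m') core for condition y_j;
    mus[k], sigmas[k]: (N_k,) mixture means/stds for latent dimension k."""
    v = np.ones(s_cores[0].shape[1])
    for Q, mu, sig, zk in zip(s_cores, mus, sigmas, z):
        dens = norm.pdf(zk, loc=mu, scale=sig)      # N(z_k | mu_{k,s}, sigma_{k,s})
        v = v @ np.einsum('r,rab->ab', dens, Q)     # sum over s_k, density-weighted
    for Q, yj in zip(y_cores, y):
        v = v @ Q[yj]                               # select the observed y_j slice
    return v @ np.ones(y_cores[-1].shape[2])
```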

3.2 VAE-TTLP training

Consider a training example $(x, y_{ob})$, where $x$ is an object and $y_{ob}$ are the observed conditions. The lower bound for $\log p(x, y_{ob})$ in the VAE model with a learnable prior is:

$$\mathcal{L}_{TTLP}(\theta, \phi, \psi) = \mathbb{E}_{q_\phi(z \mid x, y_{ob})} \log p_\theta(x \mid z, y_{ob}) - KL\big(q_\phi(z \mid x, y_{ob}) \,\|\, p_\psi(z \mid y_{ob})\big) + \log p_\psi(y_{ob}) \qquad (5)$$

We make two assumptions. First, all information about $y$ is contained in the object $x$ itself; for example, a hand-written digit ($x$) already contains the information about its label ($y$). Under this assumption, $q_\phi(z \mid x, y_{ob}) = q_\phi(z \mid x)$. Second, we assume that $p_\theta(x \mid z, y_{ob}) = p_\theta(x \mid z)$; in other words, an object is fully defined by its latent code. This results in the final ELBO:

$$\begin{aligned} \mathcal{L}_{TTLP}(\theta, \phi, \psi) &= \mathbb{E}_{q_\phi(z \mid x)} \log p_\theta(x \mid z) - KL\big(q_\phi(z \mid x) \,\|\, p_\psi(z \mid y_{ob})\big) + \log p_\psi(y_{ob}) \\ &\approx \frac{1}{l} \sum_{i=1}^{l} \left[ \log p_\theta(x \mid z_i) - \log \frac{q_\phi(z_i \mid x)}{p_\psi(z_i \mid y_{ob})} \right] + \log p_\psi(y_{ob}), \end{aligned} \qquad (6)$$

where $z_i \sim q_\phi(z \mid x)$. Since the joint distribution $p_\psi(z, y)$ is parameterized in TT format, we can compute the posterior distribution over the latent code given the observed conditions, $p_\psi(z \mid y_{ob})$, analytically. This model can also be used to fill in missing conditions by sampling from $p_\psi(y_{un} \mid z)$.
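A hedged PyTorch sketch of the Monte Carlo estimate in Eq. (6) follows; `encoder`, `decoder`, and the `prior.log_prob_*` interface are placeholder names we introduce for illustration, not the authors' code.

```python
import torch

def vae_ttlp_loss(x, y_ob, encoder, decoder, prior, n_samples=1):
    """Negative of the ELBO estimate in Eq. (6).
    Assumed interfaces: encoder(x) -> (mu, log_var); decoder(z) -> a torch
    distribution over x; prior.log_prob_z_given_y and prior.log_prob_y are
    computed from the TT joint p_psi(z, y) by marginalization."""
    mu, log_var = encoder(x)
    std = (0.5 * log_var).exp()
    elbo = 0.0
    for _ in range(n_samples):
        z = mu + std * torch.randn_like(std)                    # reparameterization
        log_q = torch.distributions.Normal(mu, std).log_prob(z).sum(-1)
        log_p_z = prior.log_prob_z_given_y(z, y_ob)             # log p_psi(z | y_ob)
        recon = decoder(z).log_prob(x).flatten(1).sum(-1)       # log p_theta(x | z)
        elbo = elbo + recon - (log_q - log_p_z)
    return -(elbo / n_samples + prior.log_prob_y(y_ob)).mean()
```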

3.3 VAE-TTLP sampling

We produce samples from $p(x \mid y_{ob})$ using the chain rule:

$$p(z \mid y_{ob}) = \prod_{k=1}^{d} p(z_k \mid y_{ob}, z_{<k}) \qquad (7)$$

We then pass the latent code $z \sim p_\psi(z \mid y_{ob})$ through the decoder: $x \sim p_\theta(x \mid z)$. Note that the conditional distribution over $z_k$ is a Gaussian mixture with unchanged centers $\mu_{k, s_k}$ and variances $\sigma^2_{k, s_k}$, but with different component weights:

$$p(z_k \mid y_{ob}, z_1, \dots, z_{k-1}) = \frac{p(y_{ob}, z_1, \dots, z_{k-1}, z_k)}{p(y_{ob}, z_1, \dots, z_{k-1})} = \sum_{s, y_{un}} \left[ \frac{W[s, y_{ob}, y_{un}]}{\sum_{s', y_{un}} W[s', y_{ob}, y_{un}]} \right] \mathcal{N}(z_k \mid \mu_{k, s_k}, \sigma^2_{k, s_k}) \qquad (8)$$

The component weights can be computed efficiently, since we represent $W$ in a Tensor-Train format. The overall architecture is shown schematically in Figure 1.
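As an illustration, a sketch of the ancestral sampling procedure of Eqs. (7)–(8); `prior.mixture_weights` stands in for the TT contraction that produces the component weights and is our naming, not the paper's.

```python
import numpy as np

def sample_z_given_y(prior, y_ob, d):
    """Sample z ~ p(z | y_ob) one coordinate at a time (Eq. 7)."""
    z = []
    for k in range(d):
        w = prior.mixture_weights(k, y_ob, z)   # component weights from Eq. (8)
        s = np.random.choice(len(w), p=w)       # draw a mixture component
        z.append(np.random.normal(prior.mu[k][s], prior.sigma[k][s]))
    return np.array(z)
```

The sampled latent code is then decoded into an object, as described above.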

4 Experiments

We provide experiments on two image datasets: MNIST and CelebA [17]. Both datasets have attributes that can be used for conditional learning and generation. MNIST has a categorical class label, while for CelebA we selected 10 binary attributes, including gender, hair color, smile, and eyeglasses. Details on the experimental setup can be found in the Appendix (Section 6.1).

Table 1: Numerical comparison of generated images from different models.

Model                     | MNIST FID | MNIST Accuracy, % | CelebA FID | CelebA Accuracy, %
CVAE                      | 39.10     | 86.34             | 220.53     | 82.89
VAE-TTLP                  | 40.80     | 89.94             | 165.33     | 88.79
VAE-TTLP (pretrained VAE) | 47.53     | 75.39             | 162.73     | 87.50

In the experiments (Section 6.2), we show that the VAE-TTLP model trained on the MNIST dataset separates the latent space into sub-manifolds, each containing objects from a single class.

We also compare VAE-TTLP with CVAE on the conditional generation problem (Section 6.3), without any missing variables in the conditions, and show that the proposed model outperforms CVAE in image quality and in how well generated samples satisfy the conditions (Table 1).

In Section 6.5, we also evaluate performance for various percentages of missing conditions and show that VAE-TTLP produces images consistent with the conditions even when many of them are missing. We also assess the joint distribution $p_\psi(z, y)$ by imputing missing variables $y_{un} \sim p_\psi(y_{un} \mid z, y_{ob})$.


5 Conclusion

We introduced the Variational Autoencoder with a Tensor-Train Induced Learnable Prior (VAE-TTLP) and applied it to the problem of subset-conditioned generation. The Tensor-Train format allows our model to capture underlying dependencies between latent codes and labels. The model provides diverse samples satisfying the specified conditions. VAE-TTLP can be extended to any auto-encoding model as a parameterization of the latent space.

References

[1] Zichao Yang, Zhiting Hu, Ruslan Salakhutdinov, and Taylor Berg-Kirkpatrick. Improved variational autoencoders for text modeling using dilated convolutions. In Proceedings of the 34th International Conference on Machine Learning, volume 70 of Proceedings of Machine Learning Research, pages 3881–3890. PMLR, 2017. URL http://proceedings.mlr.press/v70/yang17d.html.

[2] Hanjun Dai, Yingtao Tian, Bo Dai, Steven Skiena, and Le Song. Syntax-directed variational autoencoder for structured data. In International Conference on Learning Representations, 2018.

[3] Adam Roberts, Jesse Engel, and Douglas Eck. Hierarchical variational autoencoders for music, 2017. URL https://nips2017creativity.github.io/doc/Hierarchical_Variational_Autoencoders_for_Music.pdf.

[4] Qi Liu, Miltiadis Allamanis, Marc Brockschmidt, and Alexander Gaunt. Constrained graph variational autoencoders for molecule design. In Advances in Neural Information Processing Systems 31, pages 7805–7814. Curran Associates, Inc., 2018. URL http://papers.nips.cc/paper/8005-constrained-graph-variational-autoencoders-for-molecule-design.pdf.

[5] E. Putin, A. Asadulaev, Y. Ivanenkov, V. Aladinskiy, B. Sanchez-Lengeling, A. Aspuru-Guzik, and A. Zhavoronkov. Reinforced adversarial neural computer for de novo molecular design. Journal of Chemical Information and Modeling, 58(6):1194–1204, June 2018.

[6] Jiaxuan You, Rex Ying, Xiang Ren, William L. Hamilton, and Jure Leskovec. GraphRNN: Generating realistic graphs with deep auto-regressive models. In ICML, 2018.

[7] E. Putin, A. Asadulaev, Q. Vanhaelen, Y. Ivanenkov, A. V. Aladinskaya, A. Aliper, and A. Zhavoronkov. Adversarial threshold neural computer for molecular de novo design. Molecular Pharmaceutics, March 2018.

[8] Wengong Jin, Regina Barzilay, and Tommi Jaakkola. Junction tree variational autoencoder for molecular graph generation. In ICML, 2018.

[9] Daniil Polykovskiy, Alexander Zhebrak, Dmitry Vetrov, Yan Ivanenkov, Vladimir Aladinskiy, Marine Bozdaganyan, Polina Mamoshina, Alex Aliper, Alex Zhavoronkov, and Artur Kadurin. Entangled conditional adversarial autoencoder for de novo drug discovery. Molecular Pharmaceutics, September 2018. doi: 10.1021/acs.molpharmaceut.8b00839. URL https://doi.org/10.1021/acs.molpharmaceut.8b00839.

[10] Artur Kadurin, Sergey Nikolenko, Kuzma Khrabrov, Alex Aliper, and Alex Zhavoronkov. druGAN: An advanced generative adversarial autoencoder model for de novo generation of new molecules with desired molecular properties in silico. Molecular Pharmaceutics, 14(9):3098–3104, August 2017.

[11] Diederik P. Kingma and Max Welling. Auto-encoding variational Bayes. ICLR, 2013.

[12] Rafael Gómez-Bombarelli, David K. Duvenaud, José Miguel Hernández-Lobato, Jorge Aguilera-Iparraguirre, Timothy D. Hirzel, Ryan P. Adams, and Alán Aspuru-Guzik. Automatic chemical design using a data-driven continuous representation of molecules. CoRR, abs/1610.02415, 2016.

[13] Wei-Ning Hsu, Yu Zhang, and James Glass. Unsupervised learning of disentangled and interpretable representations from sequential data. In Advances in Neural Information Processing Systems 30, pages 1878–1889. Curran Associates, Inc., 2017.

[14] Ramakrishna Vedantam, Ian Fischer, Jonathan Huang, and Kevin Murphy. Generative models of visually grounded imagination. In International Conference on Learning Representations, 2018.

[15] Diederik P. Kingma, Shakir Mohamed, Danilo Jimenez Rezende, and Max Welling. Semi-supervised learning with deep generative models. In Advances in Neural Information Processing Systems 27, pages 3581–3589. Curran Associates, Inc., 2014.

[16] Ivan V. Oseledets. Tensor-train decomposition. SIAM Journal on Scientific Computing, 33(5):2295–2317, 2011.

[17] Ziwei Liu, Ping Luo, Xiaogang Wang, and Xiaoou Tang. Deep learning face attributes in the wild. In Proceedings of the International Conference on Computer Vision (ICCV), 2015.

[18] Martin Heusel, Hubert Ramsauer, Thomas Unterthiner, Bernhard Nessler, and Sepp Hochreiter. GANs trained by a two time-scale update rule converge to a local Nash equilibrium. In Advances in Neural Information Processing Systems, pages 6629–6640, 2017.

6 Appendix


Figure 1: VAE-TTLP model. The autoencoder is trained to map an object $x$ to the latent code $z$. The joint distribution over the conditions $y$ and the latent code $z$ is trained in a Tensor-Train format. This model can be used to generate samples conditioned on a subset of all possible conditions; for example, we can condition samples on properties $y_1$ and $y_3$ and omit $y_2$.

6.1 Experimental setup

We used an 8-layer convolutional neural network (6 convolutional layers followed by 2 fully-connected layers) for the encoder and a symmetric architecture with deconvolutions for the decoder. MNIST images are 28x28 grayscale images. In CelebA, we worked with images at 64x64 resolution.
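As an illustration of the described setup, a PyTorch sketch of a 6-conv + 2-FC encoder for 64x64 CelebA inputs; channel widths, kernel sizes, and the latent dimension `d` are our guesses, since the paper does not list them.

```python
import torch.nn as nn

d = 64  # assumed latent dimensionality
encoder = nn.Sequential(
    nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),    # 64 -> 32
    nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),   # 32 -> 16
    nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(),  # 16 -> 8
    nn.Conv2d(128, 256, 4, stride=2, padding=1), nn.ReLU(), # 8 -> 4
    nn.Conv2d(256, 256, 3, padding=1), nn.ReLU(),           # two extra conv layers
    nn.Conv2d(256, 256, 3, padding=1), nn.ReLU(),
    nn.Flatten(),
    nn.Linear(256 * 4 * 4, 512), nn.ReLU(),
    nn.Linear(512, 2 * d),  # outputs mean and log-variance of q(z|x)
)
```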

6.2 Latent space structure

For the first experiment, we visualize the latent space of the VAE-TTLP model trained on MNIST data. In Figure 2, we can see that the model assigned a separate cluster in the latent space to each label.

Figure 2: Samples from the VAE-TTLP model trained on the MNIST dataset. Left: learned latent space; right: samples from the model.


6.3 Generated images

In this experiment, we use the CelebA dataset [17] to visually and numerically compare the quality of images generated with three models: CVAE, VAE-TTLP, and VAE-TTLP with a pretrained VAE. To estimate the visual quality of samples, we calculate the Fréchet Inception Distance (FID) [18], which has been shown to correlate with human assessors' judgments. To estimate how well a generated image matches the specified conditions, we predict image attributes with a separately trained predictor. Results are shown in Table 1, along with samples for visual analysis in Figure 3. The experiments suggest that VAE-TTLP outperforms or gives results comparable to CVAE in both visual quality and condition matching. The pretrained VAE model also performs reasonably well, indicating that the model can be pretrained on unlabeled datasets.
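For reference, FID can be computed with off-the-shelf tooling; a sketch using `torchmetrics` (our choice of library, not the one used in the paper):

```python
from torchmetrics.image.fid import FrechetInceptionDistance

fid = FrechetInceptionDistance(feature=2048)
# real_images, generated_images: placeholder uint8 tensors of shape (N, 3, H, W)
fid.update(real_images, real=True)
fid.update(generated_images, real=False)
print(fid.compute())
```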

Figure 3: Samples from models trained on the CelebA dataset. Left: VAE-TTLP; right: CVAE.

6.4 Training on partially labeled data

In this section, we estimate the performance of the model for different percentages of missing labels: fully labeled data, and data with 30% and 70% of attributes missing at random. During the generation procedure, we conditioned the model on all of the attributes. Numerical results are reported in Table 2. As the results show, the model is quite stable even when the dataset is sparsely labeled.

Table 2: Performance of VAE-TTLP on datasets with different percentages of missing attributes.

Missing attributes, % | MNIST FID | MNIST Accuracy, % | CelebA FID | CelebA Accuracy, %
0                     | 40.80     | 89.94             | 165.33     | 88.7
30                    | 41.33     | 89.84             | 178.32     | 84.89
70                    | 41.86     | 88.97             | 169.10     | 87.08

6.5 Imputing missing conditions with VAE-TTLP

As discussed in Section 3.2, the VAE-TTLP model can be used to impute missing conditions by sampling from the distribution $p_\psi(y \mid x)$. We obtained 95.4% accuracy on the MNIST dataset and 89.21% on CelebA. While state-of-the-art models predict MNIST digits with over 99% accuracy and facial attributes with more than 93%, our model still performs reasonably well, even though the predictive model was obtained as a by-product of VAE-TTLP.


6.6 Generated images conditioned on the subset of features

Finally, we generate images for a specified subset of conditions to estimate the diversity of the generated images. For example, if we ask the model to generate a 'Young man', we would expect different images to vary in hair color and in the presence or absence of glasses and hats. The generated images shown in Table 3 indicate that the model learned to produce diverse images with multiple varying attributes.

Table 3: Generated images for different attributes. Notice high diversity across rows.

Young man

Smiling woman in eyeglasses

Smiling woman wearing hat

Blond haired woman wearing eyeglasses

Attractive bearded man wearing hat


