Paper Title

Variance Constrained Autoencoding

Paper Authors

Braithwaite, D. T., O'Connor, M., Kleijn, W. B.

Paper Abstract

Recent state-of-the-art autoencoder-based generative models have an encoder-decoder structure and learn a latent representation with a pre-defined distribution that can be sampled from. Implementing the encoder networks of these models in a stochastic manner provides a natural and common approach to avoid overfitting and enforce a smooth decoder function. However, we show that for stochastic encoders, simultaneously attempting to enforce a distribution constraint and minimising an output distortion leads to a reduction in generative and reconstruction quality. In addition, attempting to enforce a latent distribution constraint is not reasonable when performing disentanglement. Hence, we propose the variance-constrained autoencoder (VCAE), which only enforces a variance constraint on the latent distribution. Our experiments show that VCAE improves upon the Wasserstein Autoencoder and the Variational Autoencoder in both reconstruction and generative quality on MNIST and CelebA. Moreover, we show that VCAE equipped with a total correlation penalty term performs equivalently to FactorVAE at learning disentangled representations on 3D-Shapes while being a more principled approach.
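To make the core idea of the abstract concrete, below is a minimal, illustrative PyTorch sketch of an autoencoder trained with only a variance constraint on the latent distribution, rather than a full distribution match. The network sizes, the soft quadratic penalty, its weight, and the batch-level variance estimate are assumptions made for illustration; they are not the paper's exact formulation, which should be taken from the paper itself.

```python
# Illustrative sketch only: a deterministic autoencoder whose training loss adds a
# penalty steering the batch-estimated total latent variance toward a fixed target.
# The penalty form, weight, and target value are assumptions, not the paper's method.
import torch
import torch.nn as nn


class SimpleVCAE(nn.Module):
    def __init__(self, input_dim=784, latent_dim=16, hidden=256):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(input_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, latent_dim),
        )
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, input_dim),
        )

    def forward(self, x):
        z = self.encoder(x)          # deterministic latent code
        return self.decoder(z), z


def vcae_style_loss(x, x_hat, z, target_variance=1.0, penalty_weight=10.0):
    """Reconstruction distortion plus a soft variance-constraint penalty.

    The constraint is approximated per mini-batch: the summed per-dimension
    variance of the latent codes is pushed toward `target_variance`.
    """
    recon = ((x - x_hat) ** 2).sum(dim=1).mean()            # squared-error distortion
    total_var = z.var(dim=0, unbiased=False).sum()          # batch estimate of latent variance
    variance_penalty = (total_var - target_variance) ** 2   # soft version of the constraint
    return recon + penalty_weight * variance_penalty


# Usage: one optimisation step on a batch of flattened images.
model = SimpleVCAE()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x = torch.rand(64, 784)
x_hat, z = model(x)
loss = vcae_style_loss(x, x_hat, z)
opt.zero_grad()
loss.backward()
opt.step()
```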
