Learning autoencoders with relational regularization

Hongteng Xu*, Dixin Luo*, Ricardo Henao, Svati Shah, Lawrence Carin

*Corresponding author for this work

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Peer-reviewed

19 Citations (Scopus)

Abstract

A new algorithmic framework is proposed for learning autoencoders of data distributions. We minimize the discrepancy between the model and target distributions, with a relational regularization on the learnable latent prior. This regularization penalizes the fused Gromov-Wasserstein (FGW) distance between the latent prior and its corresponding posterior, allowing one to flexibly learn a structured prior distribution associated with the generative model. Moreover, it helps co-training of multiple autoencoders even if they have heterogeneous architectures and incomparable latent spaces. We implement the framework with two scalable algorithms, making it applicable for both probabilistic and deterministic autoencoders. Our relational regularized autoencoder (RAE) outperforms existing methods, e.g., the variational autoencoder, Wasserstein autoencoder, and their variants, on generating images. Additionally, our relational co-training strategy for autoencoders achieves encouraging results in both synthetic and real-world multi-view learning tasks. The code is at https://github.com/HongtengXu/Relational-AutoEncoders.
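To make the FGW penalty mentioned in the abstract concrete, the sketch below evaluates the standard fused Gromov-Wasserstein objective for a *fixed* coupling between two latent sample sets (e.g., prior and posterior draws). This is a minimal numpy illustration of the cost being penalized, not the authors' implementation; the function name `fgw_objective`, the toy data, and the independent coupling are all assumptions for illustration, and the actual method would additionally optimize over couplings and backpropagate through the cost.

```python
import numpy as np

def fgw_objective(M, C1, C2, T, alpha=0.5):
    """Fused Gromov-Wasserstein cost for a fixed coupling T (illustrative sketch).

    M : (n, m) cross-set feature cost matrix (Wasserstein term).
    C1: (n, n) symmetric intra-set distance matrix of the first sample set.
    C2: (m, m) symmetric intra-set distance matrix of the second sample set.
    T : (n, m) nonnegative coupling whose marginals are the two sample weights.
    alpha trades the feature cost against the relational (structure) cost.
    """
    # Linear (Wasserstein) term: sum_ij M[i,j] T[i,j]
    wasserstein_term = np.sum(M * T)
    # Quadratic structure term for the squared loss:
    #   sum_{i,j,k,l} (C1[i,k] - C2[j,l])^2 T[i,j] T[k,l]
    # expanded into three terms using the marginals p, q of T.
    p, q = T.sum(axis=1), T.sum(axis=0)
    structure_term = (
        (C1 ** 2 @ p) @ p
        + (C2 ** 2 @ q) @ q
        - 2.0 * np.sum((C1 @ T @ C2.T) * T)
    )
    return (1.0 - alpha) * wasserstein_term + alpha * structure_term

# Toy latent samples standing in for posterior/prior draws (illustrative only).
rng = np.random.default_rng(0)
x = rng.normal(size=(8, 2))
y = rng.normal(size=(8, 2))
C1 = np.linalg.norm(x[:, None] - x[None, :], axis=-1)  # relational structure of x
C2 = np.linalg.norm(y[:, None] - y[None, :], axis=-1)  # relational structure of y
M = np.linalg.norm(x[:, None] - y[None, :], axis=-1)   # cross-set feature cost
T = np.full((8, 8), 1.0 / 64)  # independent coupling of two uniform marginals
cost = fgw_objective(M, C1, C2, T, alpha=0.5)
```

Setting `alpha=0` recovers the plain transport cost under `T`, while `alpha=1` keeps only the relational (Gromov-Wasserstein) term, which is what lets the penalty compare distributions living in incomparable latent spaces.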

Original language: English
Title of host publication: 37th International Conference on Machine Learning, ICML 2020
Editors: Hal Daume, Aarti Singh
Publisher: International Machine Learning Society (IMLS)
Pages: 10507-10517
Number of pages: 11
ISBN (electronic): 9781713821120
Publication status: Published - 2020
Published externally: Yes
Event: 37th International Conference on Machine Learning, ICML 2020 - Virtual, Online
Duration: 13 Jul 2020 → 18 Jul 2020

Publication series

Name: 37th International Conference on Machine Learning, ICML 2020
PartF168147-14

Conference

Conference: 37th International Conference on Machine Learning, ICML 2020
Virtual, Online
Period: 13/07/20 → 18/07/20
