On the Compressive Power of Boolean Threshold Autoencoders

Avraham A. Melkman, Sini Guo, Wai Ki Ching, Pengyu Liu, Tatsuya Akutsu*

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

Abstract

An autoencoder is a layered neural network whose structure can be viewed as consisting of an encoder, which compresses an input vector to a lower-dimensional vector, and a decoder, which transforms the low-dimensional vector back to the original input vector (or one that is very similar). In this article, we explore the compressive power of autoencoders that are Boolean threshold networks by studying the numbers of nodes and layers that are required to ensure that each vector in a given set of distinct input binary vectors is transformed back to its original. We show that for any set of n distinct vectors there exists a seven-layer autoencoder with the optimal compression ratio (i.e., the size of the middle layer is logarithmic in n), but that there is a set of n vectors for which there is no three-layer autoencoder with a middle layer of logarithmic size. In addition, we present a tradeoff: if the compression ratio is allowed to be considerably larger than the optimal one, then there is a five-layer autoencoder. We also study the numbers of nodes and layers required only for encoding, and the results suggest that the decoding part is the bottleneck of autoencoding. For example, there is always a three-layer Boolean threshold encoder that compresses n vectors into a dimension that is twice the logarithm of n.
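To make the threshold-gate model concrete, the following is a minimal runnable sketch (in Python; the article itself contains no code, and all names here are illustrative). It builds a naive Boolean threshold autoencoder with one "recognizer" unit per stored vector, so its hidden layers have size n, unlike the far more economical constructions proved in the article; the sketch only illustrates how Boolean threshold gates compute and why a middle layer of ceil(log2 n) nodes is the information-theoretic minimum for n distinct vectors.

```python
# A conceptual sketch, NOT the paper's construction: a naive Boolean
# threshold autoencoder mapping n distinct binary vectors to
# ceil(log2 n)-bit codes and back, with one recognizer unit per vector.

from math import ceil, log2

def threshold_gate(weights, threshold, x):
    """Boolean threshold unit: fires iff the weighted sum reaches the threshold."""
    return 1 if sum(w * xi for w, xi in zip(weights, x)) >= threshold else 0

def recognizer(v):
    """Weights and threshold of a unit that fires exactly on the binary vector v."""
    weights = [1 if bit == 1 else -1 for bit in v]
    return weights, sum(v)  # weighted sum equals sum(v) iff input == v

def encode(vectors, x):
    """Two threshold layers: recognizers, then OR gates producing the index bits."""
    m = max(1, ceil(log2(len(vectors))))
    hits = [threshold_gate(*recognizer(v), x) for v in vectors]  # one-hot match
    # bit b of the code is an OR (threshold 1) over recognizers of vectors
    # whose index has bit b set
    return [threshold_gate([(i >> b) & 1 for i in range(len(vectors))], 1, hits)
            for b in range(m)]

def decode(vectors, code):
    """Two threshold layers: code recognizers, then OR gates per output bit."""
    m = len(code)
    codes = [[(i >> b) & 1 for b in range(m)] for i in range(len(vectors))]
    hits = [threshold_gate(*recognizer(c), code) for c in codes]  # one-hot match
    d = len(vectors[0])
    return [threshold_gate([v[j] for v in vectors], 1, hits) for j in range(d)]

if __name__ == "__main__":
    vs = [(0, 0, 1, 1), (0, 1, 0, 1), (1, 1, 0, 0)]      # n = 3 distinct vectors
    for v in vs:
        assert decode(vs, encode(vs, v)) == list(v)       # exact reconstruction
    print("middle layer size:", max(1, ceil(log2(len(vs)))))  # 2 bits
```

Running the script confirms that each stored 4-bit vector is reconstructed exactly from its 2-bit code. The article's results concern how few layers, and how few nodes per layer, such exact reconstruction fundamentally requires, which this brute-force construction makes no attempt to optimize.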

Original language: English
Pages (from-to): 921-931
Number of pages: 11
Journal: IEEE Transactions on Neural Networks and Learning Systems
Volume: 34
Issue number: 2
DOI:
Publication status: Published - 1 Feb 2023
Externally published: Yes
