On the Compressive Power of Boolean Threshold Autoencoders

Avraham A. Melkman, Sini Guo, Wai Ki Ching, Pengyu Liu, Tatsuya Akutsu*

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

Abstract

An autoencoder is a layered neural network whose structure can be viewed as consisting of an encoder, which compresses an input vector to a lower-dimensional vector, and a decoder, which transforms the low-dimensional vector back to the original input vector (or one that is very similar). In this article, we explore the compressive power of autoencoders that are Boolean threshold networks by studying the numbers of nodes and layers required to ensure that each vector in a given set of distinct binary input vectors is mapped back to itself. We show that for any set of n distinct vectors there exists a seven-layer autoencoder with the optimal compression ratio (i.e., the size of the middle layer is logarithmic in n), but that there is a set of n vectors for which no three-layer autoencoder has a middle layer of logarithmic size. In addition, we present a kind of tradeoff: if the compression ratio is allowed to be considerably larger than the optimal one, then a five-layer autoencoder suffices. We also study the numbers of nodes and layers required for encoding alone; the results suggest that the decoding part is the bottleneck of autoencoding. For example, there always exists a three-layer Boolean threshold encoder that compresses n vectors into a dimension that is twice the logarithm of n.
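To make the model concrete, the following is a minimal Python sketch, not a construction from the paper itself: it builds a three-layer Boolean threshold autoencoder for the special case where the n inputs are the n one-hot vectors. Every node computes a threshold function (it outputs 1 iff its weighted input sum reaches its threshold); the encoder uses OR gates and the decoder uses AND-of-literals gates, both of which are threshold functions. The names threshold_layer, W_enc, and W_dec are illustrative, and the general constructions in the paper handle arbitrary distinct vectors and require more layers.

    import math
    import numpy as np

    def threshold_layer(W, t, x):
        # One layer of a Boolean threshold network: node i fires (outputs 1)
        # iff the weighted sum W[i] . x reaches its threshold t[i].
        return (W @ x >= t).astype(int)

    n = 8                        # number of distinct input vectors (here: one-hot)
    m = math.ceil(math.log2(n))  # middle-layer width, logarithmic in n

    # Encoder: code bit j ORs together the inputs whose index has bit j set.
    # OR is a threshold gate with unit weights and threshold 1.
    W_enc = np.array([[(i >> j) & 1 for i in range(n)] for j in range(m)])
    t_enc = np.ones(m)

    # Decoder: output i fires iff the code equals the binary expansion of i.
    # An AND of literals is a threshold gate: weight +1 for bits that must be 1,
    # weight -1 for bits that must be 0, threshold = number of required 1-bits.
    W_dec = np.array([[1 if (i >> j) & 1 else -1 for j in range(m)] for i in range(n)])
    t_dec = np.array([bin(i).count("1") for i in range(n)])

    for i in range(n):
        x = np.zeros(n, dtype=int)
        x[i] = 1                                      # one-hot input vector
        code = threshold_layer(W_enc, t_enc, x)       # compress to m bits
        x_hat = threshold_layer(W_dec, t_dec, code)   # reconstruct
        assert (x_hat == x).all()
    print("all", n, "one-hot vectors reconstructed via a", m, "-bit middle layer")

Even this toy case hints at the asymmetry highlighted in the abstract: the OR-gate encoder is immediate, whereas the decoder needs one dedicated gate per vector to be recovered.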

Original language: English
Pages (from-to): 921-931
Number of pages: 11
Journal: IEEE Transactions on Neural Networks and Learning Systems
Volume: 34
Issue number: 2
DOIs
Publication status: Published - 1 Feb 2023
Externally published: Yes

Keywords

  • Autoencoders
  • Boolean functions
  • neural networks
  • threshold functions

