TY - JOUR
T1 - Attract-Repel Encoder
T2 - Learning Anomaly Representation Away From Landmarks
AU - Zhao, Jiachen
AU - Deng, Fang
AU - Li, Yongling
AU - Chen, Jie
N1 - Publisher Copyright:
© 2012 IEEE.
PY - 2022/6/1
Y1 - 2022/6/1
N2 - Anomaly detection (AD) has attracted great interest in the data mining community. With the development of deep learning, various deep autoencoders have been used and modified to solve AD problems due to their efficient data coding and reconstruction mechanisms. However, such methods still face challenges on practical AD tasks. On the one hand, an AD dataset may contain diverse normal patterns rather than a single universal pattern: the normal data usually distribute over multiple clusters, and the exact number of clusters is hard to know in practice. On the other hand, most existing autoencoder-based methods focus on encoding normal features and have not explored the characteristics of abnormal data. To tackle these challenges, this article proposes a novel autoencoder-based AD model, the attract-repel encoder (ARE). ARE selects landmarks in the encoding space to represent the diverse normal patterns and can adaptively update the landmarks and their quantity during training. This article then proposes the attract-repel (AR) loss function to train ARE. The AR loss attracts normal samples toward the landmarks and repels anomalies away from them, so that ARE learns both normal and abnormal features. Finally, ARE computes a sample's anomaly score by summing its reconstruction error and its distance to the landmarks. Moreover, ARE can be trained in either a semisupervised or an unsupervised manner. This article presents comprehensive experiments to evaluate the effectiveness of the approach.
AB - Anomaly detection (AD) has attracted great interest in the data mining community. With the development of deep learning, various deep autoencoders have been used and modified to solve AD problems due to their efficient data coding and reconstruction mechanisms. However, such methods still face challenges on practical AD tasks. On the one hand, an AD dataset may contain diverse normal patterns rather than a single universal pattern: the normal data usually distribute over multiple clusters, and the exact number of clusters is hard to know in practice. On the other hand, most existing autoencoder-based methods focus on encoding normal features and have not explored the characteristics of abnormal data. To tackle these challenges, this article proposes a novel autoencoder-based AD model, the attract-repel encoder (ARE). ARE selects landmarks in the encoding space to represent the diverse normal patterns and can adaptively update the landmarks and their quantity during training. This article then proposes the attract-repel (AR) loss function to train ARE. The AR loss attracts normal samples toward the landmarks and repels anomalies away from them, so that ARE learns both normal and abnormal features. Finally, ARE computes a sample's anomaly score by summing its reconstruction error and its distance to the landmarks. Moreover, ARE can be trained in either a semisupervised or an unsupervised manner. This article presents comprehensive experiments to evaluate the effectiveness of the approach.
KW - Anomaly detection (AD)
KW - contrastive loss
KW - deep autoencoder (DAE)
KW - semisupervised learning
KW - unsupervised learning
UR - http://www.scopus.com/inward/record.url?scp=85131269443&partnerID=8YFLogxK
U2 - 10.1109/TNNLS.2021.3105400
DO - 10.1109/TNNLS.2021.3105400
M3 - Article
C2 - 34460394
AN - SCOPUS:85131269443
SN - 2162-237X
VL - 33
SP - 2466
EP - 2479
JO - IEEE Transactions on Neural Networks and Learning Systems
JF - IEEE Transactions on Neural Networks and Learning Systems
IS - 6
ER -