Neural Variational Gaussian Mixture Topic Model

Yi Kun Tang*, Heyan Huang, Xuewen Shi, Xian Ling Mao

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

1 Citation (Scopus)

Abstract

Neural variational inference-based topic modeling has achieved great success in mining abstract topics from documents. However, these topic models mainly focus on optimizing the topic proportions of documents, while the quality and internal structure of the topics themselves are often neglected. Specifically, these models offer no guarantee that semantically related words are assigned to the same topic, and they struggle to ensure the interpretability of topics. Moreover, many topical words recur frequently among the top words of different topics, which makes the learned topics semantically redundant and similar to one another, and of little value for further study. To solve the above problems, we propose a novel neural topic model called the Neural Variational Gaussian Mixture Topic Model (NVGMTM). We use Gaussian distributions to capture the semantic relevance between words within topics: each topic in NVGMTM is modeled as a multivariate Gaussian distribution over words in the word-embedding space. Thus, semantically related words share similar probabilities in each topic, which makes the topics more coherent and interpretable. Experimental results on two public corpora show that the proposed model outperforms state-of-the-art baselines.
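
To make the central idea concrete, below is a minimal sketch (not the authors' released code) of how a single Gaussian topic over the word-embedding space can be turned into a topic-word distribution: each word's embedding is scored under the topic's multivariate Gaussian, and the scores are normalized with a softmax so that words whose embeddings lie near the topic mean receive similar, high probabilities. The function name, the diagonal-covariance simplification, and the toy data are illustrative assumptions, not details taken from the paper.

    # Illustrative sketch of a Gaussian topic over word embeddings.
    # Assumptions (not from the paper): diagonal covariance, softmax
    # normalization of log-densities, toy random embeddings.
    import numpy as np

    def topic_word_distribution(word_embeddings, topic_mean, topic_log_var):
        """Turn one Gaussian topic into a distribution over the vocabulary.

        word_embeddings : (V, D) array, one embedding per vocabulary word
        topic_mean      : (D,) Gaussian mean in embedding space
        topic_log_var   : (D,) log of an assumed diagonal covariance
        """
        var = np.exp(topic_log_var)
        # Log-density of each word embedding under N(mean, diag(var)),
        # dropping constants that cancel in the softmax below.
        diff = word_embeddings - topic_mean                      # (V, D)
        log_density = -0.5 * np.sum(diff**2 / var + topic_log_var, axis=1)
        # Softmax: embeddings close to the topic mean get similar, high
        # probabilities, so semantically related words cluster in a topic.
        log_density -= log_density.max()
        probs = np.exp(log_density)
        return probs / probs.sum()

    # Toy usage: 5 words in a 3-dimensional embedding space, one topic
    # centered on word 0's embedding with unit variance.
    rng = np.random.default_rng(0)
    E = rng.normal(size=(5, 3))
    beta_k = topic_word_distribution(E, topic_mean=E[0],
                                     topic_log_var=np.zeros(3))
    print(beta_k)  # word 0 (at the mean) receives the highest probability

In a full model along these lines, the per-topic means and (log-)variances would be learned jointly with the document-topic proportions inside the neural variational inference framework; the sketch only shows how a Gaussian in embedding space induces a coherent topic-word distribution.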

Original language: English
Article number: 110
Journal: ACM Transactions on Asian and Low-Resource Language Information Processing
Volume: 22
Issue number: 4
DOIs
Publication status: Published - 25 Mar 2023

Keywords

  • Neural variational Gaussian mixture topic model
  • topic discrimination
  • topic quality
