Applicable and Partial Learning of Graph Topology Without Sparsity Priors

Yanli Yuan, Dewen Soh, Zehui Xiong*, Tony Q.S. Quek

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

Abstract

This paper considers the problem of learning the underlying graph topology of Gaussian Graphical Models (GGMs) from observations. In high-dimensional settings, many existing graph topology learning algorithms achieve low sample complexity by assuming that structural constraints such as sparsity hold. Without prior knowledge of the graph's sparsity, the correctness of their results is difficult to verify. In this paper, we aim to do away with these assumptions by developing algorithms for learning degree-bounded GGMs and separable GGMs without any sparsity priors. The proposed algorithms, which rely only on the conditional independence relations in the data distribution, require minimal structural assumptions while still achieving low sample complexity, and hence are 'applicable'. Specifically, for any user-defined sparsity parameter k, we prove that the proposed algorithms can consistently identify whether a p-dimensional GGM is degree-bounded by k (or strongly k-separable) with Ω(k log p) sample complexity. Moreover, our algorithms exhibit 'partial' learning properties whenever the overall graph is not entirely sparse, that is, when not all nodes are degree-bounded (or strongly separable). In this case, we can still learn the sparse portions of the graph, with theoretical guarantees. Numerical results show that existing algorithms fail even in some simple settings where the sparsity assumptions do not hold, whereas our algorithms do not.
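The abstract describes algorithms that decide, from conditional independence relations alone, whether a GGM is degree-bounded by a user-defined parameter k. The paper's actual procedures are not reproduced here; the sketch below is only a generic illustration of how conditional independence can be tested via partial correlations over conditioning sets of size at most k. The function names, the fixed threshold, and the exhaustive search over conditioning sets are assumptions made for illustration and do not correspond to the authors' algorithms.

```python
import itertools
import numpy as np

def partial_corr(cov, i, j, cond):
    """Sample partial correlation of variables i and j given the set `cond`,
    computed from the inverse of the corresponding sub-covariance block."""
    idx = [i, j] + list(cond)
    sub = cov[np.ix_(idx, idx)]
    prec = np.linalg.pinv(sub)
    return -prec[0, 1] / np.sqrt(prec[0, 0] * prec[1, 1])

def ci_edge_recovery_sketch(X, k, thresh=0.1):
    """Illustrative conditional-independence-based edge recovery under a
    user-defined degree bound k (NOT the paper's algorithm).  An edge (i, j)
    is discarded as soon as some conditioning set of size <= k renders i and
    j approximately conditionally independent."""
    n, p = X.shape
    cov = np.cov(X, rowvar=False)
    edges = {(i, j) for i in range(p) for j in range(i + 1, p)}
    for i, j in list(edges):
        others = [v for v in range(p) if v not in (i, j)]
        for size in range(k + 1):
            separated = any(
                abs(partial_corr(cov, i, j, cond)) < thresh
                for cond in itertools.combinations(others, size)
            )
            if separated:
                edges.discard((i, j))  # a separating set of size <= k exists
                break
    return edges
```

In this kind of sketch, the conditioning sets searched per pair are of size at most k, which is what ties the procedure to a user-specified sparsity level rather than to a global sparsity assumption; the threshold and test statistic would in practice be chosen to control error rates.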

Original language: English
Pages (from-to): 360-371
Number of pages: 12
Journal: IEEE Transactions on Network Science and Engineering
Volume: 10
Issue number: 1
DOIs
Publication status: Published - 1 Jan 2023
Externally published: Yes

Keywords

  • Gaussian Graphical Models (GGMs)
  • Graph Topology Learning
  • High-dimensional Statistical Learning
  • Sparsity

