Edge Learning via Message Passing: Distributed Estimation Framework Based on Gaussian Mixture Model

Xiang Li, Weijie Yuan*, Kecheng Zhang, Nan Wu

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

Abstract

To leverage distributed data communication and learning in sensor networks effectively, edge learning (EL) methods have garnered significant attention. In distributed sensor networks, achieving consensus estimation of the variables of interest is a pivotal challenge. To address this challenge with EL methods, several approaches combining message passing (MP) algorithms have been proposed. In this article, we first describe the distributed consensus algorithm based on MP and summarize the sampling-based and parameter-based representations of the beliefs exchanged in the distributed MP algorithm. To improve estimation accuracy while retaining the low-complexity advantage of the parametric representation, we propose a distributed consensus framework based on Gaussian mixture model (GMM) MP, in which the beliefs are approximated as GMMs and kept in this form throughout the iterations. Two simulation scenarios, static target localization and dynamic target tracking, are used to evaluate the proposed distributed consensus estimation framework. Finally, simulation results show the performance advantages of the proposed algorithm.
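As a rough illustration of the GMM-based belief representation described above (a minimal sketch, not the authors' implementation), the Python snippet below fuses two GMM beliefs in a single message-passing step: the product of two GMMs is again a GMM with one component per pair of input components, and a simple reduction rule keeps the representation compact across iterations. The dimension, component counts, and the "keep the highest-weight components" reduction rule are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def gaussian_product(m1, S1, m2, S2):
    """Product of two Gaussian densities N(x; m1, S1) * N(x; m2, S2).

    Returns the mean and covariance of the resulting (unnormalized) Gaussian
    and its scale factor N(m1; m2, S1 + S2).
    """
    S_sum = S1 + S2
    K = S1 @ np.linalg.inv(S_sum)          # gain matrix
    m = m1 + K @ (m2 - m1)                 # fused mean
    S = S1 - K @ S1                        # fused covariance
    d = m1 - m2
    scale = np.exp(-0.5 * d @ np.linalg.solve(S_sum, d)) / \
            np.sqrt(np.linalg.det(2.0 * np.pi * S_sum))
    return m, S, scale

def gmm_product(gmm_a, gmm_b, max_components=4):
    """Multiply two GMM beliefs (lists of (weight, mean, cov) tuples)
    and reduce the result back to a fixed number of components."""
    comps = []
    for wa, ma, Sa in gmm_a:
        for wb, mb, Sb in gmm_b:
            m, S, z = gaussian_product(ma, Sa, mb, Sb)
            comps.append((wa * wb * z, m, S))
    # Illustrative reduction: keep the dominant components and renormalize.
    comps.sort(key=lambda c: c[0], reverse=True)
    comps = comps[:max_components]
    w_sum = sum(w for w, _, _ in comps)
    return [(w / w_sum, m, S) for w, m, S in comps]

# Toy usage: two sensors hold different multi-modal beliefs about a 2-D target.
belief_1 = [(0.6, np.array([0.0, 0.0]), np.eye(2)),
            (0.4, np.array([4.0, 4.0]), 2.0 * np.eye(2))]
belief_2 = [(0.5, np.array([0.5, -0.5]), np.eye(2)),
            (0.5, np.array([3.5, 4.5]), np.eye(2))]

fused = gmm_product(belief_1, belief_2)
for w, m, _ in fused:
    print(f"weight={w:.2f}, mean={m}")
```

In a full consensus scheme each node would repeat such a fusion step with the GMM messages received from its neighbors, which is why keeping the belief in GMM form after every iteration (rather than letting the component count grow) is the key design choice.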

Original language: English
Pages (from-to): 34409-34419
Number of pages: 11
Journal: IEEE Internet of Things Journal
Volume: 11
Issue number: 21
DOIs
Publication status: Published - 2024

Keywords

  • Consensus algorithm
  • distributed estimation
  • edge learning (EL)
  • factor graph
  • Gaussian mixture model (GMM)
  • message passing (MP)
