Communication-efficient hierarchical distributed optimization for multi-agent policy evaluation

Jineng Ren*, Jarvis Haupt, Zehua Guo

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

12 Citations (Scopus)

Abstract

Policy evaluation problems in multi-agent reinforcement learning (MARL) have attracted growing interest recently. In this setting, agents collaborate to learn the value of a given policy using private local rewards and jointly observed state-action pairs. However, existing fully decentralized algorithms treat each agent equally, without considering the communication structure of the agents over a given network and the corresponding effects on communication and computation efficiency. In this paper, we propose a hierarchical distributed algorithm that differentiates the roles of the agents during the evaluation process. This method allows us to freely choose various mixing schemes (and corresponding mixing matrices that are not necessarily symmetric or doubly stochastic) in order to reduce the communication and computation cost, while still maintaining convergence at rates as fast as or even faster than previous distributed algorithms. Theoretically, we show that the proposed method, which contains existing distributed methods as a special case, achieves the same order of convergence rate as state-of-the-art methods. Extensive numerical experiments on real datasets verify that our approach indeed improves, sometimes significantly, over other advanced algorithms in terms of convergence and total communication efficiency.
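To make the hierarchical idea concrete, below is a minimal sketch, not the paper's actual algorithm, of consensus-based TD(0) policy evaluation with linear value approximation. Agents keep private rewards, take local TD steps on jointly observed states, and then mix parameters through designated cluster heads; the mixing implied by such a scheme is row-stochastic but generally neither symmetric nor doubly stochastic. All names and numbers (n_agents, clusters, feature_dim, step sizes) are illustrative assumptions.

```python
# Hedged sketch: hierarchical consensus TD(0) with linear function approximation.
# Not the authors' exact method; dimensions, clustering, and mixing weights are
# illustrative assumptions only.
import numpy as np

rng = np.random.default_rng(0)

n_agents, feature_dim, n_states = 6, 4, 10
alpha, gamma, n_iters = 0.05, 0.9, 200

# Two clusters of three agents; the first agent in each cluster acts as its head.
clusters = [[0, 1, 2], [3, 4, 5]]
heads = [c[0] for c in clusters]

features = rng.normal(size=(n_states, feature_dim))    # shared state features
local_reward = rng.uniform(size=(n_agents, n_states))  # private per-agent rewards
P = rng.dirichlet(np.ones(n_states), size=n_states)    # policy-induced Markov chain

theta = np.zeros((n_agents, feature_dim))              # local parameter copies
state = 0
for _ in range(n_iters):
    next_state = rng.choice(n_states, p=P[state])
    phi, phi_next = features[state], features[next_state]

    # 1) Local TD(0) update with each agent's private reward.
    for i in range(n_agents):
        td_err = local_reward[i, state] + gamma * theta[i] @ phi_next - theta[i] @ phi
        theta[i] += alpha * td_err * phi

    # 2) Hierarchical mixing: heads average among themselves, ordinary members
    #    exchange only with their own head. The induced mixing matrix is
    #    row-stochastic but neither symmetric nor doubly stochastic.
    head_mix = 0.5 * theta[heads] + 0.5 * theta[heads].mean(axis=0)
    for c, h_new in zip(clusters, head_mix):
        for i in c[1:]:
            theta[i] = 0.5 * theta[i] + 0.5 * theta[c[0]]   # member pulls toward head
        theta[c[0]] = h_new                                  # head adopts mixed value
    state = next_state

print("example local parameter estimate:", theta[0])
```

The point of the sketch is the communication pattern: ordinary agents talk only to their cluster head, and only heads communicate across clusters, which is how a hierarchical scheme can cut total communication relative to a flat, fully decentralized mixing over the whole network.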

Original language: English
Article number: 101280
Journal: Journal of Computational Science
Volume: 49
DOI
Publication status: Published - Feb 2021
