TY - JOUR
T1 - Communication-efficient hierarchical distributed optimization for multi-agent policy evaluation
AU - Ren, Jineng
AU - Haupt, Jarvis
AU - Guo, Zehua
N1 - Publisher Copyright:
© 2020 Elsevier B.V.
PY - 2021/2
Y1 - 2021/2
AB - Policy evaluation problems in multi-agent reinforcement learning (MARL) have attracted growing interest recently. In this setting, agents collaborate to learn the value of a given policy using private local rewards and jointly observed state-action pairs. However, existing fully decentralized algorithms treat all agents equally, without considering the communication structure of the agents over a given network or the corresponding effects on communication and computation efficiency. In this paper, we propose a hierarchical distributed algorithm that differentiates the roles of the agents during the evaluation process. This method allows us to freely choose among various mixing schemes (and corresponding mixing matrices that are not necessarily symmetric or doubly stochastic) to reduce communication and computation costs, while still converging as fast as, or even faster than, previous distributed algorithms. Theoretically, we show that the proposed method, which contains existing distributed methods as a special case, achieves the same order of convergence rate as state-of-the-art methods. Extensive numerical experiments on real datasets verify that our approach indeed improves, sometimes significantly, over other advanced algorithms in terms of convergence and total communication efficiency.
KW - Communication efficiency
KW - Distributed algorithm
KW - Hierarchical
KW - Multi-agent policy evaluation
KW - Optimization algorithm
UR - http://www.scopus.com/inward/record.url?scp=85099465946&partnerID=8YFLogxK
U2 - 10.1016/j.jocs.2020.101280
DO - 10.1016/j.jocs.2020.101280
M3 - Article
AN - SCOPUS:85099465946
SN - 1877-7503
VL - 49
JO - Journal of Computational Science
JF - Journal of Computational Science
M1 - 101280
ER -