Meta-Inverse Reinforcement Learning for Mean Field Games via Probabilistic Context Variables

Yang Chen, Xiao Lin, Bo Yan, Libo Zhang, Jiamou Liu, Neset Özkan Tan, Michael Witbrock

Research output: Contribution to journal › Conference article › peer-review

Abstract

Designing suitable reward functions for numerous interacting intelligent agents is challenging in real-world applications. Inverse reinforcement learning (IRL) in mean field games (MFGs) offers a practical framework to infer reward functions from expert demonstrations. While promising, the assumption of agent homogeneity limits the capability of existing methods to handle demonstrations with heterogeneous and unknown objectives, which are common in practice. To this end, we propose a deep latent variable MFG model and an associated IRL method. Critically, our method can infer rewards from different yet structurally similar tasks without prior knowledge about underlying contexts or modifying the MFG model itself. Our experiments, conducted on simulated scenarios and a real-world spatial taxi-ride pricing problem, demonstrate the superiority of our approach over state-of-the-art IRL methods in MFGs.
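The abstract gives no implementation details, so the following is only a minimal, hypothetical sketch of the general idea it describes: a probabilistic context variable inferred from expert demonstrations that conditions a learned reward function in a mean field setting. It assumes PyTorch, a permutation-invariant Gaussian context encoder, and a reward network taking state, action, mean-field, and context inputs; all class names, dimensions, and architectural choices here are illustrative assumptions, not the authors' method.

import torch
import torch.nn as nn

class ContextEncoder(nn.Module):
    """Embeds a set of expert transitions into a Gaussian latent context q(z | demonstrations).
    (Illustrative assumption: a permutation-invariant mean-pooling encoder.)"""
    def __init__(self, transition_dim, latent_dim, hidden_dim=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(transition_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, 2 * latent_dim),  # outputs mean and log-variance
        )
        self.latent_dim = latent_dim

    def forward(self, transitions):
        # transitions: (num_transitions, transition_dim) from one demonstration set
        stats = self.net(transitions).mean(dim=0)   # order-invariant pooling over transitions
        mu, log_var = stats[: self.latent_dim], stats[self.latent_dim:]
        std = torch.exp(0.5 * log_var)
        z = mu + std * torch.randn_like(std)        # reparameterised sample of the context z
        return z, mu, log_var

class ContextConditionedReward(nn.Module):
    """Reward r(s, a, mu_t, z) conditioned on the inferred context variable z."""
    def __init__(self, state_dim, action_dim, mean_field_dim, latent_dim, hidden_dim=64):
        super().__init__()
        in_dim = state_dim + action_dim + mean_field_dim + latent_dim
        self.net = nn.Sequential(
            nn.Linear(in_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, 1),
        )

    def forward(self, state, action, mean_field, z):
        z = z.expand(state.shape[0], -1)            # broadcast one context over the batch
        return self.net(torch.cat([state, action, mean_field, z], dim=-1))

# Toy usage: infer a context from (hypothetical) demonstrations, then score transitions.
encoder = ContextEncoder(transition_dim=8, latent_dim=4)
reward_fn = ContextConditionedReward(state_dim=3, action_dim=2, mean_field_dim=3, latent_dim=4)
demo = torch.randn(32, 8)                           # flattened (s, a, s') transitions
z, mu, log_var = encoder(demo)
r = reward_fn(torch.randn(16, 3), torch.randn(16, 2), torch.randn(16, 3), z)
print(r.shape)  # torch.Size([16, 1])

In an IRL training loop, the encoder and reward network would typically be optimised jointly so that demonstrations from structurally similar but heterogeneous tasks map to distinct contexts; this sketch shows only the forward pass.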

Original language: English
Pages (from-to): 11407-11415
Number of pages: 9
Journal: Proceedings of the AAAI Conference on Artificial Intelligence
Volume: 38
Issue number: 10
DOIs
Publication status: Published - 25 Mar 2024
Event: 38th AAAI Conference on Artificial Intelligence, AAAI 2024 - Vancouver, Canada
Duration: 20 Feb 2024 – 27 Feb 2024

