Publications


Analysing factorizations of action-value networks for cooperative multi-agent reinforcement learning

Jacopo Castellini, Frans A. Oliehoek, Rahul Savani, and Shimon Whiteson. Analysing factorizations of action-value networks for cooperative multi-agent reinforcement learning. Autonomous Agents and Multi-Agent Systems, 35(25), June 2021.

Download

pdf [3.2MB]

Abstract

Recent years have seen the application of deep reinforcement learning techniques to cooperative multi-agent systems, with great empirical success. However, given the lack of theoretical insight, it remains unclear what the employed neural networks are learning, or how we should enhance their learning power to address the problems on which they fail. In this work, we empirically investigate the learning power of various network architectures on a series of one-shot games. Despite their simplicity, these games capture many of the crucial problems that arise in the multi-agent setting, such as an exponential number of joint actions or the lack of an explicit coordination mechanism. Our results extend those in Castellini et al. and quantify how well various approaches can represent the requisite value functions, and help us identify the reasons that can impede good performance, like sparsity of the values or too tight coordination requirements.

BibTeX Entry

@article{Castellini21JAAMAS,
    author =    {Castellini, Jacopo and 
                 Oliehoek, Frans A. and 
                 Savani, Rahul and
                 Whiteson, Shimon},
    title =     {Analysing factorizations of action-value networks for cooperative multi-agent reinforcement learning},
    journal =   JAAMAS,
    year =      2021,
    month =     jun,
    volume=     {35},
    number=     {25},
    OPTpages =     {online},
    doi =       {10.1007/s10458-021-09506-w},
    url =       {https://doi.org/10.1007/s10458-021-09506-w},
    keywords =   {refereed},
    abstract = {
Recent years have seen the application of deep reinforcement learning techniques to
cooperative multi-agent systems, with great empirical success. However, given the lack of
theoretical insight, it remains unclear what the employed neural networks are learning, or
how we should enhance their learning power to address the problems on which they fail. In
this work, we empirically investigate the learning power of various network architectures
on a series of one-shot games. Despite their simplicity, these games capture many of the
crucial problems that arise in the multi-agent setting, such as an exponential number of
joint actions or the lack of an explicit coordination mechanism. Our results extend those in
Castellini et al. and quantify how well various approaches
can represent the requisite value functions, and help us identify the reasons that can impede
good performance, like sparsity of the values or too tight coordination requirements.
    }
}

Generated by bib2html.pl (written by Patrick Riley) on Mon Oct 07, 2024 14:17:04 UTC