Publications


The Representational Capacity of Action-Value Networks for Multi-Agent Reinforcement Learning

Jacopo Castellini, Frans A. Oliehoek, Rahul Savani, and Shimon Whiteson. The Representational Capacity of Action-Value Networks for Multi-Agent Reinforcement Learning. In Proceedings of the Eighteenth International Conference on Autonomous Agents and Multiagent Systems (AAMAS), pp. 1862–1864, May 2019.
Extended abstract. Please also see the extended version on arXiv.

Download

pdf [593.2kB]  

Abstract

Recent years have seen the application of deep reinforcement learning techniques to cooperative multi-agent systems, with great empirical success. However, given the lack of theoretical insight, it remains unclear what the employed neural networks are learning, or how we should enhance their representational power to address the problems on which they fail. In this work, we empirically investigate the representational power of various network architectures on a series of one-shot games. Despite their simplicity, these games capture many of the crucial problems that arise in the multi-agent setting, such as an exponential number of joint actions or the lack of an explicit coordination mechanism. Our results quantify how well various approaches can represent the requisite value functions, and help us identify issues that can impede good performance.

BibTeX Entry

@inproceedings{Castellini19AAMAS,
    author =    {Castellini, Jacopo and 
                 Oliehoek, Frans A. and 
                 Savani, Rahul and
                 Whiteson, Shimon},
    title =     {The Representational Capacity of Action-Value Networks for 
                 Multi-Agent Reinforcement Learning},
    booktitle = AAMAS19,
    year =      2019,
    month =     may,
    pages =     {1862--1864},
    wwwnote =   {Extended abstract. Please also see the <a href="https://arxiv.org/abs/1902.07497">extended version on arXiv</a>.},
    abstract = {
    Recent years have seen the application of deep reinforcement learning
    techniques to cooperative multi-agent systems, with great empirical
    success. However, given the lack of theoretical insight, it remains
    unclear what the employed neural networks are learning, or how we
    should enhance their representational power to address the problems on
    which they fail. In this work, we empirically investigate the
    representational power of various network architectures on a series of
    one-shot games. Despite their simplicity, these games capture many of
    the crucial problems that arise in the multi-agent setting, such as an
    exponential number of joint actions or the lack of an explicit
    coordination mechanism. Our results quantify how well various
    approaches can represent the requisite value functions, and help us
    identify issues that can impede good performance.
    }
}
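Note that `booktitle = AAMAS19` is an unquoted @string abbreviation, so this entry only compiles if that macro is defined before the entry is read (typically in a shared strings file). A hypothetical definition, matching the proceedings title given in the citation above, might look like:

```bibtex
% Sketch of a @string macro for the AAMAS19 proceedings title;
% the actual macro lives in whatever strings file the site maintainer uses.
@string{AAMAS19 = {Proceedings of the Eighteenth International Conference
                   on Autonomous Agents and Multiagent Systems (AAMAS)}}
```

Alternatively, replace `AAMAS19` with the braced proceedings title directly to make the entry self-contained.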

Generated by bib2html.pl (written by Patrick Riley) on Mon Oct 07, 2024 14:17:04 UTC