Publications


Best Response Bayesian Reinforcement Learning for Multiagent Systems with State Uncertainty

Frans A. Oliehoek and Christopher Amato. Best Response Bayesian Reinforcement Learning for Multiagent Systems with State Uncertainty. In Proceedings of the Ninth AAMAS Workshop on Multi-Agent Sequential Decision Making in Uncertain Domains (MSDM), 2014.

Download

pdf [231.1kB]  

Abstract

It is often assumed that agents in multiagent systems with state uncertainty have full knowledge of the model of dynamics and sensors, but in many cases this is not feasible. A more realistic assumption is that agents must learn about the environment and other agents while acting. Bayesian methods for reinforcement learning are promising for this type of learning because they allow model uncertainty to be considered explicitly and offer a principled way of dealing with the exploration/exploitation tradeoff. In this paper, we propose a Bayesian RL framework for best response learning in which an agent has uncertainty over the environment and the policies of the other agents. This is a very general model that can incorporate different assumptions about the form of other policies. We seek to maximize performance and learn the appropriate models while acting in an online fashion by using sample-based planning built from powerful Monte-Carlo tree search methods. We discuss the theoretical properties of this approach, and experimental results show that the learning approaches can significantly increase value when compared to initial models and policies.
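
To make the idea concrete, here is a minimal Python sketch (not the authors' code) of the sample-based planning loop the abstract describes: the agent keeps a posterior over a finite set of candidate opponent strategies in a toy 2x2 game, root-samples a strategy for each Monte-Carlo simulation to evaluate its own actions, acts, and then updates the posterior by Bayes' rule. The game, candidate set, and simulation budget are all made-up assumptions for illustration; the paper's full method uses Monte-Carlo tree search over histories.

# Toy sketch of best-response Bayesian RL via root sampling.
# Hypothetical 2x2 game: PAYOFF[(our_action, opponent_action)].
import random
from collections import defaultdict

PAYOFF = {(0, 0): 3.0, (0, 1): 0.0, (1, 0): 1.0, (1, 1): 2.0}

# Candidate opponent strategies: each is P(opponent plays action 0).
CANDIDATES = [0.9, 0.5, 0.1]

def sample_opponent(p0):
    return 0 if random.random() < p0 else 1

def plan(belief, n_sims=2000):
    # Monte-Carlo evaluation with root sampling: each simulation draws
    # an opponent strategy from the current posterior, simulates its
    # action, and scores one of our actions (a bandit-style stand-in
    # for the paper's full tree search over histories).
    value, count = defaultdict(float), defaultdict(int)
    for sim in range(n_sims):
        p0 = random.choices(CANDIDATES, weights=belief)[0]
        a_us = sim % 2                     # alternate our candidate actions
        value[a_us] += PAYOFF[(a_us, sample_opponent(p0))]
        count[a_us] += 1
    return max((0, 1), key=lambda a: value[a] / count[a])

def update_belief(belief, a_opp):
    # Exact Bayes update of the posterior over candidate strategies.
    post = [b * (p0 if a_opp == 0 else 1.0 - p0)
            for b, p0 in zip(belief, CANDIDATES)]
    z = sum(post)
    return [p / z for p in post]

if __name__ == "__main__":
    true_p0 = 0.9                          # opponent's actual (hidden) strategy
    belief = [1.0 / 3.0] * 3               # uniform prior over candidates
    for step in range(20):
        a_us = plan(belief)                # plan against the current belief
        a_opp = sample_opponent(true_p0)   # observe the opponent act
        belief = update_belief(belief, a_opp)
        print(step, a_us, [round(b, 3) for b in belief])

As the posterior concentrates on the true strategy, the planner's chosen action converges to the best response (here, action 0 against an opponent that mostly plays 0, since 0.9 * 3.0 beats 0.9 * 1.0 + 0.1 * 2.0).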

BibTeX Entry

@inproceedings{Oliehoek14MSDM,
    author =    {Frans A. Oliehoek and Christopher Amato},
    title =     {Best Response {Bayesian} Reinforcement Learning for Multiagent Systems with State Uncertainty},
    booktitle = {Proceedings of the Ninth AAMAS Workshop on Multi-Agent Sequential Decision Making in Uncertain Domains (MSDM)},
    year =      2014,
    keywords =  {workshop},
    abstract = {
    It is often assumed that agents in multiagent systems with state
    uncertainty have full knowledge of the model of dynamics and sensors,
    but in many cases this is not feasible. A more realistic assumption
    is that agents must learn about the environment and other agents
    while acting. Bayesian methods for reinforcement learning are
    promising for this type of learning because they allow model
    uncertainty to be considered explicitly and offer a principled way of
    dealing with the exploration/exploitation tradeoff. In this paper, we
    propose a Bayesian RL framework for best response learning in which
    an agent has uncertainty over the environment and the policies of the
    other agents. This is a very general model that can incorporate
    different assumptions about the form of other policies. We seek to
    maximize performance and learn the appropriate models while acting in
    an online fashion by using sample-based planning built from powerful
    Monte-Carlo tree search methods. We discuss the theoretical
    properties of this approach, and experimental results show that the
    learning approaches can significantly increase value when compared to
    initial models and policies.
    }
}
