Publications


Exploiting Locality of Interaction in Factored Dec-POMDPs

Frans A. Oliehoek, Matthijs T. J. Spaan, Shimon Whiteson, and Nikos Vlassis. Exploiting Locality of Interaction in Factored Dec-POMDPs. In Proceedings of the Seventh Joint International Conference on Autonomous Agents and Multiagent Systems (AAMAS), pp. 517–524, May 2008.

Download

pdf [2.1MB]  ps.gz [375.3kB]  

Abstract

Decentralized partially observable Markov decision processes (Dec-POMDPs) constitute a generic and expressive framework for multiagent planning under uncertainty, but solving them exactly is provably intractable. In this paper we demonstrate how their scalability can be improved by exploiting locality of interaction between agents through a factored representation. Factored Dec-POMDP representations have been proposed before, but only for Dec-POMDPs whose transition and observation models are fully independent. Such strong assumptions simplify the planning problem, but they result in models with limited applicability. In contrast, we consider general factored Dec-POMDPs, for which we analyze the model dependencies over space (locality of interaction) and time (horizon of the problem). We also present a formulation of decomposable optimal and approximate value functions for our model. Together, our results allow us to exploit the problem structure as well as heuristics in a single framework that is based on collaborative graphical Bayesian games (CGBGs). Our experiments show a speedup of two orders of magnitude.
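
As an informal illustration of the core idea (not the paper's algorithm), the Python sketch below shows how a one-step value can decompose into a sum of local terms when each agent interacts only with a small neighbourhood of state factors. All names here (FactoredDecPOMDP, local_reward, decomposed_value) are hypothetical and stand in for the full model components described in the paper.

    # Illustrative sketch only, not the authors' implementation: a toy factored
    # Dec-POMDP in which each agent touches a few state factors, and the joint
    # (one-step) value is a sum of local terms over those neighbourhoods.
    import itertools
    from typing import Dict, List, Tuple

    class FactoredDecPOMDP:
        def __init__(self, n_agents: int, factor_domain: List[int],
                     neighbourhoods: Dict[int, List[int]]):
            # neighbourhoods[i] lists the state-factor indices agent i interacts with
            self.n_agents = n_agents
            self.factor_domain = factor_domain
            self.neighbourhoods = neighbourhoods

        def local_reward(self, agent: int, local_state: Tuple[int, ...],
                         action: int) -> float:
            # Toy local reward model: the agent is rewarded when its action
            # matches the parity of its neighbouring factors.
            return 1.0 if action == sum(local_state) % 2 else 0.0

    def decomposed_value(model: FactoredDecPOMDP,
                         joint_state: Tuple[int, ...],
                         joint_action: Tuple[int, ...]) -> float:
        # Locality of interaction: the value is a sum of local terms, each of
        # which depends only on an agent's neighbourhood and its own action.
        return sum(
            model.local_reward(i,
                               tuple(joint_state[f] for f in model.neighbourhoods[i]),
                               joint_action[i])
            for i in range(model.n_agents)
        )

    if __name__ == "__main__":
        # Two agents on a chain of three binary state factors; agent 0 sees
        # factors {0, 1}, agent 1 sees factors {1, 2} (overlapping on factor 1).
        model = FactoredDecPOMDP(n_agents=2, factor_domain=[2, 2, 2],
                                 neighbourhoods={0: [0, 1], 1: [1, 2]})
        state = (1, 0, 1)
        best = max(itertools.product([0, 1], repeat=2),
                   key=lambda a: decomposed_value(model, state, a))
        print("best joint action:", best)

Because each local term only ranges over a neighbourhood, coordination can be carried out on the interaction graph (as in the CGBG framework) rather than over the exponentially large joint action space.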

BibTeX Entry

@InProceedings{Oliehoek08AAMAS,
    author =       {Frans A. Oliehoek and Matthijs T. J. Spaan and
                 Shimon Whiteson and Nikos Vlassis},
    title =     {Exploiting Locality of Interaction in Factored 
                 {Dec-POMDPs}},
    booktitle = AAMAS08,
    month =     may,
    year =      2008,
    pages =     {517--524},
    url =       {www.ifaamas.org/Proceedings/aamas08/proceedings/pdf/paper/AAMAS08_0189.pdf},
    abstract = 	{
      Decentralized partially observable Markov decision processes
      (Dec-POMDPs) constitute a generic and expressive framework for
      multiagent planning under uncertainty, but solving them exactly
      is provably intractable. In this paper we demonstrate how their
      scalability can be improved by exploiting locality of
      interaction between agents through a factored representation.
      Factored Dec-POMDP representations have been proposed before,
      but only for Dec-POMDPs whose transition and observation models
      are fully independent. Such strong assumptions simplify the
      planning problem, but they result in models with limited
      applicability. In contrast, we consider general factored
      Dec-POMDPs, for which we analyze the model dependencies over
      space (locality of interaction) and time (horizon of the
      problem). We also present a formulation of decomposable optimal
      and approximate value functions for our model. Together, our
      results allow us to exploit the problem structure as well as
      heuristics in a single framework that is based on collaborative
      graphical Bayesian games (CGBGs). Our experiments show a speedup
      of two orders of magnitude.
  }
}
