Publications


Beyond Local Nash Equilibria for Adversarial Networks

Frans A. Oliehoek, Rahul Savani, Jose Gallego, Elise van der Pol, and Roderich Groß. Beyond Local Nash Equilibria for Adversarial Networks. In Artificial Intelligence, pp. 73–89, Springer International Publishing, September 2019.
Also see arXiv version.

Download

pdf [1.3MB]

Abstract

Save for some special cases, current training methods for Generative Adversarial Networks (GANs) are at best guaranteed to converge to a "local Nash equilibrium" (LNE). Such LNEs, however, can be arbitrarily far from an actual Nash equilibrium (NE), which implies that there are no guarantees on the quality of the found generator or classifier. This paper proposes to model GANs explicitly as finite games in mixed strategies, thereby ensuring that every LNE is an NE. We use the Parallel Nash Memory as a solution method, which is proven to monotonically converge to a resource-bounded Nash equilibrium. We empirically demonstrate that our method is less prone to typical GAN problems such as mode collapse and produces solutions that are less exploitable than those produced by GANs and MGANs.

BibTeX Entry

@inproceedings{Oliehoek19BNAIC_pp,
    author= {Oliehoek, Frans A.
             and Savani, Rahul
             and Gallego, Jose
             and van der Pol, Elise
             and Gro{\ss}, Roderich},
    editor= {Atzmueller, Martin
             and Duivesteijn, Wouter},
    title=      {Beyond Local {Nash} Equilibria for Adversarial Networks},
    booktitle=  {Artificial Intelligence},
    year=       {2019},
    publisher=  {Springer International Publishing},
    pages=      {73--89},
    month=      sep,
    OPTurl=     {https://doi.org/10.1007/978-3-030-31978-6_7},
    doi=        {10.1007/978-3-030-31978-6_7},
    wwwnote=    {Also see <a href="https://arxiv.org/abs/1806.07268">arXiv version</a>.},
    keywords=   {refereed},
    abstract = {
    Save for some special cases, current training methods for Generative
    Adversarial Networks (GANs) are at best guaranteed to converge to a `local
    Nash equilibrium' (LNE). Such LNEs, however, can be arbitrarily far from an
    actual Nash equilibrium (NE), which implies that there are no guarantees on
    the quality of the found generator or classifier. This paper proposes to
    model GANs explicitly as finite games in mixed strategies, thereby ensuring
    that every LNE is an NE. We use the Parallel Nash Memory as a solution
    method, which is proven to monotonically converge to a resource-bounded
    Nash equilibrium. We empirically demonstrate that our method is less prone
    to typical GAN problems such as mode collapse and produces solutions that
    are less exploitable than those produced by GANs and MGANs.        
    }
}

Generated by bib2html.pl (written by Patrick Riley) on Mon Oct 07, 2024 14:17:04 UTC