@article{Oliehoek18arxiv__beyond,
author = {Frans A. Oliehoek and
Rahul Savani and
Jose Gallego{-}Posada and
Elise van der Pol and
Roderich Gross},
title = {Beyond Local {Nash} Equilibria for Adversarial Networks},
journal = {ArXiv e-prints},
archivePrefix = {arXiv},
eprint = {1806.07268},
primaryClass = {cs.LG},
keywords = {Computer Science - Learning, Computer Science - Computer Science and Game Theory, Statistics - Machine Learning, nonrefereed, arxiv},
year = 2018,
month = jun,
wwwnote = {Also available from <a href="https://arxiv.org/abs/1806.07268">arXiv</a>.},
abstract = {
Save for some special cases, current training methods for Generative
Adversarial Networks (GANs) are at best guaranteed to converge to a
`local Nash equilibrium' (LNE). Such LNEs, however, can be arbitrarily
far from an actual Nash equilibrium (NE), which implies that there are
no guarantees on the quality of the found generator or classifier.
This paper proposes to model GANs explicitly as finite games in mixed
strategies, thereby ensuring that every LNE is an NE. With this
formulation, we propose a solution method that is proven to
monotonically converge to a resource-bounded Nash equilibrium (RB-NE):
by increasing computational resources we can find better solutions. We
empirically demonstrate that our method is less prone to typical GAN
problems such as mode collapse, and produces solutions that are less
exploitable than those produced by GANs and MGANs, and closely
resemble theoretical predictions about NEs.
}
}