@inproceedings{Albers21BNAICBenelearn,
  author = {Albers, Nele
and Suau, Miguel
and Oliehoek, Frans A.},
title = {Using Bisimulation Metrics to Analyze and Evaluate Latent State Representations},
booktitle = BNAICBenelearn21,
year = 2021,
month = nov,
pages = {320--334},
keywords = {refereed},
abstract = {
Deep Reinforcement Learning (RL) is a promising technique towards constructing
intelligent agents, but it is not always easy to understand the learning
process and the factors that impact it. To shed some light on this, we
analyze the Latent State Representations (LSRs) that deep RL agents learn,
and compare them to what such agents should ideally learn. We propose a crisp
definition of `ideal LSR' based on a bisimulation metric, which measures
how behaviorally similar two states are. The ideal LSR is the one in which the
distance between two states is proportional to this bisimulation metric.
Intuitively, forming such an ideal representation is highly favorable due
to its compactness and generalization properties. Here we investigate if
this type of representation is also desirable in practice. Our experiments
suggest that learning representations that are close to this ideal LSR may
improve generalization to new irrelevant feature values and modified
dynamics. Yet, we show empirically that the extent to which such
representations are learned depends on both the network capacity and the
state encoding, and that with the current techniques the exact ideal LSR is
never formed.
}
}
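As a rough illustration of the metric the abstract refers to, the following sketch runs the standard fixed-point iteration for a bisimulation metric (in the style of Ferns et al.) on a tiny deterministic MDP, where the Wasserstein term reduces to the metric between successor states. The MDP, rewards, and discount factor here are invented for illustration; this is not the paper's code or experimental setup.

```python
# Hedged sketch: fixed-point iteration of a Ferns-style bisimulation
# metric on a small *deterministic* toy MDP (states, rewards, and
# gamma are hypothetical, chosen only for illustration).

GAMMA = 0.9
STATES = [0, 1, 2]
ACTIONS = [0, 1]
# T[s][a] -> deterministic next state, R[s][a] -> reward
T = {0: {0: 1, 1: 2}, 1: {0: 1, 1: 2}, 2: {0: 2, 1: 2}}
R = {0: {0: 0.0, 1: 1.0}, 1: {0: 0.0, 1: 1.0}, 2: {0: 1.0, 1: 1.0}}

def bisimulation_metric(iters=200):
    """Iterate d(s,t) = max_a |R(s,a)-R(t,a)| + gamma * d(T(s,a), T(t,a))."""
    d = {(s, t): 0.0 for s in STATES for t in STATES}
    for _ in range(iters):
        d = {(s, t): max(abs(R[s][a] - R[t][a]) + GAMMA * d[(T[s][a], T[t][a])]
                         for a in ACTIONS)
             for s in STATES for t in STATES}
    return d

d = bisimulation_metric()
# States 0 and 1 have identical rewards and successors under every
# action, so they are bisimilar and d(0, 1) == 0; state 2 differs in
# reward under action 0, so d(1, 2) converges to 1/(1 - GAMMA) = 10.
```

In the paper's terms, an ideal LSR would embed states so that latent distances are proportional to these values; here states 0 and 1 could share a single latent point while state 2 sits apart.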
% Generated by bib2html.pl (written by Patrick Riley) on Thu Nov 06, 2025 10:14:50 UTC