Scaling Bayesian RL for Factored POMDPs

Reinforcement learning is tough. POMDPs are hard. And doing RL in partially observable problems is a huge challenge. With Sammie and Chris Amato, I have been making some progress on getting a principled method (based on Monte Carlo tree search) to scale to structured problems. We can learn both how to act and the structure of the problem at the same time. See the paper and bib.
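
For a rough flavour of what such a method does, here is a minimal sketch, under my own assumptions, of Monte Carlo planning for Bayesian RL: keep a particle belief over both the hidden state and the unknown model, and estimate action values by simulating rollouts from sampled particles. The paper's method builds a search tree over such simulations and exploits the factored structure of the problem; the Particle, simulate, and policy pieces below are illustrative placeholders, not the paper's implementation.

```python
import random
from dataclasses import dataclass

@dataclass
class Particle:
    state: object   # hypothesis about the hidden state
    model: object   # hypothesis about the unknown (factored) dynamics

def rollout_value(model, state, action, simulate, policy, depth=20, gamma=0.95):
    """Monte Carlo estimate of the return of `action` followed by `policy`."""
    total, discount = 0.0, 1.0
    for _ in range(depth):
        state, reward = simulate(model, state, action)  # sample next state and reward
        total += discount * reward
        discount *= gamma
        action = policy(state)
    return total

def plan(belief, actions, simulate, policy, n_sims=100, depth=20):
    """Choose an action by averaging rollout returns over sampled particles.

    `belief` is a list of Particles approximating p(state, model | history);
    sampling a particle at the root is what makes the planner Bayesian.
    A tree-search planner would reuse these simulations to build a lookahead
    tree instead of doing flat rollouts.
    """
    values = {a: 0.0 for a in actions}
    for a in actions:
        for _ in range(n_sims):
            p = random.choice(belief)  # root-sample a (state, model) hypothesis
            values[a] += rollout_value(p.model, p.state, a, simulate, policy, depth)
    return max(values, key=values.get)
```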

Video Demos for the ALA Submission ‘Influence-Based Abstraction in Deep Reinforcement Learning’

On this page, we show some videos of our experimental results in two different environments, Myopic Breakout and Traffic Control.
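
In both environments the comparison is between the InfluenceNet model and a memoryless feedforward baseline. As a rough picture of that contrast, here is a hedged PyTorch sketch of the two kinds of policy networks: a feedforward network that only sees the current observation, and a network with a small recurrent module that summarizes the observation history, standing in for the memory that InfluenceNet provides. The layer sizes and the use of a GRU are assumptions for illustration, not the actual InfluenceNet architecture from the submission.

```python
import torch
import torch.nn as nn

class FeedforwardPolicy(nn.Module):
    """Memoryless baseline: acts on the current (myopic) observation only."""
    def __init__(self, obs_dim, n_actions, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, n_actions),
        )

    def forward(self, obs):                  # obs: (batch, obs_dim)
        return self.net(obs)                 # action logits

class RecurrentPolicy(nn.Module):
    """Adds a small recurrent module that summarizes the observation history,
    standing in for the memory that the InfluenceNet agent has access to."""
    def __init__(self, obs_dim, n_actions, hidden=128):
        super().__init__()
        self.encoder = nn.Linear(obs_dim, hidden)
        self.memory = nn.GRU(hidden, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_actions)

    def forward(self, obs_seq, h=None):      # obs_seq: (batch, time, obs_dim)
        x = torch.relu(self.encoder(obs_seq))
        out, h = self.memory(x, h)           # out: (batch, time, hidden)
        return self.head(out), h             # per-step logits + recurrent state
```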

Myopic Breakout

The InfluenceNet model (PPO-InfluenceNet) is able to learn the “tunnel” strategy, where it creates an opening on the left (or right) side and sends the ball through it to score a lot of points:

[video: PPO-InfluenceNet learning the tunnel strategy in Myopic Breakout]

The feedforward network with no internal memory performs considerably worse than the InfluenceNet model:

[video: the memoryless feedforward baseline in Myopic Breakout]

Traffic Control

The Traffic Control task was modified as follows:

  • The size of the observable region was slightly reduced, and the delay between the moment an action is taken and the moment the lights actually switch was increased to 6 seconds; during these 6 seconds the green light turns yellow.
  • The speed penalty was removed, so the only reward signal is a penalty of -0.1 for every car that is stopped at a traffic light (a rough sketch of both changes follows this list).
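
To make these two modifications concrete, here is a hedged sketch of how the switching delay and the stopping penalty could be implemented in an environment step function. The class and variable names (TrafficLightSketch, stopped_cars, the phase labels) are hypothetical; this is not the actual benchmark code.

```python
YELLOW_DURATION = 6    # seconds between the switch action and the actual change
STOP_PENALTY = -0.1    # reward per car standing still at a light, per step

class TrafficLightSketch:
    """Illustrative dynamics for the modified Traffic Control task."""

    def __init__(self):
        self.phase = "green_ns"       # which direction currently has green
        self.yellow_timer = 0         # counts down while the light is yellow
        self.pending_phase = None

    def step(self, switch_requested, stopped_cars):
        # A switch request starts the 6-second yellow phase rather than
        # changing the lights immediately.
        if switch_requested and self.yellow_timer == 0:
            self.yellow_timer = YELLOW_DURATION
            self.pending_phase = "green_ew" if self.phase == "green_ns" else "green_ns"

        if self.yellow_timer > 0:
            self.yellow_timer -= 1
            if self.yellow_timer == 0:
                self.phase = self.pending_phase

        # The only reward signal: a penalty for every car that is stopped.
        reward = STOP_PENALTY * stopped_cars
        return self.phase, reward
```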

As shown in the video below, a memoryless agent can only switch the lights when a car enters the local region. With the new changes, this means that the light turns green too late and the cars have to stop:

[video: the memoryless agent in Traffic Control]

On the other hand, the InfluenceNet agent is able to anticipate that a car will be entering the local region and thus switch the lights just in time for the cars to continue without stopping:

[video: the InfluenceNet agent in Traffic Control]

At AAMAS: Deep learning of Coordination…?

Can deep Q-networks and the like brute-force their way through tough coordination problems…? Perhaps not. Jacopo’s work, accepted as an extended abstract at AAMAS’19, takes a first step in exploring this question in the one-shot setting.

Not so surprising: a “joint Q-learner” can be too large/slow, and “individual Q-learners” can fail to find good representations.

But good to know: “factored Q-value functions”, which represent the Q-function as a mixture of randomly chosen components that each involve 2 or 3 agents, can do quite well, even for hard coordination tasks!
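
As a hedged illustration of the idea (my own sketch, not Jacopo’s implementation): in a one-shot problem the joint Q-value is approximated as a sum of small components, each defined over the actions of a randomly chosen pair (or triple) of agents, and the components are learned by regressing on observed payoffs. The FactoredQ, fit, and payoff names below are hypothetical.

```python
import itertools
import random
import torch
import torch.nn as nn

class FactoredQ(nn.Module):
    """Q(a_1, ..., a_n) ~= sum_e Q_e(a_e), where each factor e covers a small,
    randomly chosen subset of agents (here: pairs or triples)."""

    def __init__(self, n_agents, n_actions, n_factors, factor_size=2):
        super().__init__()
        self.n_actions = n_actions
        self.scopes = [random.sample(range(n_agents), factor_size)
                       for _ in range(n_factors)]
        # One learnable table per factor, indexed by its local joint action.
        self.tables = nn.ParameterList(
            [nn.Parameter(torch.zeros(n_actions ** factor_size))
             for _ in self.scopes])

    def forward(self, joint_action):          # joint_action: sequence of ints
        total = torch.zeros(())
        for scope, table in zip(self.scopes, self.tables):
            idx = 0                           # flatten the local joint action
            for agent in scope:
                idx = idx * self.n_actions + joint_action[agent]
            total = total + table[idx]
        return total

def fit(q, payoff, n_agents, n_actions, steps=5000, lr=0.1):
    """Regress the factored Q-function onto observed one-shot payoffs."""
    opt = torch.optim.SGD(q.parameters(), lr=lr)
    joint_actions = list(itertools.product(range(n_actions), repeat=n_agents))
    for _ in range(steps):
        a = random.choice(joint_actions)      # uniformly explore joint actions
        loss = (q(a) - payoff(a)) ** 2
        opt.zero_grad()
        loss.backward()
        opt.step()
```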