We have obtained first results on using influence-based abstractions in the context of deep reinforcement learning, which will be presented at the ALA workshop in Montreal.
See the paper here or have a look at some of the videos.
Reinforcement learning is tough. POMDPs are hard. And doing RL in partially observable problems combines both challenges. With Sammie and Chris Amato, I have been making progress on a principled method (based on Monte Carlo tree search) that scales to structured problems. We can learn how to act and, at the same time, learn the structure of the problem. See the paper and bib.
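The method above builds on Monte Carlo tree search. As a rough illustration of the planning core only (not the paper's algorithm, which additionally learns problem structure), here is a minimal UCT sketch. The `step` simulator, the action names, and all parameter values are hypothetical examples.

```python
import math
import random

def ucb(parent_visits, child_visits, child_value, c=1.4):
    # UCB1 score: mean value plus an exploration bonus.
    if child_visits == 0:
        return float("inf")
    return child_value / child_visits + c * math.sqrt(math.log(parent_visits) / child_visits)

class Node:
    def __init__(self):
        self.visits = 0
        self.value = 0.0      # sum of sampled returns through this node
        self.children = {}    # action -> Node

def mcts(simulate, actions, root_state, n_iters=500, depth=5, gamma=0.95):
    """Plain UCT with a generative simulator: simulate(state, action) -> (next_state, reward)."""
    root = Node()
    for _ in range(n_iters):
        node, state, path = root, root_state, []
        for _ in range(depth):
            untried = [a for a in actions if a not in node.children]
            if untried:
                a = random.choice(untried)          # expansion
                node.children[a] = Node()
            else:                                   # selection by UCB
                a = max(actions, key=lambda a: ucb(node.visits,
                                                   node.children[a].visits,
                                                   node.children[a].value))
            state, r = simulate(state, a)
            path.append((node, node.children[a], r))
            node = node.children[a]
        # Back up the discounted return along the sampled path.
        G = 0.0
        for parent, child, r in reversed(path):
            G = r + gamma * G
            child.visits += 1
            child.value += G
            parent.visits += 1
    # Recommend the most-visited root action.
    return max(root.children, key=lambda a: root.children[a].visits)

# Toy simulator (hypothetical): action "right" always yields reward 1, "left" yields 0.
def step(s, a):
    return s + (1 if a == "right" else -1), (1.0 if a == "right" else 0.0)

random.seed(0)
best = mcts(step, ["left", "right"], 0)
```

On this toy problem the search concentrates its visits on "right", since its sampled returns dominate once the exploration bonus decays.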