Aleksander Czechowski got his paper on Decentralized MCTS via Learned Teammate Models accepted at IJCAI 2020.
In this paper, each agent learns models of its teammates and uses them to predict their future behaviour during planning. Stay tuned for the camera-ready version.
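To give a flavour of the idea, here is a rough sketch (not the paper's code): during planning, each agent samples its teammates' actions from learned models and uses those predictions to simulate the joint future. The names `TeammateModel`, `simulate_step`, and the rest of the interface are hypothetical placeholders.

```python
# Illustrative sketch only: plan for one agent by predicting teammates
# with learned models; simulate_step and the model interface are assumed.
import random

class TeammateModel:
    """Learned predictor of one teammate's action given the state (assumed interface)."""
    def __init__(self, policy):          # policy: dict state -> {action: prob}
        self.policy = policy

    def sample(self, state):
        actions, probs = zip(*self.policy[state].items())
        return random.choices(actions, weights=probs)[0]

def estimate_value(state, own_action, own_actions, teammate_models, simulate_step,
                   horizon=10, n_rollouts=50):
    """Monte Carlo estimate of own_action's value, predicting teammates with the models."""
    total = 0.0
    for _ in range(n_rollouts):
        s, a = state, own_action
        for _ in range(horizon):
            # Build a joint action from the agent's own choice plus predicted teammate actions.
            joint = (a,) + tuple(m.sample(s) for m in teammate_models)
            s, reward, done = simulate_step(s, joint)
            total += reward
            if done:
                break
            a = random.choice(own_actions)   # simple default rollout policy for the own agent
    return total / n_rollouts
```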
Together with Thomas Kipf, Max Welling and myself, Elise van der Pol did some excellent work on model-based RL.
See:
I will be co-organizing an AAAI Spring Symposium on “Challenges and Opportunities for Multi-Agent Reinforcement Learning”. We want to make it a workshop with some actual ‘work’. Please read here for more info.
This is the question that De Volkskrant asked me to comment on. Find the piece here (in Dutch).
Reinforcement learning is tough. POMDPs are hard. And doing RL in partially observable problems is a huge challenge. With Sammie and Chris Amato, I have been making progress on a principled method (based on Monte Carlo tree search) that scales to structured problems. We can learn how to act and the structure of the problem at the same time. See the paper and bib.
Can deep Q-networks etc. brute-force their way through tough coordination problems…? Perhaps not. Jacopo’s work, accepted as an extended abstract at AAMAS’19, takes a first step in exploring this in the one-shot setting.
Not so surprising: a “joint Q-learner” can be too large/slow, and “individual Q-learners” can fail to find good representations.
But good to know: “factored Q-value functions”, which represent the Q-function as a random mixture of components involving 2 or 3 agents, can do quite well, even for hard coordination tasks!
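For intuition, here is a minimal sketch (not Jacopo's actual implementation) of such a factorization: the joint Q-function is approximated by a sum of small components, each defined over a randomly chosen pair or triple of agents, and the greedy joint action maximizes that sum. All sizes and names below are illustrative, and the component values would normally be learned rather than randomly initialized.

```python
# Minimal sketch of a factored Q-value function over random 2-3 agent components.
import itertools
import numpy as np

n_agents = 4          # hypothetical problem size
n_actions = 3         # actions per agent
n_components = 5      # number of random factors

rng = np.random.default_rng(0)

# Each component covers a random pair/triple of agents and holds its own
# (here tabular, one-shot) values over that subset's joint actions.
components = []
for _ in range(n_components):
    scope = tuple(sorted(rng.choice(n_agents, size=rng.integers(2, 4), replace=False)))
    q_table = rng.normal(size=(n_actions,) * len(scope))   # stand-in for learned values
    components.append((scope, q_table))

def joint_q(joint_action):
    """Factored Q-value: sum of the component values for the given joint action."""
    return sum(q[tuple(joint_action[i] for i in scope)] for scope, q in components)

def greedy_joint_action():
    """Exhaustive maximization (fine for tiny problems; max-plus would scale better)."""
    return max(itertools.product(range(n_actions), repeat=n_agents), key=joint_q)

print(greedy_joint_action())
```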
A popular piece on my ERC research:
We are looking for faculty in AI!