home

I am an Associate Professor in the Interactive Intelligence group at TU Delft.

My main research interests lie in what I call interactive learning and decision making: the intersection of AI, machine learning, and game theory. I try to generate fundamental knowledge about algorithms and models for complex tasks. In addition, I think about how such abstract models might be applied to challenging real-world tasks such as collaboration in multi-robot systems, optimization of traffic control systems, and intelligent e-commerce agents.

For more information about my research, look here.

For research opportunities, look here.

For more information about possible student projects (current TU Delft students), look here.

News

  • October 1st, 2020:    Three papers at NeurIPS 2020

    Three of our papers were accepted at NeurIPS. For short descriptions, see my tweet.

    (Updated) arXiv links will follow…

  • July 13th, 2020:    Vacancy: looking for 3-year postdoc

    I’m looking for a postdoc to work on learning in interactive settings. Please see https://www.fransoliehoek.net/wp/vacancies/.

  • April 24th, 2020:    AAMAS: Maximizing Information Gain via Prediction Rewards

    This paper tackles the problem of active perception: taking actions to minimize one’s uncertainty. It further formalizes the link between information gain and prediction rewards, and uses this to propose a deep-learning approach to optimize active perception from a data set, thus obviating the need for a complex POMDP model.
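
    In rough terms, the link works as follows (a sketch in my own notation, not necessarily the paper's exact statement): if the agent is rewarded with the log scoring rule for a prediction of the hidden state under its current belief, its best achievable expected prediction reward is exactly the negative entropy of that belief.

    ```latex
    % Sketch of the link (my notation): the agent predicts a distribution
    % \hat{b}(x) over the hidden state x and receives reward \log \hat{b}(x).
    \max_{\hat{b}} \; \mathbb{E}_{x \sim b}\big[ \log \hat{b}(x) \big]
        \;=\; \sum_{x} b(x) \log b(x) \;=\; -H(b)
    % The maximum is attained at \hat{b} = b (Gibbs' inequality), so
    % maximizing expected prediction reward amounts to minimizing expected
    % belief entropy, i.e., maximizing information gain.
    ```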

  • April 24th, 2020:    IJCAI paper: Decentralized MCTS via Learned Teammate Models

    Aleksander Czechowski got his paper on Decentralized MCTS via Learned Teammate Models accepted at IJCAI 2020.

    In this paper, each agent learns models of the other agents and uses them to predict their behavior during planning; a minimal sketch follows below. Stay tuned for the camera-ready.
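
    To give a feel for the idea, here is a minimal sketch (interfaces and names are mine, not the paper's): during its own Monte Carlo search, each agent samples teammate actions from the learned models, effectively turning the joint problem into a single-agent planning problem from its perspective.

    ```python
    import math
    import random

    def plan(state, my_actions, teammate_models, simulate,
             n_sims=1000, c=1.4, depth=20, gamma=0.95):
        """Monte Carlo planning for a single agent: teammates' actions are
        sampled from learned models rather than searched over.

        teammate_models: list of callables, each mapping a state to an action.
        simulate: callable (state, joint_action) -> (next_state, reward, done).
        """
        counts = {a: 0 for a in my_actions}
        values = {a: 0.0 for a in my_actions}
        for t in range(1, n_sims + 1):
            # UCB1 selection over this agent's own actions only.
            a = max(my_actions, key=lambda a: float('inf') if counts[a] == 0
                    else values[a] + c * math.sqrt(math.log(t) / counts[a]))
            ret, disc, s, act, done, d = 0.0, 1.0, state, a, False, 0
            while not done and d < depth:
                # Predict what the teammates will do via their learned models.
                joint = (act,) + tuple(m(s) for m in teammate_models)
                s, r, done = simulate(s, joint)
                ret += disc * r
                disc *= gamma
                act = random.choice(my_actions)  # random rollout afterwards
                d += 1
            counts[a] += 1
            values[a] += (ret - values[a]) / counts[a]
        return max(my_actions, key=lambda a: values[a])
    ```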

  • March 2nd, 2020:    AAMAS camera-ready: Model-based RL

    Elise van der Pol did some excellent work on model-based RL, together with Thomas Kipf, Max Welling, and myself.

  • August 19th, 2019:    Co-organizing COMARL at the AAAI Spring Symposia

    I will be co-organizing an AAAI spring symposium on “Challenges and Opportunities for Multi-Agent Reinforcement Learning”. We want to make it a workshop with some actual ‘work’. Please see here for more info.

  • July 8th, 2019:    Why should we care about AI playing video games?

    This is the question that De Volkskrant asked me to comment on. Find the piece here (in Dutch).

  • April 25th, 2019:    Influence-Based Abstraction in Deep Reinforcement Learning

    We have some first results on using influence-based abstractions in the context of deep reinforcement learning, which will be presented at the ALA workshop in Montreal.

    See the paper here or have a look at some of the videos.

  • April 12th, 2019:    Scaling Bayesian RL for Factored POMDPs

    Reinforcement learning is tough. POMDPs are hard. And doing RL in partially observable problems is a huge challenge. With Sammie and Chris Amato, I have been making some progress on getting a principled method (based on Monte Carlo tree search) to scale to structured problems. We can learn how to act and the structure of the problem at the same time. See the paper and bib.
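
    Very roughly, the recipe looks as follows (my sketch, not the exact algorithm from the paper): keep a posterior over (structured) models, sample a model from that posterior at the root of every simulation, and run the search in the sampled model, so the planner automatically accounts for its own model uncertainty.

    ```python
    import random

    def bayes_adaptive_plan(sample_model, simulate_return, history,
                            candidate_actions, n_sims=500):
        """Root-sampled Bayes-adaptive Monte Carlo planning (a sketch).

        sample_model: callable returning a model drawn from the current
            posterior (which may include sampled problem structure).
        simulate_return: callable (model, history, action) -> one sampled
            return of taking the action in the sampled model.
        """
        counts = {a: 0 for a in candidate_actions}
        values = {a: 0.0 for a in candidate_actions}
        for _ in range(n_sims):
            model = sample_model()          # root sampling from the posterior
            a = random.choice(candidate_actions)
            ret = simulate_return(model, history, a)
            counts[a] += 1
            values[a] += (ret - values[a]) / counts[a]
        return max(candidate_actions, key=lambda a: values[a])
    ```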

  • February 27th, 2019:    At AAMAS: Deep learning of Coordination…?

    Can deep Q-networks and the like brute-force their way through tough coordination problems…? Perhaps not. Jacopo’s work, accepted as an extended abstract at AAMAS’19, takes a first step in exploring this in the one-shot setting.

    Not so surprising: a "joint Q-learner" can be too large/slow, and "individual Q-learners" can fail to find good representations.

    But good to know: "factored Q-value functions", which represent the Q-function as a random mixture of components each involving 2 or 3 agents, can do quite well, even for hard coordination tasks! A toy illustration follows below.
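
    To make this concrete, here is a toy sketch (names and random values are mine; in the actual work the components would be learned, e.g., as neural networks): the Q-value of a joint action is the sum of small components, each looking only at the actions of a random subset of 2 or 3 agents.

    ```python
    import itertools
    import random

    def make_factored_q(n_agents, n_actions, n_components, scope_size=2, seed=0):
        """Build a toy factored Q-function: Q(a) = sum_e Q_e(a_e), where each
        component e sees only the actions of a small random subset of agents.
        The random table values stand in for learned component functions."""
        rng = random.Random(seed)
        components = []
        for _ in range(n_components):
            scope = tuple(rng.sample(range(n_agents), scope_size))
            # One table per component, indexed by the local joint action a_e.
            table = {a_e: rng.gauss(0, 1)
                     for a_e in itertools.product(range(n_actions),
                                                  repeat=scope_size)}
            components.append((scope, table))
        return components

    def q_value(components, joint_action):
        """Evaluate Q(a) by summing the component values for the local actions."""
        return sum(table[tuple(joint_action[i] for i in scope)]
                   for scope, table in components)

    # Example: 5 agents with 3 actions each, 6 random pairwise components.
    q = make_factored_q(n_agents=5, n_actions=3, n_components=6)
    print(q_value(q, joint_action=(0, 2, 1, 1, 0)))
    ```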

Old news