Just a short update for the people who applied to our open positions in the MMLL. Due to a combination of holidays, a new recruitment system, and a large number of applications, we are still screening. We hope to have more news soon!
Wanted: 2 PhD students in Reinforcement Learning
Within the Mercury Machine Learning Lab, a collaboration with booking.com and UvA, we are recruiting 2 PhD students who want to tackle reinforcement learning for the real world!
Details here.
If you have a strong track record in ML and RL, and are interested, please do get in touch!
Announcing the Mercury Machine Learning Lab
As one of the scientific directors, I am co-leading the Mercury Machine Learning Lab: a new ICAI lab in collaboration with the University of Amsterdam and booking.com.
At Delft, we will be looking for 2 PhD students and a postdoc, so keep an eye out for adverts or follow me on Twitter if you are interested in applying reinforcement learning in a real-world context!
ALA’21 best paper
Jacopo, Rahul, Sam and I won the best paper award at ALA’21!
Check out the paper here.
Informal discussion on MARL @ AAMAS’21
Tomorrow, Wednesday 5 May, I will lead an informal discussion on multiagent RL. Details can be found here: https://aamas2021.soton.ac.uk/programme/detailed-programme/#Wednesday-M-INF
Looking forward to the discussion!
AAMAS’21: Difference Rewards Policy Gradients
At the next AAMAS, Jacopo Castellini, Sam Devlin, Rahul Savani, and I will present our work on combining difference rewards and policy gradient methods.
Main idea: for differencing to be useful, the function being differenced needs to be quite accurate. As such, performing the differencing on Q-functions (as COMA does) might not be ideal. We instead difference the reward function, which may be known, and is otherwise easier to learn since it is stationary. Our results show the potential for great improvements, especially for larger numbers of agents.
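To make this concrete, here is a rough sketch in my own notation (the default action c_i and the exact gradient form are illustrative, not necessarily the paper's exact formulation):

```latex
% One common formulation of difference rewards: agent i's contribution is
% the team reward minus what it would have been had i played a default
% action c_i instead (illustrative notation, not verbatim from the paper).
\Delta r_i(s, \mathbf{a}) \;=\; r(s, \mathbf{a}) \;-\; r\bigl(s, (\mathbf{a}_{-i}, c_i)\bigr)

% A policy-gradient update for agent i can then use returns built from
% \Delta r_i instead of the raw team reward r:
\nabla_{\theta_i} J \;\approx\;
  \mathbb{E}\Bigl[ \sum_t \nabla_{\theta_i} \log \pi_{\theta_i}(a_t^i \mid s_t)
  \sum_{t' \ge t} \gamma^{t'-t}\, \Delta r_i(s_{t'}, \mathbf{a}_{t'}) \Bigr]
```

Because r is stationary, it only needs to be learned once, whereas a Q-function must keep tracking the other agents' changing policies.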
Are Multiple Agents the Solution, and not the Problem, to Non-Stationarity?
That is what we explore in our AAMAS’21 blue-sky paper.
The idea is to explicitly model non-stationarity as part of an environmental shift game (ESG). This enables us to predict and even steer the shifts that would occur, while dealing with epistemic uncertainty in a robust manner.
AAMAS’21 camera-ready: AIP loss bounds
Our AAMAS’21 paper on loss bounds for influence-based abstraction is online.
In this paper, we derive conditions under which ‘approximate influence predictors’ (AIPs) give small value loss when used in small (abstracted) MDPs. From these conditions, we conclude that learning such AIPs with a cross-entropy loss seems sensible.
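A compressed sketch of the reasoning, in my own notation (not verbatim from the paper):

```latex
% Suppose the value loss of planning with an approximate influence
% predictor \hat{I} is controlled by its distance to the true influence I:
\text{value loss} \;\le\; C \cdot \mathbb{E}\bigl[ d_{TV}(I, \hat{I}) \bigr]
% Pinsker's inequality gives d_{TV}(I, \hat{I}) \le \sqrt{KL(I \,\|\, \hat{I}) / 2},
% and since CE(I, \hat{I}) = H(I) + KL(I \,\|\, \hat{I}) with H(I) fixed,
% minimizing the cross-entropy of \hat{I} directly shrinks this bound.
```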
3 Year Postdoc
Do you have experience in multiagent reinforcement learning, game theory and/or other forms of interactive learning? Then have a look at this vacancy and contact me!
NeurIPS camera-ready: MDP Homomorphic Networks
In this work, we show how symmetries that occur in MDPs can be exploited for more efficient deep reinforcement learning.
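As a tiny illustration of the kind of symmetry involved, here is a hand-rolled sketch for CartPole, assuming its mirror symmetry (negating the state swaps the left/right actions); this construction is mine, not the paper's actual architecture:

```python
import numpy as np

# Sketch: a CartPole policy that is equivariant *by construction* under the
# mirror symmetry s -> -s, left <-> right (illustrative, not the paper's
# MDP homomorphic network).
rng = np.random.default_rng(0)
W1, b1 = 0.1 * rng.normal(size=(16, 4)), np.zeros(16)
w2 = 0.1 * rng.normal(size=16)

def f(s):
    """Unconstrained scalar 'preference for pushing right' in state s."""
    return w2 @ np.tanh(W1 @ s + b1)

def policy_logits(s):
    """Equivariant logits [left, right] = [f(-s), f(s)]: the preference for
    left in s equals the preference for right in the mirrored state -s."""
    return np.array([f(-s), f(s)])

s = rng.normal(size=4)
# Flipping the state swaps the two logits exactly, by construction.
assert np.allclose(policy_logits(-s), policy_logits(s)[::-1])
```

Baking the symmetry in like this effectively halves the state space the policy has to learn about, which is the intuition behind the efficiency gains.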