Deep Learning Practitioners


Deep learning and reinforcement learning are thriving fields of science. Properly managed, such algorithms can obtain great results. However, there are many pitfalls that many of us have bumped into and overcome individually.

In order to streamline this process and possibly avoid such pitfalls, we are organizing joint sessions between machine learning enthusiasts in Delft and Leiden to exchange experiences and knowledge. The goal is to discuss low-level practicalities that arise while working with neural networks, and with machine learning methods more generally.


We aim to have a meeting every two months, alternating between Delft and Leiden. Meetings are announced on a dedicated mailing list; please contact Rolf Starre if you want to be added to it.


Joost Broekens
Jan van Rijn (main contact Leiden)
Frans Oliehoek
Rolf Starre (main contact Delft)

Upcoming meetings

November 13, 2020

We will have a digital session in the following Kaltura Live Room:

The program is as follows:
10:00 – 10:30 Presentation Elise van der Pol
10:30 – 11:00 Presentation Miguel Suau
11:00 – 11:30 Discussion about Future Format

Title: Equivariance, deep reinforcement learning, and making it work.
By Elise van der Pol (UvA)

Abstract: Deep reinforcement learning (DRL) is concerned with learning neural function approximators for decision making strategies. While DRL algorithms have been shown to solve complex, high-dimensional problems, and are therefore a popular field of study, they tend to require many samples before convergence. In this talk, I will introduce our recent work on incorporating equivariance in deep reinforcement learning to tackle this data efficiency problem. Specifically, we will look at action-equivariance in the context of model-based RL, and at MDP homomorphic networks, a type of equivariant network that is able to find good policies in fewer environmental interactions. I will also go into the practicals of these projects and discuss engineering specifics, such as common pitfalls in DRL, tips and tricks for training neural networks and thoughts on code base management. 

Title: Lessons I learned while working on my first deep learning paper.
By Miguel Suau (TU Delft)

Abstract: In this talk, I will try to walk you through the process of designing, implementing, and training a deep neural network. I will talk about those specific details that we, deep learning researchers, normally omit from our papers in favor of SOTA results and flashy images. Details like architectural design problems, magic hyperparameter choices, and engineering, tons of engineering. To illustrate all this I will use the paper I wrote last year as an example. This work combines the three most fundamental building blocks of modern neural network architectures: convolutional, recurrent, and attention layers.

Previous meetings

29 January 2020

The topic will be Automated Machine Learning for Deep Learning algorithms, which is extremely relevant, as hyperparameter optimization is a crucial step towards getting good performance from neural networks. We have found Dr. Joaquin Vanschoren willing to share his expertise with us (title and abstract will follow). Joaquin is a good friend and trusted collaborator, and also an author of the book “AutoML: Methods, Systems, Challenges”, freely available here:

Furthermore, we have found several students from Leiden willing to present their findings on some recent papers, namely:

  • Weight Agnostic Neural Networks:
  • Snapshot Ensembles:

In short, this will be another great day full of knowledge sharing and scientific entertainment.

Snijderszaal (LB 01.010)
Mekelweg 4
2628 CD, Delft

19 September 2019

The program will be as follows:
10:00 – 12:00 technical session:

  • presentation “Asking questions about Deep Learning Research” by Jan van Gemert,
  • presentation “Scale selective structured filters: How to learn scale in a deep network?” by Silvia Pintea.

12:00 – 13:00 informal lunch together (not provided)

Title: Asking questions about Deep Learning Research
By Jan van Gemert (TU Delft)

Abstract: How to go about doing research in deep learning? Datasets are massive, the computational resources have huge requirements but are only scarcely available, the downloaded code does not compile, and other ‘details’ should not prevent doing ‘good’ scientific research. What is good research anyway? And how to communicate the findings to the rest of the world?

Title: Scale selective structured filters: How to learn scale in a deep network?
By Silvia Pintea (TU Delft)

Abstract: I will be presenting my ongoing work on automatically learning scale in deep networks. The proposed idea is based on scale-space theory and uses Gaussian derivatives to define a basis for constructing network filters. There are challenges to overcome in terms of the memory and computational demands of this new model, as well as in identifying a scenario in which the proposed method has added value over existing pixelwise filter learning methods.
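As a rough sketch of the general idea behind such structured filters (an illustration only, not the actual implementation from the talk), a filter can be expressed as a learned linear combination of a small Gaussian derivative basis; the `sigma`, kernel size, and combination weights below are hypothetical choices:

```python
import numpy as np

def gaussian_basis(sigma, size=7):
    """2D Gaussian plus its first-order x/y derivatives, as a filter basis."""
    ax = np.arange(size) - size // 2
    x, y = np.meshgrid(ax, ax)
    g = np.exp(-(x**2 + y**2) / (2.0 * sigma**2))
    g /= g.sum()                       # normalize the smoothing kernel
    gx = -x / sigma**2 * g             # dG/dx: responds to vertical edges
    gy = -y / sigma**2 * g             # dG/dy: responds to horizontal edges
    return np.stack([g, gx, gy])       # shape (3, size, size)

# A structured filter is a linear combination of the basis; in a network
# these weights (and possibly sigma) would be learned by backpropagation.
weights = np.array([0.2, 1.0, -0.5])   # illustrative values
basis = gaussian_basis(sigma=1.5)
filt = np.tensordot(weights, basis, axes=1)   # shape (size, size)
```

Because the filter is parameterized by only a few basis weights instead of one value per pixel, varying `sigma` gives an explicit handle on the scale the filter responds to.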

Snellius Building, Room 413
Niels Bohrweg 1, 2333 CA Leiden

26 June 2019

The program will be as follows:
10:00 – 12:00 technical session: presentation “Automatic pain detection from faces: challenges” by Laduona Dai, plus discussions and pitches
12:00 – 13:00 informal lunch together (not provided)

Lecture Hall D@ta
Electrical Engineering, Mathematics and Computer Science (EEMCS / EWI)
Building number: 36
Mekelweg 4
2628 CD Delft