Are you an aspiring computer science researcher interested in deep neural networks? Do you want to understand the fundamental limits of machine learning? Then you have a part to play as a PhD candidate. By investigating the learning behaviour of neural networks from both a theoretical and an experimental angle, you will help us better understand their limits and potentially develop improved learning algorithms.

In recent years, deep neural networks have become indispensable for a wide range of applications in image, speech and text recognition. We know how these networks can be used in practice, but the underlying theory still leaves much to be desired. For example, little is known about which tasks can and cannot be solved with a neural network, or how large a network needs to be for a particular task.
As part of this PhD project, you will work to increase our understanding of one or more of the following topics:
- The influence of model size on training flexibility and sensitivity to initialisation.
- The characterisation of learned neural networks: which networks, among all possible networks with representable parameters, are actually produced by learning.
- How networks trained from scratch differ from fine-tuned, pre-trained neural networks.
- The sensitivity of neural networks to perturbations.
These topics can be investigated from a purely theoretical angle, as well as by experiments on standard benchmarks or artificial datasets.
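As an illustration of the experimental angle, the perturbation-sensitivity topic above could be probed with a small numerical experiment. The sketch below is a minimal, hypothetical example in plain NumPy: it uses a randomly initialised two-layer network as a stand-in for a trained model and estimates how much the output changes under small random input perturbations.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy two-layer ReLU network with random weights
# (a stand-in for a trained model; purely illustrative).
W1 = rng.normal(size=(32, 10)) / np.sqrt(10)
W2 = rng.normal(size=(1, 32)) / np.sqrt(32)

def forward(x):
    """Forward pass: linear layer, ReLU, linear layer."""
    return W2 @ np.maximum(W1 @ x, 0.0)

def sensitivity(x, eps=1e-3, trials=100):
    """Estimate the average output change per unit of input
    perturbation, using random perturbations of norm eps."""
    base = forward(x)
    ratios = []
    for _ in range(trials):
        d = rng.normal(size=x.shape)
        d *= eps / np.linalg.norm(d)
        ratios.append(np.linalg.norm(forward(x + d) - base) / eps)
    return float(np.mean(ratios))

x = rng.normal(size=10)
print(f"local sensitivity around x: {sensitivity(x):.3f}")
```

Comparing such sensitivity estimates across network sizes, initialisations, or training regimes is one simple way the questions above could be studied empirically.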
You will be supervised by Twan van Laarhoven.
You will spend roughly ten percent of your time (0.1 FTE) assisting with teaching activities in our department. This will typically include tutoring practical assignments, grading coursework, and supervising student projects.