We are seeking an enthusiastic PhD candidate to develop novel ideas for establishing trust in deep learning models. Trustworthy AI is a major topic in machine learning, as illustrated by the growing number of initiatives to make AI systems more trustworthy. Although machine learning models typically perform well on inputs resembling their training data, they are far less capable of indicating that they cannot provide a reliable prediction because an input differs too much from the data they were trained on.
Our goal is to provide theoretically grounded Out-Of-Distribution (OOD) detection methods as a stepping stone toward trustworthy machine learning models that know what they don't know. We can distinguish different types of OOD data, such as adversarial examples, near-OOD, and far-OOD. A key challenge is that, unlike for adversarial examples, we do not (yet) have a mathematical framework for near-OOD and far-OOD data. OOD detection also links to explainable AI, which may be worth exploring to identify differences in how a model processes in-distribution data versus OOD data.
This project is a collaboration between the Data and AI cluster at TU/e and the industrial semiconductor company NXP. The position therefore offers a unique opportunity to gain experience and build a network in both academic and industrial research.
The position is available from October 1st and will be supervised by Profs. Wil Michiels (Security Group, NXP) and Sibylle Hess (Data and AI).