Short Description
Are you interested in Artificial Intelligence systems that learn from interactions with their environments? Have you wondered what the best way to collect data for them is? How much information do they gain by observing specific data points? Then this PhD position might be for you. We are looking for someone who is interested in uncovering information-theoretic properties of intelligent systems that learn from their environment.
Job Description
Not all data is equally useful. A major challenge in training artificially intelligent systems that learn from interactions with their environments (agents) is to acquire the most useful data points. For example, where should a robot look in order to pick up a cup?
Active Inference is a framework for designing agents that balance information-seeking and goal-seeking behaviour. In this PhD position, you will dive into the information-theoretic basis of this framework.
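By way of illustration, one common way this trade-off is formalised in the Active Inference literature (a standard textbook decomposition, not quoted from this posting) is through the expected free energy of a policy \pi:

G(\pi) \;=\; \underbrace{-\,\mathbb{E}_{q(o,s \mid \pi)}\!\left[\ln q(s \mid o, \pi) - \ln q(s \mid \pi)\right]}_{\text{epistemic value (information-seeking)}} \;\; \underbrace{-\;\mathbb{E}_{q(o \mid \pi)}\!\left[\ln \tilde{p}(o)\right]}_{\text{pragmatic value (goal-seeking)}}

where q denotes the agent's beliefs over hidden states s and observations o, and \tilde{p}(o) encodes preferred outcomes. Minimising G(\pi) drives the agent to seek informative observations while steering toward preferred ones.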
You will work with probabilistic machine learning methods, such as (variational) Bayesian inference and Active Inference, applied to signal processing and control systems. We are looking for someone who has experience with information theory, i.e., someone who is familiar with concepts such as entropy, mutual information, and divergence measures. You will use this knowledge to derive insights into whether the data acquisition protocols for Active Inference agents can be improved.
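To give a flavour of the quantities involved, here is a minimal, self-contained Julia sketch of the discrete-case definitions (illustrative only; the function names and toy example are our own, not part of the project):

# Illustrative sketch: discrete entropy, KL divergence, and the mutual
# information an observation carries about a hidden state (all in nats).

entropy(p) = -sum(x * log(x) for x in p if x > 0)                # H(p)

kl(p, q) = sum(x * log(x / y) for (x, y) in zip(p, q) if x > 0)  # D_KL(p ‖ q)

# Mutual information between hidden state X and observation Y, from their
# joint distribution P: I(X; Y) = D_KL(P(x, y) ‖ P(x) P(y)).
function mutual_information(P::AbstractMatrix)
    px = vec(sum(P, dims=2))          # marginal over the hidden state X
    py = vec(sum(P, dims=1))          # marginal over the observation Y
    return kl(vec(P), vec(px * py'))  # joint vs. product of marginals
end

# Toy observation channel: each observation yields ≈ 0.19 nats about the state.
P = [0.4 0.1;
     0.1 0.4]
mutual_information(P)                 # ≈ 0.193

In an active data-acquisition setting, an agent would prefer whichever observation channel (e.g., where the robot points its camera) yields the highest such information gain; this project asks whether such protocols can be done better.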
You will become a member of the Bayesian Intelligent Autonomous Systems laboratory (https://biaslab.github.io/), which is part of the Signal Processing Systems group in the Electrical Engineering department. We are a close-knit team of over a dozen researchers who work on probabilistic models, inference algorithms, and signal processing and control system applications. We are known for our probabilistic programming toolbox RxInfer.jl (https://rxinfer.ml/) and for our foundational perspective on Artificial Intelligence (https://youtu.be/2wnJ6E6rQsU?si=UYgNxU5LeFd1Nq6P).