The next decade of Representation Learning will put Causality, Interactivity, and Embodiment at the centre, given also the impressive progress in 3D simulated environments. However, our causal representations are not yet mature enough to compete with the likes of self-supervised learning, even though the latter relies purely on correlations. Nor are they mature enough to account for all the fine-grained interventions and interactions that a 3D world requires.
Are you passionate about bleeding-edge research on Causal Representation Learning, with a knack for Computer Vision and Embodied AI applications? Then this is the position for you!
We are looking for a postdoctoral researcher with expertise in Causal Representation Learning, Causality, or Machine Learning to join a team of 15+ researchers (1+1-year contract); a team connected with the ELLIS Network of Excellence in AI; a team with a consistent and strong presence at the top Machine Learning and Computer Vision conferences and journals.
What are you going to do?
Causal Representation Learning is an excellent framework for equipping embodied agents in 3D worlds with zero-shot and novel-task capabilities: it discovers causal knowledge that applies to new settings and learns mechanisms for interacting with novel environments.
In this position, we will start from our successful work on causal representation learning from spatiotemporal sequences, such as CITRIS, iCITRIS, and the latest BISCUIT, which works on full images. While these works lay the foundations for causal representations learned from high-dimensional data such as images, a 3D world environment requires much more fine-grained and decomposable causal representations and mechanisms. The project can focus on a subset of the following key objectives, although research freedom is more than welcome:
- Learn to transfer causal priors from complex 3D world simulators to real data.
- Learn causal primitives (“skills”) that align with causal priors and interactions.
- Decompose known-by-demonstration complex tasks into simple causal skills.
- Compose causal skills for novel, zero-shot complex task learning.
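As an illustrative toy (our own sketch with hypothetical skill and object names, not a project deliverable), the last two objectives can be pictured as decomposing a demonstrated state trajectory into primitive causal skills, then re-composing those skills for a novel goal:

```python
# Toy sketch: causal "skills" as mechanisms acting on a symbolic world state.
# Skill and object names are hypothetical, purely for illustration.

SKILLS = {
    "open":  lambda s, o: {**s, o: {**s[o], "open": True}},
    "grasp": lambda s, o: {**s, o: {**s[o], "held": True}},
}

def decompose(trajectory):
    """Recover the (skill, object) steps that explain consecutive observed states."""
    plan = []
    for state, nxt in zip(trajectory, trajectory[1:]):
        plan.append(next(
            (name, obj)
            for name, mech in SKILLS.items()
            for obj in state
            if mech(state, obj) == nxt
        ))
    return plan

def compose(plan, state):
    """Re-execute a skill sequence on a new initial state (zero-shot recombination)."""
    for name, obj in plan:
        state = SKILLS[name](state, obj)
    return state
```

A demonstration of "open the fridge, grasp the apple" decomposes into the plan `[("open", "fridge"), ("grasp", "apple")]`, and the same primitives recompose on unseen objects, say a cabinet and a mug; the research challenge, of course, is doing this from pixels rather than symbols.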
Making 3D embodiment core to our proposal, we leverage simulated environments such as AI2Thor, AIHabitat, ISAAC Sim, or even ObjectFolder, with 3D meshes, videos, and tactile readings of real-world objects. We aim to discover causal primitives, e.g. world models, physics laws, and object affordances, that transfer better to the real world than correlation-based distributed representations and domain adaptation.
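To make the transfer intuition concrete, here is a minimal self-contained toy (our own illustration, not project code): a causal "primitive" that assumes the free-fall mechanism and estimates only its parameter extrapolates from short simulated drops to a much longer one, while a purely correlational linear fit on the same data does not:

```python
# Toy sketch: a causal mechanism with one free parameter vs. a linear
# correlational fit, both trained on the same simulated free-fall data.

def simulate_fall(g, t):
    return 0.5 * g * t * t  # distance fallen after t seconds

train_t = [0.1 * i for i in range(1, 11)]          # short drops "in simulation"
train_d = [simulate_fall(9.81, t) for t in train_t]

# Causal model: assume the mechanism's form d = 0.5 * g * t^2, estimate g.
g_hat = sum(d / (0.5 * t * t) for t, d in zip(train_t, train_d)) / len(train_t)

# Correlational model: ordinary least-squares line d = a * t + b.
n = len(train_t)
mt = sum(train_t) / n
md = sum(train_d) / n
a = sum((t - mt) * (d - md) for t, d in zip(train_t, train_d)) \
    / sum((t - mt) ** 2 for t in train_t)
b = md - a * mt

# "Real world": a drop far outside the training regime.
t_new = 5.0
true_d = simulate_fall(9.81, t_new)
causal_pred = 0.5 * g_hat * t_new ** 2   # recovers the true distance
linear_pred = a * t_new + b              # extrapolates poorly
```

The mechanism-based model recovers the true distance exactly, while the correlational line misses it by a wide margin once it leaves the training regime.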
The funding comes from personal grants with few strings attached, so fundamental research is possible and desirable.
Tasks and responsibilities:
- Show independence in achieving research goals and willingness to collaborate with and supervise PhD students working on causal representation learning, deep learning, computer vision, and dynamical and interactive systems;
- Contribute to a real-world showcase demo along the lines of the challenges in Embodied AI;
- Present research results at international conferences, workshops, and journals;
- Become an active member of the research community and collaborate with other researchers, both within and outside the Informatics Institute;
- Contribute to teaching activities, such as lectures, lab courses, or supervising bachelor and master students.