Are you interested in performing high-impact interdisciplinary research in Artificial Intelligence and its alignment with humans and society? The University of Amsterdam has recently started a flagship project on Human-Aligned Video AI (HAVA). The HAVA Lab will address fundamental questions about what defines human alignment with video AI, how to make this computable, and what determines its societal acceptance.

Video AI holds the promise to explore what is unreachable, to monitor what is imperceivable, and to protect what is most valuable. New species have become identifiable in our deep oceans, the visually impaired benefit from automated speech transcriptions of visual scenery, and elderly caregivers may be supported with an extra pair of eyes, to name just three of many application examples. This is no longer wishful thinking. Broad uptake of video AI for science, for business, and for wellbeing is on the horizon, thanks to a decade of phenomenal progress in deep learning. However, the same video AI is also responsible for self-driving cars crashing into pedestrians, deepfakes that make us believe misinformation, and mass-surveillance systems that monitor our behaviour. The research community's over-concentration on recognition accuracy has come at the expense of the human alignment needed for societal acceptance. The HAVA Lab is an interdisciplinary lab that will study how to realise the much-needed digital transformation towards human-aligned video AI.
The HAVA Lab will host 7 PhD candidates working together with researchers from all 7 faculties of the university, from video AI and its alignment with human cognition, ethics, and law, to its embedding in medical domains, public safety, and business. The lab has 9 supervisors in total, spanning all 7 faculties for maximum interdisciplinarity. Depending on the specific topic, the PhD candidates will also have a strong link to the working environment and faculty of their respective supervisors. The HAVA Lab has been given a unique central location at the library, an ideal hub for interdisciplinary collaboration. The PI of the lab is prof. dr. Cees Snoek.
The PhD position on alignment between video AI and law will be supervised by dr. Heleen Janssen and prof. dr. Cees Snoek.
What are you going to do?
For this position, you will research the compliance of video AI systems with the fundamental rights and ethical values on which our European societies are based. Can we incorporate legal rights such as privacy and non-discrimination, as well as ethical standards of transparency, accountability, non-maleficence, equity, and justice, into the design, development and deployment of video AI systems? Can we develop human-aligned video AI that accords with other regulatory concerns, while grounding legal and policy discussions in technical realities, and vice versa? Our ideal PhD candidate has an information law background with legal and technical AI knowledge, or the willingness to acquire these.
Tasks and responsibilities
Your tasks will be to:
- Perform novel research on video AI and its human alignment in society;
- Actively collaborate within the interdisciplinary HAVA Lab;
- Present research results at international conferences and in journals;
- Actively share your research with the public as well as in the societal domain, in accordance with UvA guidelines;
- Assist in teaching activities such as lab assistance and student supervision;
- Pursue and complete a PhD thesis within the appointed duration of four years.