Do you want to conduct philosophical research on the explainability and fairness of state-of-the-art large language models? Do you have an interdisciplinary background that spans philosophy, artificial intelligence, and cognitive science? Are you interested in working closely with scientists and engineers in an EU-funded doctoral network to help promote fair and transparent generative AI?
We are seeking a highly motivated doctoral candidate to join our research team as part of a prestigious Marie Skłodowska-Curie Actions (MSCA) doctoral network: AlignAI. The AlignAI doctoral network aims to train doctoral candidates to develop, evaluate, and engage with Large Language Models (LLMs). It focuses on aligning these models with human values to ensure their development and deployment are ethically sound and socially beneficial. By integrating expertise from social sciences, humanities, and technical disciplines, the project will address critical issues such as explainability and fairness, thereby ensuring LLMs contribute positively to education, mental health, and news consumption.
The doctoral candidate in this position will develop a philosophical framework for ensuring and evaluating the explainability and fairness of LLMs. This framework will center on conceptual and theoretical tools for understanding, systematizing, and evaluating formal and computational methods from AI that help explain the behavior of state-of-the-art LLMs, and that help align such systems with human values in general and fairness-related values in particular. Depending on the interests and expertise of the candidate, these tools might be designed to attribute conceptual or cognitive capacities to artificial neural networks on the basis of their learned representations, to apply benchmarks or statistical measures for evaluating the fairness and toxicity of their generated content, and/or to assess the success of probing and intervention techniques for manipulating their behavior.
The doctoral candidate will need to apply methods from philosophy of science, philosophy of mind, and ethics to recent work in artificial intelligence, psychology, neuroscience, linguistics, and/or related disciplines. They will also need to learn about contemporary LLM applications in education, mental health, and online news consumption. The ideal candidate will therefore have an interdisciplinary background spanning several of these disciplines, together with a demonstrated interest in AI technology and its societal impact. At the same time, the candidate will have the language, communication, and analytic skills necessary to produce philosophical research of publishable quality.
The doctoral candidate will be supervised by Carlos Zednik and Lambèr Royakkers, and will be embedded within the Philosophy & Ethics group, Department of Industrial Engineering and Innovation Sciences, Eindhoven University of Technology (TU/e). Additional supervision will be provided within the AlignAI doctoral network by Andrea Cavallaro (EPFL) and Christoph Lütge (TU München). The candidate will be affiliated with the Eindhoven Center for Philosophy of AI (ECPAI) and the Eindhoven Artificial Intelligent Systems Institute (EAISI), and will be expected to contribute actively and positively to the philosophical and AI research communities in Eindhoven. The candidate will also be expected to attend workshops, summer schools, and research visits at AlignAI partner institutions.
This position is one of four PhD positions offered at TU/e through the AlignAI doctoral network; the others are in Human-Technology Interaction (Martijn Willemsen) and Industrial Design (Jesse Benjamin & Stephan Wensveen).