Are you interested in philosophical questions surrounding how machine learning and AI models are used across society for social prediction, such as facial recognition, recidivism risk, or job screening? Check out this PhD position!
What will you do? The same machine learning methods that are being used at an unprecedented scale across the sciences are also wide-ranging across society. ML models determine what news we see, assign risk scores for fraud, and more. LLMs are structuring our knowledge, with ChatGPT integrated into Bing search and Quora answers, despite well-documented cases of ChatGPT ‘hallucinations.’
Current approaches to evaluating ML models in society have clustered around issues of fairness, bias, problems of justice raised by introducing ML models at scale, the right to explanation, and more. While all of these issues remain important, there is a deep worry that ML models might not be providing us with genuine information or knowledge in the first place. Before we can make informed decisions about when and where ML models should be used across society, we need to understand their epistemic value.
The aim of this PhD project is to bring methods and resources from philosophy of science (e.g. idealisation and representation) to bear on important questions in AI ethics regarding the appropriate use of ML models in society. How do ML models idealise social phenomena? When do the idealisations of ML models get in the way of acceptable use?
This PhD project is part of the ERC Starting Grant project Machine Learning in Science and Society: A Dangerous Toy? (TOY). The project team consists of the PI (Emily Sullivan), this PhD position, and two forthcoming postdoc positions. The PhD candidate will be embedded within the theoretical philosophy group at Utrecht University and the Normative Philosophy of Science research lab.