- Are you inspired by the prospect of shaping the future of autonomous driving?
- Are you fascinated by explainable artificial intelligence?
- Are you eager to work on an interdisciplinary team that combines engineering and the humanities?
- Then apply for the PhD position on Explainable AI in Machine Vision for Autonomous Driving!
Job Description
Autonomous driving is a key application of artificial intelligence, and of machine vision in particular. Although contemporary machine vision systems now match or outperform their biological counterparts on many benchmark tasks, they are far from perfect, especially when integrated with the complex action-selection systems that drive autonomous vehicles.
Methods from explainable AI (XAI) are increasingly used to evaluate and improve the performance of AI systems. Although the technical implementation of these methods is becoming routine, it remains unclear how they can be used most effectively to ensure safe, responsible, and transparent artificial intelligence. For example, although XAI methods can precisely characterize an AI system's classification performance, it remains unclear how much error in that performance can and should be tolerated. Moreover, although other XAI methods can tell us which factors a system actually considers when making decisions, it remains unclear which factors are permissible and which are not.
This PhD project is designed to identify, systematize, and evaluate XAI methods, and to establish best practices for machine vision in the context of autonomous driving, taking into account human factors and societal norms. To this end, it will be necessary to consider not only mathematical and technical details, but also relevant insights from, for example, the psychology of human decision-making, the regulation and standardization of explainability, and ethical principles of safety, transparency, privacy, and fairness.
More specifically, research tasks will include:
- Reviewing relevant literature from the social sciences and humanities on AI safety and explainability.
- Reviewing technical literature on machine learning and explainable AI.
- Developing a normative evaluation framework for the use of explainable AI in machine vision for autonomous driving.
- Collaborating on ongoing engineering projects that aim to implement XAI methods in machine vision for autonomous driving.
As this is an inherently interdisciplinary research project, the ideal candidate will combine technical expertise in machine learning and explainable AI (e.g., visualization techniques and feature-importance measures) with the ability to engage with relevant issues in the social sciences and humanities (in particular, norms of explainability and AI safety).
The candidate will be integrated in the Mobile Perception Systems (MPS) lab as well as the Philosophy & Ethics (P&E) group. They will be a member of the LTP ROBUST consortium, funded by NWO and NXP Semiconductors, and of the EAISI institute at TU/e.