Generative artificial intelligence is disrupting many domains, including communication, education, and academic research. Recently, some companies and research initiatives have proposed that generative AI could, and perhaps should, be used in healthcare, both for discovering new medical technologies and for patient treatment. In terms of research, generative AI can identify trends and patterns in vast quantities of medical literature, synthesising data and enabling new medical discoveries. In terms of improving medical care, LLMs have the potential to be integrated into the doctor-patient relationship. They can summarise consultations, offer ongoing medical advice, and independently provide mental health services.
Using LLMs and other kinds of generative AI in healthcare comes with serious ethical issues. The most challenging of these is privacy, but others include the ability of these technologies to generate misleading or unreliable information for healthcare practitioners, the emergence of “responsibility gaps” in the healthcare system, and issues of social and epistemic justice among different patients and users. Some of these ethical challenges are similar to risks in other domains, but applying generative AI in healthcare potentially comes with distinct harms. Unreliable healthcare information can have devastating consequences, which may be magnified if the design of LLMs does not account for the privacy of the highly sensitive data they handle, as well as for issues of justice and responsibility.
This PhD will begin by examining the key ethical challenges of privacy, reliability, justice, and responsibility, showing how these ethical risks can be minimised or mitigated. This will prepare the candidate to develop a new ethical framework that outlines how generative AI can be ethically deployed in various kinds of healthcare. The candidate can choose to focus either on how generative AI can be used in medical research or on how LLMs can provide new tools for doctors and patients. The candidate will also be responsible for sharing ethical insights with the other partners in the MedGPT consortium (see below for details). Additionally, an interest in analysing the use of generative AI in healthcare from the perspective of philosophy of science or intercultural philosophy would be a plus, although it is not a requirement.
Funding & Institutional Embedding
This PhD position is part of the Medical GPT: Revolutionising Healthcare with Ethical AI project (MedGPT). The candidate will be supervised by Matthew J. Dennis (TU/e), Vlasta Sikimić (TU/e), and Filippo Santoni de Sio (TU/e), and will be hosted by Eindhoven University of Technology (TU/e) in the Philosophy & Ethics Group. The candidate will be responsible for working with other stakeholders in the MedGPT project and will be encouraged to collaborate with researchers across the consortium.
Philosophy & Ethics Group
TU/e’s Philosophy and Ethics (P&E) group connects philosophy and ethics to emerging technologies and innovation. Researchers in the P&E group primarily study innovative technologies and technology-related problems in detail to enable empirically informed analyses that are meaningful to philosophers, researchers across disciplines, and other societal stakeholders. To do this, the group has established close interdisciplinary collaborations with researchers from groups in the TU/e School of Innovation Sciences, as well as with mechanical engineers, climate scientists, and archaeologists, among others. The group’s expertise covers a variety of philosophical sub-disciplines, including applied ethics, normative ethics, meta-ethics, philosophy of science and technology, and epistemology. More information about P&E can be found here:
https://www.tue.nl/en/research/research-groups/innovation-sciences/philosophy-ethics