Through advances in artificial intelligence (AI), medical imaging has gained an increasingly important role in precision medicine. AI methods are being used both in radiology (“radiomics”) and pathology (“pathomics”) to develop prediction models that form the basis of more precise and personalized clinical decision-making. While radiomics and pathomics models often have similar goals and contain complementary information, the two fields remain largely separate. Moreover, despite major advances in both fields, implementation in real-world clinical practice remains limited.
To address this, the AI for Integrated Diagnostics (AIID) research line aims to join forces between radiomics and pathomics and create trustworthy models that aid clinicians in decision-making. Our mission is to develop multi-modal machine learning methods, primarily for improved diagnosis and therapy-response prediction in oncology. While we focus on radiology and pathology, our vision is to extend our methods to additional data types. By smartly combining these data types through AI, our methods could substantially impact clinical practice. The current project, funded by a Dutch National Growth Fund AINed Fellowship, kickstarts this research line with three PhD students and two Postdocs.
This Postdoc position focuses on developing novel multi-modal deep learning methods. Only a handful of studies in recent years have scratched the surface of “RadioPathomics” (i.e., combined radiomics and pathomics). You will develop multi-modal machine learning methods that learn from both modalities simultaneously by sharing representations during learning, e.g., through coordination (promoting equivalence between modality representations) or co-learning (sharing and aligning intermediate representations). Your methods should be model- and disease-agnostic, so that you can incorporate developments from our radiomics and pathomics research and apply them across different disease domains.
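To make the coordination idea concrete, below is a minimal NumPy sketch of one common coordination strategy: a symmetric contrastive (InfoNCE/CLIP-style) loss that pulls embeddings of matched radiology–pathology pairs together and pushes mismatched pairs apart. This is an illustrative assumption about how such coordination could be implemented, not a description of the project's actual methods; all names, dimensions, and the temperature value are hypothetical.

```python
import numpy as np

def info_nce_loss(rad_emb, path_emb, temperature=0.07):
    """Symmetric InfoNCE loss coordinating two modality embeddings.

    Matched radiology/pathology pairs (same row index) are treated as
    positives; all other pairings in the batch act as negatives.
    """
    # L2-normalize so the dot product equals cosine similarity
    rad = rad_emb / np.linalg.norm(rad_emb, axis=1, keepdims=True)
    path = path_emb / np.linalg.norm(path_emb, axis=1, keepdims=True)
    logits = rad @ path.T / temperature  # (N, N) similarity matrix

    def xent(l):
        # softmax cross-entropy with targets on the diagonal (matched pairs)
        l = l - l.max(axis=1, keepdims=True)
        logp = l - np.log(np.exp(l).sum(axis=1, keepdims=True))
        return -np.mean(np.diag(logp))

    # symmetrize: radiology -> pathology and pathology -> radiology
    return 0.5 * (xent(logits) + xent(logits.T))

# toy example: 4 paired cases whose two views share an underlying signal
rng = np.random.default_rng(0)
shared = rng.normal(size=(4, 8))
rad = shared + 0.1 * rng.normal(size=(4, 8))    # hypothetical radiomics view
path = shared + 0.1 * rng.normal(size=(4, 8))   # hypothetical pathomics view

aligned = info_nce_loss(rad, path)
shuffled = info_nce_loss(rad, path[::-1])  # break the case pairing
print(aligned < shuffled)  # correctly paired cases yield a lower loss
```

In a deep learning setting, the same loss would be applied to the outputs of two modality-specific encoders and backpropagated through both, encouraging equivalent representations; co-learning would instead share or align intermediate layers directly.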