Potential Pitfalls of Deep Learning in Medical Image Reconstruction


This session will focus on scenarios in which deep learning algorithms developed for medical imaging may produce unreliable results, e.g. due to distribution shifts, bias, hallucinations, or other factors. The session follows the growing interest in studying the sensitivities and instabilities of such algorithms. The talks will discuss strategies for exposing and addressing algorithmic sensitivities, preventing inverse crimes, and increasing algorithmic interpretability. The aim of the session is to raise awareness of the growing problem of unreliable AI performance in medical imaging, suggest guidelines and solutions, and spark community discussion.

Organisers: Efrat Shimron (Chair) & Florian Knoll (Committee member)

Session Schedule

17:30 - 17:50  Invited talk — Efrat Shimron
17:50 - 18:10  Invited talk — Daniel Rueckert
18:10 - 18:30  Invited talk — Jeffrey Fessler
18:30 - 18:50  Invited talk — Jon Tamir
18:50 - 19:10  Invited talk — Anders Hansen
19:10 - 20:30  Invited poster — Akshay Chaudhari
19:10 - 20:30  Committee member poster — Florian Knoll
19:10 - 20:30  Contributed poster — Arjun Desai
19:10 - 20:30  Contributed poster — Guanxiong Luo
19:10 - 20:30  Contributed poster — Tamara T. Mueller