Date: 2025-Jan-28

Foundation and Multimodal Models in Computational Imaging

Abstract:

Foundation models, large-scale models pretrained on massive amounts of diverse data in a self-supervised manner, are rapidly replacing existing CNN-based, end-to-end supervised approaches owing to their superior performance. Thanks to large-scale self-supervised pre-training, foundation models exhibit excellent generalization capabilities and intriguing emergent properties. This session covers recent advances in foundation models for computational imaging, including text-image multimodal models and diffusion models for inverse problems, with the aim of understanding their advantages and the opportunities they open.
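To make the "diffusion models for inverse problems" theme concrete, below is a minimal sketch of diffusion posterior sampling for a linear inverse problem y = A x + noise. It is an illustration only, not any speaker's specific method; the pretrained noise predictor eps_model, the forward operator A, and the guidance weight zeta are hypothetical placeholders.

    import torch

    def posterior_sample(eps_model, A, y, shape, betas, zeta=1.0):
        """Reverse diffusion with a data-consistency gradient at each step
        (DPS-style guidance). Assumes eps_model(x_t, t) is a differentiable
        pretrained noise predictor and A is a measurement matrix."""
        alphas = 1.0 - betas
        abar = torch.cumprod(alphas, dim=0)      # cumulative product \bar{alpha}_t
        x = torch.randn(shape)                   # start from pure noise x_T
        for t in range(len(betas) - 1, -1, -1):
            x = x.detach().requires_grad_(True)
            eps = eps_model(x, t)                # predicted noise at step t
            # Tweedie estimate of the clean image from the noisy iterate
            x0_hat = (x - (1.0 - abar[t]).sqrt() * eps) / abar[t].sqrt()
            # data-consistency term: distance between y and A applied to the estimate
            loss = torch.linalg.vector_norm(y - A @ x0_hat.reshape(-1))
            grad = torch.autograd.grad(loss, x)[0]
            # standard DDPM posterior mean, then noise injection (none at t = 0)
            mean = (x - betas[t] / (1.0 - abar[t]).sqrt() * eps) / alphas[t].sqrt()
            noise = torch.randn_like(x) if t > 0 else torch.zeros_like(x)
            # nudge the ancestral update toward measurement consistency
            x = mean + betas[t].sqrt() * noise - zeta * grad
        return x.detach()

The key design choice this sketch highlights is that the pretrained diffusion model serves as a fixed prior, while measurement consistency is enforced only at sampling time, so the same foundation model can be reused across different forward operators A.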

Organiser & chair: Jong Chul Ye

Session Schedule

8:00 - 8:20
Invited talk: Diffusion Foundation Models for Inverse Problems
Jong Chul Ye
8:20 - 8:40
Invited talk: Datasets Design for Improved and Reliable Deep Learning-Based Imaging
Reinhard Heckel
8:40 - 9:00
Invited talk: MediConfusion: Can you trust your AI radiologist? Probing the Reliability of Multimodal Medical Foundation Models
Mahdi Soltanolkotabi
9:00 - 9:20
Invited talk: Towards Multi-Modal Foundation Models for 3D Medical Imaging
Akshay S. Chaudhari
9:20 - 9:40
Invited talk: 3D Computer Vision in the Age of Deep Learning
Daniel Cremers
9:40 - 11:00
Poster session
Invited poster: Learning Generative Models from Corrupted Data
Giannis Daras*
Invited poster: Iterative Renoising as an Efficient Way to Solve Imaging Inverse Problems with Diffusion
Philip Schniter
Contributed poster: AI4EYE: Translation and Fusion of Multimodal Retinal Imaging
Alin Achim
Contributed poster: Blind Image Restoration via Fast Diffusion Inversion
Hamadi Chihaoui*