Proceedings will be uploaded in due time.
Organiser & chair: Mujdat Cetin
Abstract:
This session explores recent advances in the theory and methods of uncertainty quantification for computational imaging. As computational imaging involves solving ill-posed inverse problems using complicated estimators, quantifying uncertainties associated with formed imagery, including both aleatoric (stochastic) and epistemic (systematic or modeling) uncertainties, has been an important topic of interest. The recent emergence of deep learning-based image formation methods, including generative models, has both increased the need for proper uncertainty quantification with an eye towards trustworthiness and provided tools that may enable progress in that direction. The session highlights a variety of recent statistical and learning-driven work in this area, including estimation and visualization of epistemic and aleatoric uncertainties, posterior variance-based error quantification, distribution-free uncertainty quantification using conformal prediction, and task-driven uncertainty quantification for computational imaging, among others.
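As a minimal illustration of the distribution-free uncertainty quantification via conformal prediction mentioned above, the following toy sketch (synthetic data; all variable names are illustrative, not from any session material) applies split conformal prediction to per-pixel reconstruction errors:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy calibration set: predicted pixel values and ground truth with noise.
n_cal, n_test = 500, 5
preds_cal = rng.normal(size=n_cal)
truth_cal = preds_cal + rng.normal(scale=0.3, size=n_cal)

# Nonconformity score: absolute residual on the calibration set.
scores = np.abs(truth_cal - preds_cal)

# For miscoverage level alpha, take the ceil((n+1)(1-alpha))/n empirical
# quantile of the scores; this yields finite-sample marginal coverage
# with no distributional assumptions beyond exchangeability.
alpha = 0.1
q_level = np.ceil((n_cal + 1) * (1 - alpha)) / n_cal
q_hat = np.quantile(scores, q_level, method="higher")

# Prediction intervals for new pixels: prediction +/- q_hat.
preds_test = rng.normal(size=n_test)
intervals = np.stack([preds_test - q_hat, preds_test + q_hat], axis=1)
print(intervals.shape)  # (5, 2)
```

The same recipe applies per pixel (or per patch) to a learned image reconstructor, with the calibration set drawn from held-out image/ground-truth pairs.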
Organiser & chair: Aviad Levis / Marie Ygouf
Abstract:
The topic of this session is Computational Imaging for Precision Astrophysics, covering a variety of exciting new approaches for ultra-high-resolution astronomy. The session focuses on black holes and exoplanets, each of which poses unique challenges. Horizon-scale black hole science is incredibly challenging due to the extreme resolution required (~20 micro-arcseconds) to resolve the largest black holes on the sky: Sagittarius A* and M87*. Nonetheless, innovations and advancements in interferometry over the last decade have opened the door to new horizon-scale science: from the first images with the Event Horizon Telescope, to precise astrometry with GRAVITY. The talks and posters in this session cover a variety of topics, from novel computational imaging that integrates interferometry with information theory and modern machine learning to new mission concepts for space interferometry. For exoplanets, beyond resolution, a key challenge is that of contrast: the signal from the planet can be up to a billion times fainter than that of its host star. Nonetheless, direct imaging holds the key to characterizing composition and habitability. In this session, we will hear about advancements in the field and how precise modeling of instrumental optics could be the key to pushing the detection limit towards the fundamental noise limits.
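The ~20 micro-arcsecond figure above follows directly from the diffraction limit, theta ≈ lambda / D: a sketch (the 1.3 mm EHT observing wavelength and an Earth-diameter baseline are standard values, used here purely as an arithmetic check):

```python
import math

# Diffraction-limited angular resolution: theta ~ lambda / D (radians).
wavelength_m = 1.3e-3   # EHT observes at ~1.3 mm
baseline_m = 1.2742e7   # longest baselines approach Earth's diameter

theta_rad = wavelength_m / baseline_m
RAD_TO_ARCSEC = 180.0 / math.pi * 3600.0
theta_uas = theta_rad * RAD_TO_ARCSEC * 1e6

print(f"{theta_uas:.0f} micro-arcseconds")  # prints "21 micro-arcseconds"
```

This matches the ~20 micro-arcsecond resolution quoted for horizon-scale imaging of Sagittarius A* and M87*.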
Organiser & chair: Andrew Webb (Chair) & Florian Knoll
Abstract:
This session will explore different challenges associated with medical imaging in low resource settings. MRI, X-ray/CT and ultrasound image requirements will be compared with one another, as well as issues of data transmission and interpretation. The different potential roles of AI in image acquisition, processing and diagnosis will be a common theme throughout.
Organiser & chair: Julia Schnabel
Abstract:
This session explores the critical “R4” challenges—Reconstruction, Resolution, Regularization, and Representation—that are central to advancing medical imaging technologies. With a strong focus on how novel computational methods, including machine learning and physics-informed techniques, can address longstanding limitations in image quality, acquisition time, and motion artifacts, this session highlights innovative approaches across a range of medical imaging modalities, including MRI, PET, and photoacoustic tomography (PAT), and demonstrates synergistic opportunities for overcoming common imaging challenges.
Organiser & chair: Jong Chul Ye
Abstract: Recently, foundation models, which are large-scale models pretrained on massive amounts of diverse data in a self-supervised manner, have been quickly replacing existing CNN-based end-to-end and/or supervised-learning approaches due to their superior performance. As a result of large-scale self-supervised pre-training, foundation models have excellent generalization capabilities with intriguing emergent properties. This session targets recent advances in foundation models for computational imaging, such as text+image models, as well as diffusion models for inverse problems, to understand their advantages and opportunities.
Organiser & chair: Stefan Wijnholds (Chair) & Kazunori Akiyama
Abstract:
With the development and deployment of LOFAR2.0, SKA and ngVLA, radio interferometry is entering the Big Data Era. To keep up with the data deluge that will be generated by these instruments within an acceptable cost envelope for computing hardware and energy consumption, at least an order-of-magnitude improvement in computational performance needs to be realized. Also, data processing pipelines will need to scale well and use processing components optimized for energy efficiency. These challenges require us to rethink the way we process radio interferometric data, opening opportunities for exploration of new avenues. This session aims to present an overview of the challenges and provide a forum for discussion of new ways to process radio interferometric data.
Organiser & chair: Laurence Perreault Levasseur
Abstract:
Recent progress in machine learning and generative modeling has opened new avenues to tackle previously insoluble high-dimensional inverse problems in astronomy and astrophysics, particularly in Bayesian image reconstruction. While these methodologies show great promise in a range of applications from field-level cosmology to differentiable optics systems, multiple open problems stand in the way of groundbreaking discoveries. This session will explore recent applications and proposals addressing the development of computationally tractable methodologies to reconstruct posterior samples in imaging problems. We will also discuss assessing their accuracy in real-world settings and addressing the problem of robustness to distributional shifts, which remains a key focus of current research.
Organiser & chair: Audrey Repetti
Abstract:
This session focuses on computational imaging methods at the interface of optimization and deep learning. Recent advances in pairing these two areas have led to significant improvements in image reconstruction and enhancement. Optimization techniques traditionally provided mathematically rigorous solutions for imaging problems, but they often struggled with high-dimensional data and non-linearities. Deep learning, particularly neural networks, has transformed this landscape by learning complex mappings from data, enabling faster and more expressive image reconstruction. When combined, these approaches harness optimization algorithms to refine and guide the training of deep networks, resulting in robust models that are highly efficient for performing computational imaging tasks.
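A minimal sketch of one popular way to combine the two areas is the plug-and-play scheme: a data-fidelity gradient step from optimization alternates with a denoiser standing in for the regularizer (in practice a trained network; here a simple moving average, and the toy signal and names are illustrative):

```python
import numpy as np

def pnp_reconstruct(y, A, denoise, step, n_iter=100):
    """Plug-and-play iteration: alternate a gradient step on the
    data-fidelity term 0.5*||Ax - y||^2 with a denoiser that acts
    as an implicit (typically learned) regularizer."""
    x = A.T @ y
    for _ in range(n_iter):
        grad = A.T @ (A @ x - y)       # gradient of the data term
        x = denoise(x - step * grad)   # denoiser replaces the prox
    return x

# Toy demo: underdetermined linear measurements of a piecewise-flat signal.
rng = np.random.default_rng(1)
n, m = 64, 32
x_true = np.zeros(n)
x_true[20:40] = 1.0
A = rng.normal(size=(m, n)) / np.sqrt(m)
y = A @ x_true

# Stand-in denoiser (a trained CNN in real pipelines): moving average.
def denoise(x, k=3):
    return np.convolve(x, np.ones(k) / k, mode="same")

step = 1.0 / np.linalg.norm(A, 2) ** 2   # stable step for the data term
x_hat = pnp_reconstruct(y, A, denoise, step)
print(np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```

Swapping the hand-made `denoise` for a learned network is exactly the "optimization guides the deep component" pattern the session describes.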
Organiser & chair: Daniel K. Sodickson
Abstract:
If we had all the hardware power, compute resources and data analysis techniques we could dream of, what could we achieve in medical imaging? What fundamental scientific or clinical questions could we answer? This session will explore advanced imaging capabilities which are currently being enabled or may soon be enabled by emerging hardware platforms, software tools, and datasets. Speakers will share a focus on new information that can be gleaned from MRI, PET, CT, ultrasound, and other imaging or sensing modalities. They will also comment on which developments may eventually translate to, or otherwise influence, the low-resource settings to be explored in more detail in a companion session.
Speaker: Greg Zaharchuk
Topic: Perspectives on the past, present, and future of medical imaging
Abstract: TBC
Speaker: Jean-Luc Starck
Topic: Perspectives on the past, present, and future of astronomical imaging
Abstract: TBC
Speaker: Carola Schoenlieb
Topic: Perspectives on the past, present, and future of computational imaging
Abstract: TBC
Panel Discussion
Topic: Challenges and opportunities at the interface
Organisers: He Sun & Katie Bouman (Committee member)
Abstract:
Large optical telescopes have a rich history of pushing the boundaries of imaging technology and astronomical science. With a series of revolutionary optical telescopes launched recently or planned for launch soon, in the next decade we expect to discover astronomical phenomena that will redefine our view of the universe. These telescopes will open windows to studying galaxies in the early universe, Earth-like planets around distant stars, and unknown discoveries beyond our imagination. This session will review recent developments in computational imaging techniques for large space- or ground-based optical observatories. Topics include adaptive optics for future space telescopes (e.g. RST) and 30 m-class ground telescopes (e.g. E-ELT), data processing and early results from JWST, and machine learning methods for gravitational lensing and exoplanet studies.
Organisers: Kerstin Hammernik (Chair) & Julia Schnabel (Committee member)
Abstract:
The aim of this session is to guide the audience through the medical imaging pipeline – from acquisition and reconstruction to analysis – with a little help from AI. The talks will cover different imaging modalities and focus on inverse problems in medical image reconstruction, motion estimation and motion correction. As image quality and accurate diagnostic assessment are key for the applicability of AI-based solutions in the medical context, the talks cover these aspects and provide insights into the uncertainty of AI-based algorithms in image reconstruction and analysis.
Organisers: Kazunori Akiyama (Chair) & Yves Wiaux (Committee member)
Abstract:
Computational imaging is a key process in radio and optical/near-infrared interferometry to reveal the fine views of the universe from observational data taken in Fourier space. Over the last decade, significant progress has been made in the development of new computational imaging techniques to address and overcome various challenges brought by the advent of the new instruments including the Event Horizon Telescope (EHT), Low Frequency Array (LOFAR), MeerKAT and Very Large Telescope Interferometer (VLTI), as well as upgrades of existing facilities such as Very Large Array (VLA). Many algorithmic and data processing challenges arise in our quest to endow these instruments, set to observe the sky at new regimes of sensitivity and resolution, with their expected acute vision. In this new era, imaging encompasses not only forming 2D spatial maps of observed fields of view, but also the reconstruction of the spectrum, polarisation, and dynamics of the sources of interest, not to mention the mapping of underlying physical quantities. This focused session will gather interferometric imaging experts to review the exciting frontiers in the field.
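The core inverse problem sketched above — recovering a sky image from data taken in Fourier space — can be illustrated in a few lines. This toy sketch (synthetic sources and a random stand-in for real uv-coverage; not any instrument's actual pipeline) forms the classic "dirty image" by zero-filling the unmeasured visibilities:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 64

# Toy "sky": two point sources.
sky = np.zeros((n, n))
sky[20, 20] = 1.0
sky[40, 45] = 0.5

# An interferometer samples the sky's 2D Fourier transform
# ("visibilities") only at the (u, v) points its baselines cover.
vis_full = np.fft.fft2(sky)
mask = rng.random((n, n)) < 0.2     # stand-in for real uv-coverage
vis_measured = vis_full * mask

# The "dirty image": inverse FFT with unmeasured visibilities set to
# zero. Deconvolution algorithms (CLEAN, regularized or learned
# methods) then recover the sky from this incomplete data.
dirty = np.real(np.fft.ifft2(vis_measured))
print(dirty.shape)  # (64, 64)
```

The imaging algorithms discussed in the session differ precisely in how they fill in, or regularize away, the missing Fourier information — and extend the same idea to spectral, polarimetric, and dynamic reconstruction.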
Organisers: Nicholas Durr (Chair) & Audrey Bowden (Committee member)
Abstract:
Despite considerable progress in modern healthcare, the state of the art in high-income environments differs markedly from that in low-resource settings. Constraints on cost, size, usability and environmental stability pose interesting challenges for engineering suitable solutions in these spaces. This session will bring together impactful research that showcases innovative strategies for imaging suitable for implementation in low-resource settings such as rural areas, primary care centers and low- and middle-income countries.
Organisers: Gitta Kutyniok (Chair) & Ulugbek Kamilov (Committee member)
Abstract:
Large-scale optimization problems arise in a variety of imaging tasks. Examples include dictionary learning, low-rank matrix recovery, blind deconvolution, and phase retrieval. Conventional approaches for solving many of these optimization problems involve designing algorithms that can effectively leverage a wide variety of structural constraints.
This session will provide an excellent opportunity for the wider signal processing and imaging community to come together and share recent developments, open challenges, and future directions in large-scale optimization methods suitable for analyzing imaging data.
Organisers: Francois Lanusse (Chair) & Jean-Luc Starck (Committee member)
Abstract:
Machine learning has had significant success in astronomy in recent years, and it has become clear that useful applications of ML require a tight connection to physical modelling. In this session, we will explore several aspects of embedding physics in an ML model, from building hybrid models that merge deep learning and physical models, to using known physical symmetries and equivariances to design dedicated neural architectures.
Organisers: Ivan Dokmanic (Chair) & Philip Schniter (Committee member)
Abstract:
Generative models are experiencing a second youth in imaging and scientific inference. New ideas include injective models for sampling high-dimensional posteriors, theoretical advances on statistical, approximation-theoretic, and topological questions, generating continuous functions that dovetail with downstream PDE solvers, and creative uses of generative models to probe performance limits of inference systems. The session “Generative Inference and Calibration” brings together prominent researchers spearheading these exciting new directions.
Organisers: Efrat Shimron (Chair) & Florian Knoll (Committee member)
Abstract:
This session will focus on scenarios in which deep learning algorithms developed for medical imaging might produce unreliable results, e.g. due to distribution shifts, bias, hallucinations, or other factors. The session is planned following the increasing interest in studying sensitivities and instabilities of such algorithms. The talks will discuss strategies for exposing algorithmic sensitivities and addressing them, preventing inverse crimes, and increasing algorithmic interpretability. The aim of the session is to raise awareness of the growing problem of unreliable AI performance in the context of medical imaging, suggest guidelines and solutions, and spark community discussion.
Organisers: Thomas Pock (Chair) & Philip Schniter (Committee member)
Abstract:
Expressive regularizers are a very important component in solving ill-posed inverse problems in imaging. In recent years, hand-designed regularizers have been gradually replaced by data-driven ones. Provided sufficient training data is available, it is nowadays possible to learn tailored regularizers for a certain problem class. This usually leads to a large increase in reconstruction quality, but learned regularizers are much harder to analyse, and it is difficult to give guarantees on convergence behaviour, generalisation ability, or reconstruction error. In this session, we will present and discuss the latest methods, techniques and applications in this cutting-edge field of research.
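The variational setup behind learned regularizers can be sketched in a few lines: minimize a data term plus a regularizer R by gradient descent. In learned approaches R (and hence its gradient) is a trained network obtained by autodiff; in this toy sketch (all names and the smoothness penalty are illustrative stand-ins) a hand-made quadratic difference penalty plays that role:

```python
import numpy as np

def reconstruct(y, A, grad_R, lam=0.1, step=0.1, n_iter=200):
    """Variational reconstruction: minimize
        0.5 * ||A x - y||^2 + lam * R(x)
    by gradient descent. In learned approaches, grad_R comes from a
    trained network; here a hand-made smoothness penalty stands in."""
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        g = A.T @ (A @ x - y) + lam * grad_R(x)
        x = x - step * g
    return x

# Stand-in regularizer: R(x) = 0.5 * sum((x[i+1] - x[i])^2),
# whose gradient is computed explicitly below.
def grad_R(x):
    d = np.diff(x)
    g = np.zeros_like(x)
    g[:-1] -= d
    g[1:] += d
    return g

# Toy demo: recover a smooth signal from underdetermined noisy data.
rng = np.random.default_rng(7)
n, m = 50, 30
x_true = np.sin(np.linspace(0, 3, n))
A = rng.normal(size=(m, n)) / np.sqrt(m)
y = A @ x_true + 0.01 * rng.normal(size=m)

x_hat = reconstruct(y, A, grad_R)
print(np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```

The analysis questions the abstract raises — convergence, generalisation, reconstruction error — are exactly what becomes hard once the explicit `grad_R` above is replaced by a black-box network.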