5 – 10 February, 2023
Abstract submission is now open.
Organisers: He Sun & Katie Bouman (Committee member)
Large optical telescopes have a rich history of pushing the boundaries of imaging technology and astronomical science. With a series of revolutionary optical telescopes recently launched or planned for launch soon, in the next decade we expect to discover astronomical phenomena that will redefine our view of the universe. These telescopes will open windows to studying galaxies in the early universe, Earth-like planets around distant stars, and unknown discoveries beyond our imagination. This session will review recent developments in computational imaging techniques for large space- and ground-based optical observatories. Topics include adaptive optics for future space telescopes (e.g. RST) and 30 m-class ground-based telescopes (e.g. E-ELT), data processing and early results from JWST, and machine learning methods for gravitational lensing and exoplanet studies.
Organisers: Kerstin Hammernik (Chair) & Julia Schnabel (Committee member)
The aim of this session is to guide the audience through the medical imaging pipeline – from acquisition and reconstruction to analysis – with a little help from AI. The talks will cover different imaging modalities and focus on inverse problems in medical image reconstruction, motion estimation and motion correction. As image quality and accurate diagnostic assessment are key to the applicability of AI-based solutions in the medical context, the talks address these aspects and provide insights into the uncertainty of AI-based algorithms in image reconstruction and analysis.
Organisers: Kazunori Akiyama (Chair) & Yves Wiaux (Committee member)
Computational imaging is a key process in radio and optical/near-infrared interferometry, revealing fine-scale views of the universe from observational data acquired in Fourier space. Over the last decade, significant progress has been made in the development of new computational imaging techniques to address the challenges brought by the advent of new instruments, including the Event Horizon Telescope (EHT), the Low Frequency Array (LOFAR), MeerKAT and the Very Large Telescope Interferometer (VLTI), as well as upgrades of existing facilities such as the Very Large Array (VLA). Many algorithmic and data processing challenges arise in the quest to endow these instruments, set to observe the sky at new regimes of sensitivity and resolution, with their expected acute vision. In this new era, imaging encompasses not only forming 2D spatial maps of observed fields of view, but also reconstructing the spectrum, polarisation, and dynamics of the sources of interest, not to mention mapping the underlying physical quantities. This focused session will gather interferometric imaging experts to review the exciting frontiers of the field.
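Since an interferometer samples the sky's Fourier transform (the visibilities) only at the (u, v) points covered by its baselines, image formation amounts to recovering an image from incomplete Fourier data. A minimal NumPy sketch of this measurement model — the grid size, source positions and random uv-coverage below are all hypothetical, not any instrument's actual sampling:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "sky": two point sources on a small grid (purely illustrative).
n = 64
sky = np.zeros((n, n))
sky[20, 30] = 1.0
sky[40, 12] = 0.5

# An interferometer measures the sky's 2D Fourier transform only at
# the (u, v) points covered by its baselines, modelled here as a
# random mask over Fourier space.
full_vis = np.fft.fft2(sky)
mask = rng.random((n, n)) < 0.2   # hypothetical sparse uv-coverage
vis = full_vis * mask

# Zero-filled inverse transform gives the "dirty image": the true sky
# convolved with the point-spread function of the uv-coverage.
dirty = np.fft.ifft2(vis).real
print(dirty.shape)
```

The dirty image still peaks at the true source positions but is corrupted by sidelobes of the sampling pattern; this is the point where reconstruction algorithms, from CLEAN to modern regularised methods, take over.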
Organisers: Nicholas Durr (Chair) & Audrey Bowden (Committee member)
Despite significant progress in modern healthcare, state-of-the-art healthcare in first-world environments differs significantly from that in low-resource settings. Constraints on cost, size, usability and environmental stability pose interesting challenges for engineering suitable solutions in these spaces. This session will bring together impactful research that showcases innovative imaging strategies suitable for deployment in low-resource settings such as rural areas, primary care centers and low- and middle-income countries.
Organisers: Gitta Kutyniok (Chair) & Ulugbek Kamilov (Committee member)
Large-scale optimization problems arise in a variety of imaging tasks. Examples include dictionary learning, low-rank matrix recovery, blind deconvolution, and phase retrieval. Conventional approaches for solving many of these optimization problems involve designing algorithms that can effectively leverage a wide variety of structural constraints.
This session will provide an excellent opportunity for the wider signal processing and imaging community to come together and share recent developments, open challenges, and future directions in large-scale optimization methods suitable for analyzing imaging data.
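As one concrete illustration of leveraging a structural constraint, sparsity can be enforced via a proximal gradient method (ISTA). A minimal sketch, where the problem sizes, sparsity pattern and regularization weight are purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy sparse-recovery problem:
#   min_x 0.5*||Ax - y||^2 + lam*||x||_1
# solved with proximal gradient descent (ISTA). Sizes are illustrative.
m, n = 50, 100
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[[5, 37, 80]] = [1.0, -2.0, 1.5]   # hypothetical sparse signal
y = A @ x_true

lam = 0.01
step = 1.0 / np.linalg.norm(A, 2) ** 2   # 1/L, L = Lipschitz const of grad

def soft_threshold(v, t):
    # Proximal operator of t*||.||_1 -- this is where the structural
    # (sparsity) constraint enters the iteration.
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

x = np.zeros(n)
for _ in range(500):
    grad = A.T @ (A @ x - y)              # gradient of the data term
    x = soft_threshold(x - step * grad, step * lam)

print(np.linalg.norm(x - x_true))
```

The same gradient-plus-proximal template carries over to the other structures mentioned above, e.g. singular-value thresholding for low-rank matrix recovery.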
Organisers: Francois Lanusse (Chair) & Jean-Luc Starck (Committee member)
Machine learning has seen significant success in astronomy in recent years, and it has become clear that useful applications of ML require a tight connection to physical modelling. In this session, we will explore several aspects of embedding physics in an ML model, from building hybrid models that merge deep learning and physical models, to using known physical symmetries and equivariances to design dedicated neural architectures.
Organisers: Ivan Dokmanic (Chair) & Philip Schniter (Committee member)
Generative models are enjoying a renaissance in imaging and scientific inference. New ideas include injective models for sampling high-dimensional posteriors, theoretical advances on statistical, approximation-theoretic, and topological questions, generating continuous functions that dovetail with downstream PDE solvers, and creative uses of generative models to probe performance limits of inference systems. The session "Generative Inference and Calibration" brings together prominent researchers spearheading these exciting new directions.
Organisers: Efrat Shimron (Chair) & Florian Knoll (Committee member)
This session will focus on scenarios in which deep learning algorithms developed for medical imaging might produce unreliable results, e.g. due to distribution shifts, bias, hallucinations, or other factors. The session is planned following the increasing interest in studying sensitivities and instabilities of such algorithms. The talks will discuss strategies for exposing algorithmic sensitivities and addressing them, preventing inverse crimes, and increasing algorithmic interpretability. The aim of the session is to raise awareness of the growing problem of unreliable AI performance in the context of medical imaging, suggest guidelines and solutions, and spark community discussion.
Organisers: Thomas Pock (Chair) & Philip Schniter (Committee member)
Expressive regularizers are a key ingredient in solving ill-posed inverse problems in imaging. In recent years, hand-designed regularizers have been gradually replaced by data-driven ones. Provided sufficient training data is available, it is nowadays possible to learn tailored regularizers for a given problem class. This usually yields a substantial increase in reconstruction quality, but learned regularizers are much harder to analyse, and it is correspondingly harder to give guarantees on convergence behaviour, generalisation ability, or reconstruction error. In this session, we will present and discuss the latest methods, techniques and applications in this cutting-edge field of research.
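The classical hand-designed setting can be illustrated with a tiny variational reconstruction. In the sketch below, the regularizer is a fixed finite-difference (smoothness) operator; in the data-driven setting discussed above, this operator — or a deep network in its place — would instead be learned from training examples. All sizes and parameters are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(1)

# Variational reconstruction with an explicit quadratic regularizer:
#   x_hat = argmin_x 0.5*||Ax - y||^2 + 0.5*mu*||Wx||^2
# W is hand-designed here; a learned regularizer would replace it.
m, n = 40, 60                                       # underdetermined: m < n
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.cumsum(rng.standard_normal(n)) * 0.1    # smooth-ish toy signal
y = A @ x_true + 0.01 * rng.standard_normal(m)      # noisy measurements

W = np.eye(n) - np.eye(n, k=1)   # first-difference "smoothness" filter
mu = 0.5

# The objective is quadratic, so the minimiser has a closed form:
#   (A^T A + mu W^T W) x_hat = A^T y
x_hat = np.linalg.solve(A.T @ A + mu * W.T @ W, A.T @ y)

print(np.linalg.norm(x_hat - x_true))
```

For this quadratic choice of regularizer, convergence and error analysis are straightforward; the analysis difficulties described above arise precisely when W is replaced by a nonlinear learned model.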