About
Clinical decision making often draws on multiple diverse data sources, including, but not limited to, images, text reports, electronic health records (EHRs), continuous physiological signals, and genomic profiles; yet most AI systems deployed in healthcare remain unimodal. Integrating diverse medical modalities presents significant challenges, including data heterogeneity, data scarcity, missing or asynchronous modalities, variable data quality, and the lack of standardized frameworks for aligned representation learning. This workshop brings the EurIPS community together to tackle the problem of fusing these heterogeneous inputs into coherent, interpretable patient representations that mirror the holistic reasoning of clinicians. We aim to bring together machine learning researchers, clinicians, and industry partners dedicated to the theory, methods, and translation of multimodal learning in healthcare.
Goals
In this workshop, we aim to:
- Advance methods for learning joint representations from images, text, signals, and genomics.
- Investigate foundation-model pretraining at scale on naturally paired modalities.
- Address robustness, fairness, and missing-modality issues unique to healthcare fusion.
- Foster clinician–ML collaboration and outline translational paths to deployment.
Key Info
Important Dates
Our Call for Papers is now open!
Please note that all deadlines are Anywhere on Earth (AoE).
- Submission Deadline: October 10, 2025
- Acceptance Notification: October 31, 2025
- Workshop Date: December 6/7, 2025 (TBC)
Topics of Interest
Topics include, but are not limited to:
- Vision-language models for radiology
- Temporal alignment of multimodal ICU streams
- Graph and Transformer architectures for patient data fusion
- Cross-modal self-supervised objectives
- Multimodal benchmarks, fairness, and bias analysis
Schedule
| Time | Session |
|---|---|
| 09:00–09:15 | Opening Remarks |
| 09:15–10:00 | Sonali Parbhoo |
| 10:00–10:20 | Spotlight Session I (3 × 7 min) |
| 10:20–11:00 | Coffee & Poster Session I |
| 11:00–11:45 | Patrick Schwab |
| 11:45–12:30 | Panel: "Translating Multimodal ML to the Bedside" |
| 12:30–14:00 | Lunch & Networking |
| 14:00–14:45 | Rajesh Ranganath |
| 14:45–15:15 | Spotlight Session II (3 × 7 min) |
| 15:15–15:45 | Coffee & Poster Session II |
| 15:45–16:30 | Round-table Discussion |
| 16:30–16:50 | Open Q&A |
| 16:50–17:00 | Closing Remarks & Awards |
Confirmed Speakers
- Patrick Schwab - GSK.ai, CH
- Sonali Parbhoo - Imperial College London, UK
- Stephanie Hyland - Microsoft Research, UK
- Rajesh Ranganath - New York University, USA
And more to be announced soon!
Organizers
- Stephan Mandt - Associate Professor, UC Irvine, USA
- Ece Ozkan Elsen - Assistant Professor, University of Basel, CH
- Samuel Ruiperez-Campillo - PhD Student, ETH Zurich, CH
- Thomas Sutter - Postdoctoral Researcher, ETH Zurich, CH
- Julia Vogt - Assistant Professor, ETH Zurich, CH
- Nikita Narayanan - PhD Student, Imperial College London, UK
Contact
- General inquiries: email@domain