MMRL4H@EurIPS 2025

Multimodal Representation Learning for Healthcare

EurIPS 2025 Workshop


About

Clinical decision making often draws on multiple diverse data sources, including but not limited to images, text reports, electronic health records (EHR), continuous physiological signals, and genomic profiles; yet most AI systems deployed in healthcare remain unimodal. Integrating diverse medical modalities presents significant challenges, including data heterogeneity, data scarcity, missing or asynchronous modalities, variable data quality, and the lack of standardized frameworks for aligned representation learning. This workshop brings the EurIPS community together to tackle the problem of fusing these heterogeneous inputs into coherent, interpretable patient representations that mirror the holistic reasoning of clinicians. We aim to bring together machine learning researchers, clinicians, and industry partners dedicated to the theory, methods, and translation of multimodal learning in healthcare.

Goals

In this workshop, we aim to:

  • Advance methods for learning joint representations from images, text, signals, and genomics.
  • Investigate foundation-model pretraining at scale on naturally paired modalities.
  • Address robustness, fairness, and missing-modality issues unique to healthcare fusion.
  • Foster clinician–ML collaboration and outline translational paths to deployment.

Key Info

Important Dates

Our Call for Papers is now open!

Please note that all deadlines are Anywhere on Earth (AoE).

  • Submission Deadline: October 15, 2025
  • Acceptance Notification: October 31, 2025
  • Workshop Date: December 6/7, 2025 (TBC)

Call for Papers

Authors are invited to submit 4-page abstracts on topics relevant to multimodal representation learning in healthcare. These include, but are not limited to, vision-language models for radiology, temporal alignment of multimodal ICU streams, graph and transformer architectures for patient data fusion, cross-modal self-supervised objectives, and multimodal benchmarks with fairness and bias analysis.

Submission

  • Submission site: via OpenReview
  • Format: NeurIPS 2025 template
  • Length: max 4 pages excluding references
  • Review: Double-blind
  • Anonymization: Required; ensure that no names or affiliations appear in any part of the submission, including code.

All accepted papers will be published on the workshop website. Please note that the workshop is non-archival: there will be no formal proceedings.


Preliminary Schedule

Time         | Session
09:00–09:15  | Opening Remarks
09:15–10:00  | Speaker Session I
10:00–10:20  | Spotlight Session I (3 × 7 min)
10:20–11:00  | Coffee & Poster Session I
11:00–11:45  | Speaker Session II
11:45–12:30  | Panel: "Translating Multimodal ML to the Bedside"
12:30–14:00  | Lunch & Networking
14:00–14:45  | Speaker Session III
14:45–15:15  | Spotlight Session II (3 × 7 min)
15:15–15:45  | Coffee & Poster Session II
15:45–16:30  | Round-table Discussion
16:30–16:50  | Open Q&A
16:50–17:00  | Closing Remarks & Awards

Confirmed Speakers

Patrick Schwab

GSK.ai, CH

Sonali Parbhoo

Imperial College London, UK

Stephanie Hyland

Microsoft Research, UK

Rajesh Ranganath

New York University, USA

Bianca Dumitrascu

Columbia University, USA


Organizers

Stephan Mandt

Associate Professor, UC Irvine, USA

Ece Ozkan Elsen

Assistant Professor, University of Basel, CH

Samuel Ruiperez-Campillo

PhD Student, ETH Zurich, CH

Thomas Sutter

Postdoctoral Researcher, ETH Zurich, CH

Julia Vogt

Associate Professor, ETH Zurich, CH

Nikita Narayanan

PhD Student, Imperial College London, UK

Contact