
PhD Student in Machine Learning

Driven to solve challenging healthcare problems with machine learning

Recent Publications

A Foundation Model for Intensive Care: Unlocking Generalization across Tasks and Domains at Scale (Spotlight at ML4H Findings Track and invited talk for AI in Medicine at BIFOLD Institute)
Manuel Burger, Daphné Chopard, Malte Londschien, Fedor Sergeev, Hugo Yèche, Rita Kuznetsova, Martin Faltys, Eike Gerdes, Polina Leshetkina, Peter Bühlmann, Gunnar Rätsch
Working Paper on medRxiv
Towards Foundation Models for Critical Care Time Series (Best Paper Award)
Manuel Burger*, Fedor Sergeev*, Malte Londschien, Daphné Chopard, Hugo Yèche, Eike Gerdes, Polina Leshetkina, Alexander Morgenroth, Zeynep Babür, Jasmina Bogojeska, Martin Faltys, Rita Kuznetsova, Gunnar Rätsch
AIM-FM @ NeurIPS 2024
Multi-Modal Contrastive Learning for Online Clinical Time-Series Applications
Fabian Baldenweg, Manuel Burger, Gunnar Rätsch, Rita Kuznetsova
TS4H @ ICLR 2024

Recent Posts

Best Paper Award at AIM-FM for 'Towards Foundation Models for Critical Care Time Series'

Abstract

Notable progress has been made in generalist medical large language models across various healthcare areas. However, large-scale modeling of in-hospital time series data, such as vital signs, lab results, and treatments in critical care, remains underexplored. Existing datasets are relatively small, but combining them can enhance patient diversity and improve model robustness. To effectively utilize these combined datasets for large-scale modeling, it is essential to address the distribution shifts caused by varying treatment policies, which necessitates harmonizing treatment variables across the different datasets. This work aims to establish a foundation for training large-scale multivariate time series models on critical care data and to provide a benchmark for machine learning models in transfer learning across hospitals, in order to study and address distribution-shift challenges. We introduce a harmonized dataset for sequence modeling and transfer learning research, representing the first large-scale collection to include core treatment variables. Future plans involve expanding this dataset to support further advancements in transfer learning and the development of scalable, generalizable models for critical healthcare applications.
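The harmonization step described above can be pictured as mapping each hospital's local treatment columns onto a shared schema with consistent units. The sketch below is purely illustrative: the column names, drugs, and conversion factors are hypothetical examples, not the variables or schema used in the paper.

```python
# Illustrative sketch of treatment-variable harmonization across ICU
# datasets. All column names, drug names, and unit conversions here are
# hypothetical, chosen only to show the idea of a shared schema.

# Map each dataset's local treatment column to a canonical name plus a
# multiplicative factor converting the local unit to the shared unit.
SCHEMAS = {
    "hospital_a": {
        "noradrenaline_rate_ug_min": ("norepinephrine_mcg_min", 1.0),
        "fluid_in_ml": ("fluid_intake_ml", 1.0),
    },
    "hospital_b": {
        "norepi_mg_h": ("norepinephrine_mcg_min", 1000.0 / 60.0),  # mg/h -> mcg/min
        "fluids_l": ("fluid_intake_ml", 1000.0),                   # L -> mL
    },
}

def harmonize(record: dict, source: str) -> dict:
    """Rename treatment columns to the shared schema and rescale units."""
    schema = SCHEMAS[source]
    out = {}
    for column, value in record.items():
        if column in schema:
            canonical, factor = schema[column]
            out[canonical] = value * factor
        # Columns without a mapping are dropped to keep the shared schema strict.
    return out

# The same clinical state recorded under two local conventions maps to
# one harmonized row, so models can train across both hospitals.
row_a = harmonize({"noradrenaline_rate_ug_min": 12.0, "fluid_in_ml": 500.0}, "hospital_a")
row_b = harmonize({"norepi_mg_h": 0.72, "fluids_l": 0.5}, "hospital_b")
```

With a mapping like this in place, records from different hospitals land in one column space, which is the precondition for studying distribution shift due to treatment policy rather than due to mismatched encodings.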

Multi-modal Graph Learning over UMLS Knowledge Graphs

Abstract

Clinicians are increasingly looking to machine learning for insight into patient trajectories. We propose a novel approach named Multi-Modal UMLS Graph Learning (MMUGL) for learning meaningful representations of medical concepts using graph neural networks over knowledge graphs based on the Unified Medical Language System (UMLS). These representations are aggregated to represent entire patient visits and then fed into a sequence model to perform predictions at the granularity of multiple hospital visits of a patient. We improve performance by incorporating prior medical knowledge and by considering multiple modalities. We compare our method to existing architectures proposed to learn representations at different granularities on the MIMIC-III dataset and show that our approach outperforms them. The results demonstrate the significance of multi-modal medical concept representations grounded in prior medical knowledge.
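The pipeline the abstract describes has three stages: message passing over a concept knowledge graph, pooling concepts into a visit embedding, and a sequence model over visits. A toy NumPy sketch of that shape, assuming a tiny made-up graph and a simple recurrent stand-in (an exponential moving average) in place of the paper's actual GNN and sequence architectures:

```python
import numpy as np

# Toy sketch of the three-stage pipeline: (1) message passing over a
# small concept graph, (2) mean-pooling the concepts of one visit into a
# visit embedding, (3) a recurrent summary across visits. The graph,
# embeddings, and recurrence are illustrative stand-ins, not MMUGL's
# actual GNN layers or sequence model.

rng = np.random.default_rng(0)
n_concepts, dim = 5, 8
emb = rng.normal(size=(n_concepts, dim))       # initial concept embeddings
adj = np.eye(n_concepts)                       # adjacency with self-loops
for i, j in [(0, 1), (1, 2), (2, 3), (3, 4)]:  # toy UMLS-style concept edges
    adj[i, j] = adj[j, i] = 1.0

def gnn_layer(h: np.ndarray, a: np.ndarray) -> np.ndarray:
    """One round of mean-neighbor message passing with a ReLU."""
    msg = (a @ h) / a.sum(axis=1, keepdims=True)
    return np.maximum(msg, 0.0)

h = gnn_layer(emb, adj)  # knowledge-graph-informed concept embeddings

def visit_embedding(concept_ids: list) -> np.ndarray:
    """Aggregate the concepts recorded in one visit by mean pooling."""
    return h[concept_ids].mean(axis=0)

def patient_representation(visits: list, decay: float = 0.5) -> np.ndarray:
    """Recurrent stand-in: exponential moving average over visit embeddings."""
    state = np.zeros(dim)
    for visit in visits:
        state = decay * state + (1 - decay) * visit_embedding(visit)
    return state

# Three hospital visits, each recorded as a set of concept indices.
rep = patient_representation([[0, 1], [1, 2, 3], [4]])
```

In the real system each stage is learned end-to-end (GNN layers, the aggregation, and the sequence model), but the data flow from concepts to visits to a patient-level prediction follows this shape.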