
Applying state-of-the-art segmentation models to new medical imaging datasets typically requires extensive manual optimization and expert knowledge. We develop adaptive and interactive AI systems that remove this barrier — from fully automated configuration to real-time and language-guided interaction.
nnU-Net is a self-configuring segmentation framework that automatically adapts to new datasets and delivers out-of-the-box 2D and 3D models, making state-of-the-art medical image segmentation broadly accessible (8,000 citations and 300+ daily downloads).
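To make "out of the box" concrete, the minimal sketch below drives the standard nnU-Net v2 command-line entry points from Python for a new dataset. The command names and flags follow the publicly documented nnU-Net v2 interface; the dataset ID (501) and the paths are placeholders and may need adjusting to the installed version.

```python
import subprocess

DATASET_ID = "501"  # placeholder: ID of a dataset prepared in nnU-Net's expected folder layout

# 1) Fingerprint the dataset and let nnU-Net self-configure preprocessing and network plans.
subprocess.run(
    ["nnUNetv2_plan_and_preprocess", "-d", DATASET_ID, "--verify_dataset_integrity"],
    check=True,
)

# 2) Train the automatically configured 3D full-resolution model (fold 0 of the default 5-fold CV).
subprocess.run(["nnUNetv2_train", DATASET_ID, "3d_fullres", "0"], check=True)

# 3) Segment new cases with the trained model.
subprocess.run(
    [
        "nnUNetv2_predict",
        "-i", "/data/new_cases",     # folder with images following nnU-Net's naming convention
        "-o", "/data/predictions",   # output folder for the predicted segmentation masks
        "-d", DATASET_ID,
        "-c", "3d_fullres",
        "-f", "0",
    ],
    check=True,
)
```

Note that no architecture or hyperparameter choices appear in the script: patch size, network topology, and training schedule are derived automatically from the dataset fingerprint.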
nnDetection extends this automation to 3D object detection, self-configuring for arbitrary volumetric tasks and achieving state-of-the-art performance without manual intervention.
nnInteractive enables expert-guided 3D segmentation within tools such as Napari and MITK, translating intuitive prompts into accurate volumetric masks across 120+ datasets.
VoxTell brings free-text–guided universal 3D segmentation, mapping clinical language directly to volumetric masks and generalizing across modalities and unseen concepts.
The Human Radiome Project (THRP) develops an AI-powered foundation model to unlock the full complexity of 3D radiological imaging. Funded by the Helmholtz Association’s Foundation Model Initiative (HFMI), THRP aims to build a digital framework that integrates structural, functional, and pathological imaging information.
THRP trains a self-supervised foundation model on over 3 million 3D image volumes (including CT and MRI). The dataset combines clinical repositories at DKFZ and partner hospitals in Bonn, Heidelberg, and Basel with population studies and public datasets, enabling robust, generalizable representations.
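THRP's concrete pretraining objective and architecture are beyond the scope of this summary. To illustrate what self-supervised learning on unlabeled 3D volumes looks like, the toy PyTorch sketch below uses a generic masked-reconstruction objective: random voxel blocks are hidden and the network learns to reconstruct them from the surrounding context. All names, shapes, and the choice of objective are assumptions made for the example, not THRP specifics.

```python
import torch
import torch.nn as nn

class Tiny3DEncoderDecoder(nn.Module):
    """Toy 3D encoder-decoder; THRP's actual architecture is not specified here."""
    def __init__(self, channels: int = 16):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv3d(1, channels, kernel_size=3, padding=1), nn.GELU(),
            nn.Conv3d(channels, channels, kernel_size=3, padding=1), nn.GELU(),
        )
        self.decoder = nn.Conv3d(channels, 1, kernel_size=3, padding=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.decoder(self.encoder(x))

def masked_reconstruction_step(model, volume, mask_ratio=0.6):
    """One self-supervised step: hide random voxel blocks, reconstruct, score only hidden voxels."""
    b, _, d, h, w = volume.shape
    # Block mask drawn at 1/8 resolution, then upsampled to voxel resolution.
    coarse = (torch.rand(b, 1, d // 8, h // 8, w // 8, device=volume.device) < mask_ratio).float()
    mask = nn.functional.interpolate(coarse, size=(d, h, w), mode="nearest")
    pred = model(volume * (1.0 - mask))            # the model only sees the unmasked context
    loss = ((pred - volume) ** 2 * mask).sum() / mask.sum().clamp(min=1.0)
    return loss

# Toy usage with a random "volume"; real pretraining would iterate over millions of CT/MRI volumes.
model = Tiny3DEncoderDecoder()
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
volume = torch.rand(2, 1, 64, 64, 64)              # (batch, channel, depth, height, width)
loss = masked_reconstruction_step(model, volume)
loss.backward()
optimizer.step()
```

Because the supervision signal comes from the images themselves, a setup of this kind scales to large unannotated collections such as the one described above.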
As medical imaging demand grows and clinical expertise becomes scarce, THRP enables scalable AI for diverse radiology tasks without extensive task-specific annotation. Long term, it aims to accelerate research and advance personalized healthcare by integrating imaging with other data modalities.

Rare genetic variants can have large effects on human health, but their impacts are difficult to detect with standard methods. We developed DeepRVAT, a rare variant analysis framework for modeling disease risk and biological traits through the integration of rare variant annotations.
Using deep set networks, DeepRVAT learns gene-level impairment scores from variant annotations (e.g., conservation and splicing-related scores), boosting power for association testing in large biobank datasets. Compared with existing approaches, it increases gene discovery and better identifies individuals at high genetic risk in population-scale studies, supporting deeper insights into disease biology and more informative genetic risk stratification.
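The deep set idea can be illustrated with a small, purely hypothetical PyTorch sketch (not DeepRVAT's implementation): each rare variant in a gene is represented by a vector of annotations, a shared network embeds every variant, and a permutation-invariant pooling collapses the variable-size variant set into a single gene-level impairment score that can feed into association testing or risk models. The module names, annotation dimensionality, and max pooling are assumptions made for the example.

```python
import torch
import torch.nn as nn

class GeneImpairmentDeepSet(nn.Module):
    """Toy deep set: variant annotations -> gene-level impairment score (illustrative only)."""

    def __init__(self, n_annotations: int = 8, hidden: int = 32):
        super().__init__()
        # phi: shared embedding applied independently to each variant's annotation vector
        self.phi = nn.Sequential(
            nn.Linear(n_annotations, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        # rho: maps the pooled set representation to a scalar gene impairment score
        self.rho = nn.Sequential(nn.Linear(hidden, hidden), nn.ReLU(), nn.Linear(hidden, 1))

    def forward(self, variant_annotations: torch.Tensor) -> torch.Tensor:
        # variant_annotations: (n_variants, n_annotations) for one gene in one individual
        embedded = self.phi(variant_annotations)   # (n_variants, hidden)
        pooled = embedded.max(dim=0).values        # permutation-invariant aggregation over variants
        return self.rho(pooled).squeeze(-1)        # scalar impairment score

# Example: an individual carrying 3 rare variants in a gene, each described by 8 annotations
# (e.g. conservation and splicing-related scores).
model = GeneImpairmentDeepSet(n_annotations=8)
annotations = torch.rand(3, 8)
impairment = model(annotations)
print(float(impairment))  # gene-level score, usable as a learned burden in association tests
```

Because the aggregation is permutation-invariant, the same network handles genes with any number of rare variants, which is what allows the scores to be learned jointly across all genes in a biobank cohort.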
