Published in: "Multimodal Learning for Clinical Decision Support" Workshop of the International Conference on Medical Image Computing and Computer Assisted Intervention (MICCAI) 2020
Tumor segmentation in multimodal medical images has seen
a growing trend towards deep learning-based methods. Typically, studies
dealing with this topic fuse multimodal image data to improve the tumor
segmentation contour for a single imaging modality. However, they do
not take into account that tumor characteristics are emphasized differently by each modality, which affects the tumor delineation. Thus, the tumor segmentation is modality- and task-dependent. This is especially the case for soft tissue sarcomas, where, due to necrotic tumor tissue, the segmentation differs vastly between modalities. Closing this gap, we develop a modality-specific sarcoma segmentation model that utilizes multimodal image data to improve the tumor delineation on each individual modality. We propose a simultaneous co-segmentation method, which enables multimodal feature learning through modality-specific encoder and decoder branches, and the use of resource-efficient densely connected convolutional layers.
We further conduct experiments to analyze how different input modalities and encoder-decoder fusion strategies affect the segmentation result. We demonstrate the effectiveness of our approach on public soft tissue sarcoma data, which comprises MRI (T1 and T2 sequences) and
PET/CT scans. The results show that our multimodal co-segmentation model provides better modality-specific tumor segmentation than models using only the PET or MRI (T1 and T2) scan as input.
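The core architectural idea, modality-specific encoder and decoder branches joined by a shared fusion step so that each modality receives its own segmentation mask, can be sketched as follows. This is an illustrative toy in plain NumPy, not the authors' implementation: the real model uses densely connected convolutional layers and learned fusion, and the function names (`encode`, `fuse`, `decode`, `co_segment`) are hypothetical.

```python
import numpy as np

def encode(x):
    # Toy modality-specific encoder: 2x2 mean pooling halves the resolution.
    # (The paper's model uses densely connected convolutional layers instead.)
    h, w = x.shape
    return x.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def fuse(features):
    # Fuse modality-specific features; here, simple stacking stands in for
    # the learned encoder-decoder fusion strategies compared in the paper.
    return np.stack(features, axis=0)  # (num_modalities, H/2, W/2)

def decode(fused, modality_idx):
    # Toy modality-specific decoder: mix shared (cross-modal) features with
    # this modality's own features, upsample, and threshold to a binary mask.
    mixed = 0.5 * fused.mean(axis=0) + 0.5 * fused[modality_idx]
    up = mixed.repeat(2, axis=0).repeat(2, axis=1)
    return (up > up.mean()).astype(np.uint8)

def co_segment(scans):
    # One mask per input modality, produced simultaneously from shared features.
    feats = [encode(x) for x in scans]
    fused = fuse(feats)
    return [decode(fused, i) for i in range(len(scans))]

# Usage with three dummy "modalities" (e.g., PET, T1, T2) of size 8x8:
masks = co_segment([np.random.rand(8, 8) for _ in range(3)])
```

Each output mask keeps the input resolution of its own modality, which mirrors the paper's goal of modality-specific delineations rather than a single fused contour.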
Keywords: Tumor Co-segmentation; Multimodality; Deep Learning