Unsupervised domain adaptive medical segmentation network based on contrastive learning

Abstract
Accurate organ segmentation from magnetic resonance imaging (MRI) or computed tomography (CT) images is essential for surgical planning and decision-making. Traditional fully supervised deep learning methods often exhibit a significant decline in performance when applied to datasets that differ from the training data, limiting their clinical applicability. This study proposes a novel segmentation method based on unsupervised domain adaptation, aiming to improve cross-domain segmentation performance without requiring ground truth labels in the target domain. Specifically, our method trains the network with labeled source images and unlabeled target images, introducing a bidirectional feature-prototype contrastive loss that aligns features across domains by minimizing within-class variation and maximizing between-class variation. To further improve model performance, we propose a prototype-guided pseudo-label fusion module that generates high-quality pseudo-labels for the unlabeled target images by fusing predictions guided by the domain prototypes. Experimental results show that our method outperforms other unsupervised domain adaptation segmentation approaches, achieving state-of-the-art performance. Code is available at: https://github.com/WANGSIQII/UDA.git.
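The feature-prototype contrastive loss described in the abstract can be sketched as an InfoNCE-style objective: each feature is pulled toward the prototype of its own class and pushed away from the prototypes of other classes. The sketch below is illustrative only and assumes L2-normalized feature vectors, one prototype vector per class, and a temperature hyperparameter (`temperature`); none of these details are specified in the abstract.

```python
import numpy as np

def prototype_contrastive_loss(features, labels, prototypes, temperature=0.1):
    """Illustrative InfoNCE-style feature-prototype contrastive loss.

    features:   (N, D) array of feature vectors (e.g. per-pixel embeddings)
    labels:     (N,)   integer class index for each feature
    prototypes: (C, D) array with one prototype vector per class
    """
    # L2-normalize so the dot product is a cosine similarity
    f = features / np.linalg.norm(features, axis=1, keepdims=True)
    p = prototypes / np.linalg.norm(prototypes, axis=1, keepdims=True)
    logits = f @ p.T / temperature                    # (N, C) similarities
    logits -= logits.max(axis=1, keepdims=True)       # numerical stability
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    # negative log-likelihood of each feature's own class prototype
    return -log_prob[np.arange(len(labels)), labels].mean()
```

A "bidirectional" variant could evaluate this loss both ways, e.g. source-domain features against target-domain prototypes and target features against source prototypes, and sum the two terms; how the paper combines the directions is not stated in the abstract.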
| Original language | English |
|---|---|
| Article number | e70210 |
| Number of pages | 11 |
| Journal | International Journal of Imaging Systems and Technology |
| Volume | 35 |
| Issue number | 6 |
| Publication status | Published - Oct 2025 |
Keywords
- contrastive learning
- medical segmentation
- unsupervised domain adaptation