TY - JOUR
T1 - Unsupervised domain adaptive medical segmentation network based on contrastive learning
AU - Wang, Siqi
AU - Wu, Hao
AU - Yu, Xiaosheng
AU - Wu, Chengdong
PY - 2025/10
Y1 - 2025/10
N2 - Accurate organ segmentation from magnetic resonance imaging (MRI) or computed tomography (CT) images is essential for surgical planning and decision-making. Traditional fully supervised deep learning methods often suffer a significant performance decline when applied to datasets that differ from the training data, limiting their clinical applicability. This study proposes a novel segmentation method based on unsupervised domain adaptation, aiming to improve cross-domain segmentation performance without requiring ground-truth labels in the target domain. Specifically, our method trains the network with labeled source images and unlabeled target images, introducing a bidirectional feature-prototype contrastive loss that aligns features across domains by minimizing within-class variation and maximizing between-class variation. To further improve performance, we propose a prototype-guided pseudo-label fusion module that leverages the domain prototypes to generate high-quality pseudo-labels for the unlabeled target images. Experimental results show that our method outperforms other unsupervised domain adaptation segmentation approaches, achieving state-of-the-art performance. Code is available at: https://github.com/WANGSIQII/UDA.git.
KW - contrastive learning
KW - medical segmentation
KW - unsupervised domain adaptation
UR - http://www.scopus.com/inward/record.url?scp=105018464996&partnerID=8YFLogxK
UR - https://doi.org/10.1002/ima.70210
U2 - 10.1002/ima.70210
DO - 10.1002/ima.70210
M3 - Article
AN - SCOPUS:105018464996
SN - 0899-9457
VL - 35
JO - International Journal of Imaging Systems and Technology
JF - International Journal of Imaging Systems and Technology
IS - 6
M1 - e70210
ER -