Abstract
Accurate liver and tumor segmentation from CT images is essential for cancer diagnosis, treatment planning, and response assessment. However, manual segmentation is labor-intensive and variable, while standard automated models lack the flexibility to adapt to diverse clinical needs or inherent image uncertainties. To bridge this gap, we introduce User-Preference Alignment with Uncertainty-Aware Interactive Rectification (UAIR), a novel framework designed for efficient and adaptive segmentation. Instead of requiring laborious pixel-level corrections, UAIR presents the clinician with a small, curated set of diverse segmentation candidates generated by quantifying model uncertainty. The user simply selects the most suitable option, allowing the framework to iteratively refine its results and align with specific clinical preferences. This selection-based approach drastically reduces the cost of human interaction. We validated UAIR on a large-scale, multi-center CT dataset, demonstrating superior accuracy over manual positional prompting (DSC 0.776 versus 0.685) with lower prompting effort. UAIR provides a clinically viable solution that integrates seamless human guidance, enabling rapid and robust segmentation for downstream quantitative analysis.
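The selection-based loop described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the candidate-generation strategy (thresholding the mean of stochastic forward passes at a spread of cutoffs spanning the uncertainty) and all function names are illustrative assumptions.

```python
import numpy as np

def candidate_masks(prob_maps, k=3):
    """Turn a stack of stochastic probability maps into k diverse binary
    candidates by thresholding the mean map at a spread of cutoffs.
    (Hypothetical scheme for illustration only.)"""
    mean = prob_maps.mean(axis=0)
    std = prob_maps.std(axis=0)  # voxel-wise uncertainty estimate
    # Lower thresholds give more inclusive masks, higher thresholds more
    # conservative ones, spanning the plausible segmentations.
    thresholds = np.linspace(0.35, 0.65, k)
    return [(mean > t).astype(np.uint8) for t in thresholds], std

def interactive_round(prob_maps, choose):
    """One selection round: generate candidates, apply the user's pick.
    `choose` stands in for the clinician's selection."""
    cands, _ = candidate_masks(prob_maps)
    return cands[choose(cands)]

# Toy demo: 5 stochastic forward passes over a 4x4 "image".
rng = np.random.default_rng(0)
probs = rng.uniform(0.3, 0.7, size=(5, 4, 4))
mask = interactive_round(probs, choose=lambda cands: 1)  # user picks option 1
```

In the real framework the probability maps would come from a segmentation network's uncertainty estimates and the selection would iterate across rounds; the sketch only shows why a single choice per round is far cheaper than pixel-level correction.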
Data availability
All imaging datasets analyzed in this study are publicly accessible. The AbdomenAtlas dataset is available at https://github.com/MrGiovanni/AbdomenAtlas. Processed or derived data supporting the findings of this study are available from the corresponding author upon reasonable request.
Acknowledgements
This work was supported by the Natural Science Foundation of Hubei Province (Grant No. 2025AFD774). The authors also thank Yuchuan Jiang, whose previously published work provided important insights and methodological guidance for the present study.
Author information
Authors and Affiliations
Contributions
G.Z., C.G., and Y.W. contributed equally to this work, having full access to all study data and assuming responsibility for the integrity and accuracy of the analyses (Validation and Formal analysis). G.Z., C.G., and G.H. conceptualized the study, designed the methodology, and participated in securing research funding (Conceptualization, Methodology, and Funding acquisition). Y.W. and X.Z. carried out data acquisition, curation, and investigation (Investigation, Data curation) and provided key resources, instruments, and technical support (Resources and Software). Z.W. drafted the initial manuscript and generated visualizations (Writing—Original Draft and Visualization). T.C. and B.Y. supervised the project, coordinated collaborations, and ensured administrative support (Supervision and Project administration). All authors contributed to reviewing and revising the manuscript critically for important intellectual content (Writing—Review & Editing) and approved the final version for submission.
Corresponding authors
Ethics declarations
Competing interests
The authors declare no competing interests.
Additional information
Publisher’s note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License, which permits any non-commercial use, sharing, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if you modified the licensed material. You do not have permission under this licence to share adapted material derived from this article or parts of it. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by-nc-nd/4.0/.
About this article
Cite this article
Zhao, G., Wang, Y., Gong, C. et al. User-preference alignment with uncertainty-aware interactive rectification for liver organ and tumor segmentation and analysis from CT images. npj Digit. Med. (2026). https://doi.org/10.1038/s41746-026-02544-2
Received:
Accepted:
Published:
DOI: https://doi.org/10.1038/s41746-026-02544-2