Pretraining powerful deep learning models requires large, comprehensive training datasets, which are often unavailable for medical imaging. In response, the universal biomedical pretrained model (UMedPT), a foundational model, was developed using multiple small and medium-sized datasets. UMedPT reduced the amount of training data required to learn new target tasks by at least 50%.
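The core idea behind this kind of multi-task pretraining — a single shared encoder whose weights receive gradients from many task-specific heads — can be illustrated with a minimal sketch. The code below is purely conceptual: the toy "classify" and "segment" heads, layer sizes, and plain squared-error updates are illustrative assumptions, not the architecture or training procedure of UMedPT itself (which uses a Swin Transformer backbone and task-appropriate losses).

```python
import numpy as np

rng = np.random.default_rng(0)

# Shared encoder weights (analogous to a shared backbone in multi-task pretraining).
# All names and shapes are illustrative, not taken from the paper.
W_enc = rng.normal(scale=0.1, size=(16, 8))   # maps 16-dim "images" to 8-dim features
heads = {                                      # one lightweight head per training task
    "classify": rng.normal(scale=0.1, size=(8, 3)),
    "segment":  rng.normal(scale=0.1, size=(8, 16)),
}

def forward(x, task):
    """Shared encoder followed by a task-specific head."""
    z = np.maximum(x @ W_enc, 0.0)            # ReLU features from the shared encoder
    return z @ heads[task]

def mse_step(x, y, task, lr=0.01):
    """One gradient step on a squared-error loss; updates encoder AND head."""
    global W_enc
    z = np.maximum(x @ W_enc, 0.0)
    pred = z @ heads[task]
    err = pred - y                             # dL/dpred (up to a constant factor)
    grad_head = z.T @ err
    grad_z = err @ heads[task].T
    grad_z[z <= 0] = 0.0                       # ReLU gradient mask
    grad_enc = x.T @ grad_z
    heads[task] -= lr * grad_head
    W_enc -= lr * grad_enc                     # shared weights learn from every task

# Interleave batches from different tasks, as in multi-task pretraining:
# the encoder accumulates knowledge from all datasets jointly.
for step in range(100):
    task = ("classify", "segment")[step % 2]
    x = rng.normal(size=(4, 16))
    y = rng.normal(size=(4, heads[task].shape[1]))
    mse_step(x, y, task)
```

Because the encoder is updated by every task, it learns representations that transfer; a new target task can then reuse `W_enc` and train only a small new head, which is one intuition for why pretraining of this kind reduces downstream data requirements.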
References
Deng, J. et al. ImageNet: A large-scale hierarchical image database. In IEEE Conference on Computer Vision and Pattern Recognition 248–255 (IEEE, 2009). This article introduces the ImageNet database, which is commonly used for pretraining in medical imaging.
Liu, Z. et al. Swin transformer: Hierarchical vision transformer using shifted windows. In IEEE/CVF International Conference on Computer Vision (ICCV) 9992–10002 (IEEE/CVF, 2021). This article introduces the Swin Transformer, which is an integral part of UMedPT.
Ronneberger, O. et al. U-Net: Convolutional networks for biomedical image segmentation. In Medical Image Computing and Computer-Assisted Intervention – MICCAI 2015 (eds Navab, N. et al.) https://doi.org/10.1007/978-3-319-24574-4_28 (Springer, 2015). This paper inspired us to use an encoder/decoder architecture with skip-connections for UMedPT.
Tian, Z. et al. FCOS: A simple and strong anchor-free object detector. IEEE Trans. Pattern Anal. Mach. Intell. 44, 1922–1933 (2022). This paper introduces the method used in UMedPT for object detection labels.
This is a summary of: Schäfer, R. et al. Overcoming data scarcity in biomedical imaging with a foundational multi-task model. Nat. Comput. Sci. https://doi.org/10.1038/s43588-024-00662-z (2024).
Cite this article
A multi-task learning strategy to pretrain models for medical image analysis. Nat Comput Sci 4, 479–480 (2024). https://doi.org/10.1038/s43588-024-00666-9