Abstract
Accurate and automated segmentation of abdominal organs and tumors is of great importance in clinical practice. Because manual annotation is time- and labor-consuming, especially in the highly specialized medical domain, partially annotated and unlabeled datasets are far more common in practical applications than fully labeled ones. CNN-based methods have driven much of the progress in medical image segmentation, but previous CNN models were mostly trained on fully labeled datasets, so methods that can learn from partially labeled data are increasingly important. For FLARE23, we design a model that combines a lightweight nnU-Net with a target adaptive loss (TAL) to obtain segmentation results efficiently while making full use of the partially labeled dataset. Our method achieved average DSC scores of 86.40% and 19.41% for organs and lesions, respectively, on the validation set; the average running time and area under the GPU memory-time curve are 25.34 s and 23018 MB, respectively.
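The abstract does not detail the TAL formulation, but a common way to train on partially labeled data (as in marginal-loss approaches for partially supervised segmentation) is to fold the predicted probability mass of classes that are unannotated in a given dataset into the background before computing cross-entropy, so missing annotations are never penalized as errors. The NumPy sketch below is a hypothetical illustration of that idea, not the authors' implementation; the function name `target_adaptive_ce`, its arguments, and the merging rule are assumptions.

```python
import numpy as np

def target_adaptive_ce(logits, labels, labeled_classes):
    """Cross-entropy over a partially labeled volume: probability mass of
    classes NOT annotated in this dataset is merged into background (class 0).

    logits:  (N, C) raw per-voxel scores
    labels:  (N,)   ground-truth indices (background or a labeled class)
    labeled_classes: set of foreground class indices annotated in this dataset
    """
    # numerically stable softmax over all C classes
    e = np.exp(logits - logits.max(axis=1, keepdims=True))
    probs = e / e.sum(axis=1, keepdims=True)

    C = logits.shape[1]
    unlabeled = [c for c in range(1, C) if c not in labeled_classes]

    # merge unlabeled-class probability mass into the background class
    merged = probs.copy()
    merged[:, 0] += merged[:, unlabeled].sum(axis=1)
    merged[:, unlabeled] = 0.0

    # standard cross-entropy on the merged distribution
    return -np.mean(np.log(merged[np.arange(len(labels)), labels] + 1e-12))
```

With this merging, a voxel whose prediction falls on an organ that happens to be unannotated in the current dataset is scored as background rather than as a mistake, which is what lets a single network train across datasets with different label sets.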
References
Bilic, P., et al.: The liver tumor segmentation benchmark (LiTS). Med. Image Anal. 84, 102680 (2023)
Chen, S., Ma, K., Zheng, Y.: Med3D: transfer learning for 3D medical image analysis. arXiv preprint arXiv:1904.00625 (2019)
Clark, K., et al.: The cancer imaging archive (TCIA): maintaining and operating a public information repository. J. Digit. Imaging 26(6), 1045–1057 (2013)
Fang, X., Yan, P.: Multi-organ segmentation over partially labeled datasets with multi-scale feature abstraction. IEEE Trans. Med. Imaging 39(11), 3619–3629 (2020)
Gatidis, S., et al.: The autoPET challenge: towards fully automated lesion segmentation in oncologic PET/CT imaging. Preprint at Research Square (Nature Portfolio) (2023). https://doi.org/10.21203/rs.3.rs-2572595/v1
Gatidis, S., et al.: A whole-body FDG-PET/CT dataset with manually annotated tumor lesions. Sci. Data 9(1), 601 (2022)
Heller, N., et al.: The state of the art in kidney and kidney tumor segmentation in contrast-enhanced CT imaging: results of the KiTS19 challenge. Med. Image Anal. 67, 101821 (2021)
Heller, N., et al.: An international challenge to use artificial intelligence to define the state-of-the-art in kidney and kidney tumor segmentation in CT imaging. Proc. Am. Soc. Clin. Oncol. 38(6), 626–626 (2020)
Huang, Z., et al.: Revisiting nnU-Net for iterative pseudo labeling and efficient sliding window inference. In: Ma, J., Wang, B. (eds.) FLARE 2022. LNCS, vol. 13816, pp. 178–189. Springer, Cham (2022). https://doi.org/10.1007/978-3-031-23911-3_16
Isensee, F., Jaeger, P.F., Kohl, S.A., Petersen, J., Maier-Hein, K.H.: nnU-Net: a self-configuring method for deep learning-based biomedical image segmentation. Nat. Methods 18(2), 203–211 (2021)
Liu, H., et al.: COSST: multi-organ segmentation with partially labeled datasets using comprehensive supervisions and self-training. IEEE Trans. Med. Imaging (2024)
Ma, J., He, Y., Li, F., Han, L., You, C., Wang, B.: Segment anything in medical images. Nat. Commun. 15(1), 654 (2024)
Ma, J., et al.: Fast and low-GPU-memory abdomen CT organ segmentation: the FLARE challenge. Med. Image Anal. 82, 102616 (2022)
Ma, J., et al.: Unleashing the strengths of unlabeled data in pan-cancer abdominal organ quantification: the FLARE22 challenge. arXiv preprint arXiv:2308.05862 (2023)
Ma, J., et al.: AbdomenCT-1K: is abdominal organ segmentation a solved problem? IEEE Trans. Pattern Anal. Mach. Intell. 44(10), 6695–6714 (2022)
Pavao, A., et al.: CodaLab competitions: an open source platform to organize scientific challenges. J. Mach. Learn. Res. 24(198), 1–6 (2023)
Shi, G., Xiao, L., Chen, Y., Zhou, S.K.: Marginal loss and exclusion loss for partially supervised multi-organ segmentation. Med. Image Anal. 70, 101979 (2021)
Simpson, A.L., et al.: A large annotated medical image dataset for the development and evaluation of segmentation algorithms. arXiv preprint arXiv:1902.09063 (2019)
Wasserthal, J., et al.: TotalSegmentator: robust segmentation of 104 anatomic structures in CT images. Radiol. Artif. Intell. 5(5), e230024 (2023)
Yushkevich, P.A., Gao, Y., Gerig, G.: ITK-SNAP: an interactive tool for semi-automatic segmentation of multi-modality biomedical images. In: Annual International Conference of the IEEE Engineering in Medicine and Biology Society, pp. 3342–3345 (2016)
Zhang, J., Xie, Y., Xia, Y., Shen, C.: DoDNet: learning to segment multi-organ and tumors from multiple partially labeled datasets. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1195–1204 (2021)
Acknowledgements
This project was funded by the National Natural Science Foundation of China (grant 82090052). The authors declare that the segmentation method they implemented for participation in the FLARE 2023 challenge used no pre-trained models and no datasets other than those provided by the organizers. The proposed solution is fully automatic, without any manual intervention. We thank all the data owners for making the CT scans publicly available and CodaLab [16] for hosting the challenge platform.
Copyright information
© 2024 The Author(s), under exclusive license to Springer Nature Switzerland AG
Cite this paper
Liu, T., Zhang, X., Han, M., Zhang, L. (2024). A Lightweight nnU-Net Combined with Target Adaptive Loss for Organs and Tumors Segmentation. In: Ma, J., Wang, B. (eds) Fast, Low-resource, and Accurate Organ and Pan-cancer Segmentation in Abdomen CT. FLARE 2023. Lecture Notes in Computer Science, vol 14544. Springer, Cham. https://doi.org/10.1007/978-3-031-58776-4_14
Publisher Name: Springer, Cham
Print ISBN: 978-3-031-58775-7
Online ISBN: 978-3-031-58776-4