A Lightweight nnU-Net Combined with Target Adaptive Loss for Organs and Tumors Segmentation

  • Conference paper
Fast, Low-resource, and Accurate Organ and Pan-cancer Segmentation in Abdomen CT (FLARE 2023)

Abstract

Accurate, automated segmentation of abdominal organs and tumors is of great importance in clinical practice. Because manual annotation is time- and labor-intensive, especially in the highly specialized medical domain, partially annotated and unlabeled datasets are far more common in practical applications than fully labeled ones. CNN-based methods have driven much of the progress in medical image segmentation, but previous CNN models were mostly trained on fully labeled datasets, so it is vital to develop methods that learn from partially labeled data. For FLARE23, we design a model combining a lightweight nnU-Net with a target adaptive loss (TAL) to obtain segmentation results efficiently while making full use of the partially labeled dataset. Our method achieved average DSC scores of 86.40% and 19.41% for organs and lesions, respectively, on the validation set; the average running time and area under the GPU memory-time curve are 25.34 s and 23018 MB, respectively.
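The target adaptive loss sketched in the abstract addresses partial annotation by not penalizing predictions for structures the current volume never labeled. As a minimal illustration (not the authors' exact implementation), a marginal-loss-style cross-entropy can fold the predicted probability of unannotated foreground classes into the background class before taking the log-likelihood; the function name and NumPy formulation below are hypothetical:

```python
import numpy as np

def target_adaptive_ce(probs, labels, annotated):
    """Cross-entropy for partially labeled volumes (marginal-loss style).

    probs:     (N, C) softmax probabilities per voxel, class 0 = background.
    labels:    (N,) integer labels; structures missing from the annotation
               appear as background (0).
    annotated: set of foreground class indices labeled in this volume.
    """
    merged = probs.copy()
    # Fold probability mass of unannotated classes into background, so the
    # model is not punished for predicting organs the annotation omits.
    for c in range(1, merged.shape[1]):
        if c not in annotated:
            merged[:, 0] += merged[:, c]
            merged[:, c] = 0.0
    p = merged[np.arange(len(labels)), labels]
    return float(-np.mean(np.log(np.clip(p, 1e-8, None))))
```

In a FLARE23-like setting, `annotated` would be derived from each source dataset's label map; on fully labeled volumes the merge is a no-op and the loss reduces to standard cross-entropy.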


References

  1. Bilic, P., et al.: The liver tumor segmentation benchmark (LiTS). Med. Image Anal. 84, 102680 (2023)


  2. Chen, S., Ma, K., Zheng, Y.: Med3D: transfer learning for 3D medical image analysis. arXiv preprint arXiv:1904.00625 (2019)

  3. Clark, K., et al.: The cancer imaging archive (TCIA): maintaining and operating a public information repository. J. Digit. Imaging 26(6), 1045–1057 (2013)


  4. Fang, X., Yan, P.: Multi-organ segmentation over partially labeled datasets with multi-scale feature abstraction. IEEE Trans. Med. Imaging 39(11), 3619–3629 (2020)


  5. Gatidis, S., et al.: The autoPET challenge: towards fully automated lesion segmentation in oncologic PET/CT imaging. Preprint at Research Square (Nature Portfolio) (2023). https://doi.org/10.21203/rs.3.rs-2572595/v1

  6. Gatidis, S., et al.: A whole-body FDG-PET/CT dataset with manually annotated tumor lesions. Sci. Data 9(1), 601 (2022)


  7. Heller, N., et al.: The state of the art in kidney and kidney tumor segmentation in contrast-enhanced CT imaging: results of the kits19 challenge. Med. Image Anal. 67, 101821 (2021)


  8. Heller, N., et al.: An international challenge to use artificial intelligence to define the state-of-the-art in kidney and kidney tumor segmentation in CT imaging. Proc. Am. Soc. Clin. Oncol. 38(6), 626–626 (2020)


  9. Huang, Z., et al.: Revisiting nnU-Net for iterative pseudo labeling and efficient sliding window inference. In: Ma, J., Wang, B. (eds.) FLARE 2022. LNCS, vol. 13816, pp. 178–189. Springer, Cham (2022). https://doi.org/10.1007/978-3-031-23911-3_16


  10. Isensee, F., Jaeger, P.F., Kohl, S.A., Petersen, J., Maier-Hein, K.H.: nnU-Net: a self-configuring method for deep learning-based biomedical image segmentation. Nat. Methods 18(2), 203–211 (2021)


  11. Liu, H., et al.: COSST: multi-organ segmentation with partially labeled datasets using comprehensive supervisions and self-training. IEEE Trans. Med. Imaging (2024)


  12. Ma, J., He, Y., Li, F., Han, L., You, C., Wang, B.: Segment anything in medical images. Nat. Commun. 15(1), 654 (2024)


  13. Ma, J., et al.: Fast and low-GPU-memory abdomen CT organ segmentation: the FLARE challenge. Med. Image Anal. 82, 102616 (2022)


  14. Ma, J., et al.: Unleashing the strengths of unlabeled data in pan-cancer abdominal organ quantification: the FLARE22 challenge. arXiv preprint arXiv:2308.05862 (2023)

  15. Ma, J., et al.: AbdomenCT-1K: is abdominal organ segmentation a solved problem? IEEE Trans. Pattern Anal. Mach. Intell. 44(10), 6695–6714 (2022)


  16. Pavao, A., et al.: CodaLab competitions: an open source platform to organize scientific challenges. J. Mach. Learn. Res. 24(198), 1–6 (2023)


  17. Shi, G., Xiao, L., Chen, Y., Zhou, S.K.: Marginal loss and exclusion loss for partially supervised multi-organ segmentation. Med. Image Anal. 70, 101979 (2021)


  18. Simpson, A.L., et al.: A large annotated medical image dataset for the development and evaluation of segmentation algorithms. arXiv preprint arXiv:1902.09063 (2019)

  19. Wasserthal, J., et al.: TotalSegmentator: robust segmentation of 104 anatomic structures in CT images. Radiol. Artif. Intell. 5(5), e230024 (2023)


  20. Yushkevich, P.A., Gao, Y., Gerig, G.: ITK-SNAP: an interactive tool for semi-automatic segmentation of multi-modality biomedical images. In: Annual International Conference of the IEEE Engineering in Medicine and Biology Society, pp. 3342–3345 (2016)


  21. Zhang, J., Xie, Y., Xia, Y., Shen, C.: DoDNet: learning to segment multi-organ and tumors from multiple partially labeled datasets. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1195–1204 (2021)



Acknowledgements

This project was funded by the National Natural Science Foundation of China under Grant 82090052. The authors declare that the segmentation method they implemented for participation in the FLARE 2023 challenge used no pre-trained models and no additional datasets beyond those provided by the organizers. The proposed solution is fully automatic, without any manual intervention. We thank all the data owners for making the CT scans publicly available and CodaLab [16] for hosting the challenge platform.

Author information


Corresponding author

Correspondence to Lihua Zhang.



Copyright information

© 2024 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper


Cite this paper

Liu, T., Zhang, X., Han, M., Zhang, L. (2024). A Lightweight nnU-Net Combined with Target Adaptive Loss for Organs and Tumors Segmentation. In: Ma, J., Wang, B. (eds) Fast, Low-resource, and Accurate Organ and Pan-cancer Segmentation in Abdomen CT. FLARE 2023. Lecture Notes in Computer Science, vol 14544. Springer, Cham. https://doi.org/10.1007/978-3-031-58776-4_14


  • DOI: https://doi.org/10.1007/978-3-031-58776-4_14

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-58775-7

  • Online ISBN: 978-3-031-58776-4

  • eBook Packages: Computer Science (R0)
