Multi-organ Segmentation in CT from Partially Annotated Datasets using Disentangled Learning

  • Conference paper
Bildverarbeitung für die Medizin 2024 (BVM 2024)

Part of the book series: Informatik aktuell (INFORMAT)


Abstract

While deep learning models are known to solve the task of multi-organ segmentation, the scarcity of fully annotated multi-organ datasets poses a significant obstacle during training. Annotating such 3D volumes is expensive and time-consuming, and the set of labeled structures varies greatly between datasets. To this end, we propose a solution that leverages multiple partially annotated datasets using disentangled learning for a single segmentation model. Dataset-specific encoder and decoder networks are trained, while a joint decoder network gathers the encoders' features to generate a complete segmentation mask. We evaluated our method using two simulated partially annotated datasets: one including the liver, lungs and kidneys, the other bones and bladder. Our method is trained to segment all five organs, achieving a Dice score of 0.78 and an IoU of 0.67. Notably, this performance is close to that of a model trained on the fully annotated dataset, which scores 0.80 in Dice and 0.70 in IoU.
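The Dice score and IoU reported above follow the standard overlap definitions for binary segmentation masks: Dice = 2|A∩B| / (|A|+|B|) and IoU = |A∩B| / |A∪B|. A minimal sketch on toy masks (function names are illustrative, not from the paper):

```python
import numpy as np

def dice_score(pred: np.ndarray, target: np.ndarray) -> float:
    """Dice coefficient: 2*|A∩B| / (|A| + |B|)."""
    inter = np.logical_and(pred, target).sum()
    total = pred.sum() + target.sum()
    return 2.0 * inter / total if total else 1.0

def iou_score(pred: np.ndarray, target: np.ndarray) -> float:
    """Intersection over Union: |A∩B| / |A∪B|."""
    inter = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    return inter / union if union else 1.0

# Toy binary masks (True = organ voxel); in practice these would be
# 3D CT label volumes, one per organ class.
pred   = np.array([[1, 1, 0], [0, 1, 0]], dtype=bool)
target = np.array([[1, 0, 0], [0, 1, 1]], dtype=bool)
print(dice_score(pred, target))  # 2*2 / (3+3) ≈ 0.667
print(iou_score(pred, target))   # 2 / 4 = 0.5
```

For multi-organ evaluation, these per-class scores are typically averaged over the organ classes, which is consistent with the paired Dice/IoU figures quoted in the abstract.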




Author information

Correspondence to Tianyi Wang.


Copyright information

© 2024 The Author(s), under exclusive license to Springer Fachmedien Wiesbaden GmbH, part of Springer Nature

About this paper


Cite this paper

Wang, T., Liu, C., Rist, L., Maier, A. (2024). Multi-organ Segmentation in CT from Partially Annotated Datasets using Disentangled Learning. In: Maier, A., Deserno, T.M., Handels, H., Maier-Hein, K., Palm, C., Tolxdorff, T. (eds) Bildverarbeitung für die Medizin 2024. BVM 2024. Informatik aktuell. Springer Vieweg, Wiesbaden. https://doi.org/10.1007/978-3-658-44037-4_76

