Cross-dimensional knowledge-guided synthesizer trained with unpaired multimodality MRIs

  • Application of soft computing
  • Published in: Soft Computing

Abstract

Magnetic resonance images (MRIs) of different modalities have different reference value for pathological diagnosis. Because multimodal MRIs are difficult to obtain, synthesizing medical images of missing modalities from an existing one is an effective alternative. To train a one-for-all synthesizer with a limited number of unpaired MRIs, a novel cross-dimensional knowledge-guided generative adversarial network (CKG-GAN) is proposed. In CKG-GAN, a cross-dimensional knowledge transfer network measures the perceptual similarity between 2D images (slices of MRIs) of the source and synthesized modalities; its knowledge is transferred from a pre-trained 3D network without accessing that network's private training data set. We evaluate the proposed model on three tasks using the BraTS2018 and BraTS2021 data sets, synthesizing the other two modalities from a single modality (T1, T2, or FLAIR) without changing the content. The results show that, compared with current state-of-the-art methods, our method improves performance by 2–7%.
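
As a rough illustration of the content-preservation idea described above, the following PyTorch sketch computes a perceptual-similarity loss between a source-modality slice and the corresponding synthesized slice using a frozen 2D feature extractor. This is not the authors' implementation: `FeatureNet2D` and `perceptual_loss` are hypothetical names, and the small CNN merely stands in for the pre-trained, knowledge-transferred network the abstract describes.

```python
# Minimal sketch (assumed, not the authors' code): a perceptual-similarity loss
# computed by a frozen 2D feature network, one way to guide an unpaired
# image-to-image GAN so that synthesized slices keep the source content.
import torch
import torch.nn as nn
import torch.nn.functional as F


class FeatureNet2D(nn.Module):
    """Hypothetical stand-in for the pre-trained 2D feature extractor that,
    in the paper, receives its knowledge from a 3D teacher network."""

    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(inplace=True),
        )

    def forward(self, x):
        return self.features(x)


def perceptual_loss(feat_net, source_slice, synthesized_slice):
    """L1 distance between frozen features of the source-modality slice and
    the synthesized-modality slice; the feature network is never updated."""
    with torch.no_grad():
        f_src = feat_net(source_slice)
    f_syn = feat_net(synthesized_slice)  # gradients flow back to the generator only
    return F.l1_loss(f_syn, f_src)


if __name__ == "__main__":
    # Dummy tensors standing in for 2D MRI slices of shape (B, 1, H, W).
    feat_net = FeatureNet2D().eval()
    for p in feat_net.parameters():
        p.requires_grad_(False)

    source = torch.randn(4, 1, 128, 128)                            # e.g. T1 slices
    synthesized = torch.randn(4, 1, 128, 128, requires_grad=True)   # e.g. generated T2 slices

    loss = perceptual_loss(feat_net, source, synthesized)
    loss.backward()  # in a real training loop this would update the generator
    print(f"perceptual loss: {loss.item():.4f}")
```

In a full GAN this term would be added to the usual adversarial loss; the key design point suggested by the abstract is that the feature extractor is kept frozen, so it acts purely as a guide rather than being trained alongside the generator.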

Data availability

Data is available at https://github.com/QianWeiZhou/CKG-GAN.

Funding

This research was supported by the National Natural Science Foundation of China (62271448).

Author information

Corresponding author

Correspondence to Qianwei Zhou.

Ethics declarations

Conflict of interest

The authors declare that they have no conflict of interest.

Ethical approval

This article does not contain any studies with human participants or animals performed by any of the authors.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.

About this article

Cite this article

Zhou, B., Zhou, Q., Miao, C. et al. Cross-dimensional knowledge-guided synthesizer trained with unpaired multimodality MRIs. Soft Comput (2024). https://doi.org/10.1007/s00500-024-09700-4
