Abstract
Positron emission tomography (PET) is an important medical imaging technique, especially for brain and cancer diagnosis. Modern PET scanners are usually combined with computed tomography (CT), where the CT image is used for anatomical localization, PET attenuation correction, and radiotherapy treatment planning. Considering the radiation dose introduced by CT scanning, as well as the increasing spatial resolution of PET images, there is growing demand to synthesize the CT image from the PET image (without scanning CT) to reduce the risk of radiation exposure. However, most existing works perform learning-based image synthesis to construct the cross-modality mapping only in the image domain, without considering the projection domain, leading to potential physical inconsistency. To address this problem, we propose a novel PET-CT synthesis framework that exploits dual-domain information (i.e., the image domain and the projection domain). Specifically, we design both an image-domain network and a projection-domain network to jointly learn the high-dimensional mapping from PET to CT. The two domains are connected by a forward projection (FP) and a filtered back projection (FBP). To further help the PET-to-CT synthesis task, we also design a secondary CT-to-PET synthesis task with the same network structure, and combine the two tasks into a bidirectional mapping framework with several closed cycles. More importantly, these cycles serve as cycle-consistent losses that further help network training for better synthesis performance. Extensive validation on clinical PET-CT data demonstrates that the proposed PET-CT synthesis framework outperforms state-of-the-art (SOTA) medical image synthesis methods with significant improvements.
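The two core ideas of the abstract, a forward projection linking the image and projection domains, and cycle-consistent bidirectional mapping, can be illustrated with a toy numpy sketch. This is not the authors' implementation: `forward_project` is a simplified parallel-beam projector restricted to multiples of 90 degrees, and the affine intensity maps standing in for the learned PET-to-CT and CT-to-PET networks are purely hypothetical placeholders chosen so that both consistencies hold exactly.

```python
import numpy as np

N = 8
rng = np.random.default_rng(0)
pet = rng.random((N, N))  # toy "PET image"

def forward_project(img, angles=(0, 90, 180, 270)):
    # Toy parallel-beam forward projection: line integrals (column sums)
    # of the image rotated to each view angle. Multiples of 90 degrees
    # keep this numpy-only sketch exact; a real scanner geometry needs a
    # full Radon transform (e.g., skimage.transform.radon).
    return np.stack([np.rot90(img, k=a // 90).sum(axis=0) for a in angles])

# Hypothetical image-domain "networks": an invertible affine intensity map
# standing in for the learned PET-to-CT mapping and its CT-to-PET inverse.
pet_to_ct_img = lambda x: 2.0 * x + 1.0
ct_to_pet_img = lambda y: 0.5 * (y - 1.0)

# Matching projection-domain "network": the FP is linear and every ray in
# this toy geometry crosses N pixels, so FP(2x + 1) = 2 * FP(x) + N.
pet_to_ct_proj = lambda s: 2.0 * s + N

# Image-domain cycle: PET -> CT -> PET should reproduce the input.
cycle_err = np.abs(ct_to_pet_img(pet_to_ct_img(pet)) - pet).max()

# Dual-domain consistency: projecting the synthesized CT must agree with
# translating the PET sinogram directly in the projection domain.
dual_err = np.abs(forward_project(pet_to_ct_img(pet))
                  - pet_to_ct_proj(forward_project(pet))).max()
```

With these placeholder maps, both `cycle_err` and `dual_err` are zero up to floating-point rounding; in the paper's framework, the analogous residuals become the cycle-consistent and dual-domain losses minimized during training.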
Acknowledgements
This work was supported in part by the National Natural Science Foundation of China (grant number 62131015), the Science and Technology Commission of Shanghai Municipality (STCSM) (grant number 21010502600), and the Key R&D Program of Guangdong Province, China (grant number 2021B0101420006).
Copyright information
© 2022 The Author(s), under exclusive license to Springer Nature Switzerland AG
Cite this paper
Zhang, J., Cui, Z., Jiang, C., Zhang, J., Gao, F., Shen, D. (2022). Mapping in Cycles: Dual-Domain PET-CT Synthesis Framework with Cycle-Consistent Constraints. In: Wang, L., Dou, Q., Fletcher, P.T., Speidel, S., Li, S. (eds) Medical Image Computing and Computer Assisted Intervention – MICCAI 2022. MICCAI 2022. Lecture Notes in Computer Science, vol 13436. Springer, Cham. https://doi.org/10.1007/978-3-031-16446-0_72
Publisher Name: Springer, Cham
Print ISBN: 978-3-031-16445-3
Online ISBN: 978-3-031-16446-0
eBook Packages: Computer Science, Computer Science (R0)