
RAD-UNet: a Residual, Attention-Based, Dense UNet for CT Sparse Reconstruction

Original Paper · Journal of Digital Imaging

Abstract

To suppress the streak artifacts that appear in computed tomography (CT) images reconstructed from sparse-view projections, a residual, attention-based, dense UNet (RAD-UNet) is proposed to achieve accurate sparse reconstruction. First, the filtered back projection (FBP) algorithm reconstructs a CT image containing streak artifacts from the sparse-view projections. The RAD-UNet then processes this image to suppress the artifacts and produce a high-quality CT image. During training, artifact-laden images serve as the network inputs and the corresponding high-quality images serve as labels; trained on large-scale paired data, the RAD-UNet learns to suppress streak artifacts. The network combines residual connections, an attention mechanism, dense connections, and a perceptual loss, which together improve its nonlinear fitting capability and artifact-suppression performance. Experimental results show that the RAD-UNet improves reconstruction accuracy compared with three representative deep networks: it not only suppresses streak artifacts but also better preserves image details. The proposed network may be readily applied to other image processing tasks, including image denoising, image deblurring, and image super-resolution.
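The abstract describes a two-stage pipeline (FBP reconstruction followed by a CNN that removes streak artifacts) built from four ingredients: residual connections, an attention mechanism, dense connections, and a perceptual loss. The PyTorch sketch below is not the authors' implementation; it is a minimal illustration under assumed details. `TinyRADUNet`, `DenseBlock`, `ChannelAttention`, and `perceptual_loss` are hypothetical names; the single-scale body stands in for the full UNet encoder-decoder; the layer widths, the VGG16 relu3_3 feature tap, and the 0.1 loss weight are illustrative choices, not values from the paper.

```python
import torch
import torch.nn as nn
import torchvision.models as models


class DenseBlock(nn.Module):
    """Dense connections: each conv sees the concatenation of all earlier features."""
    def __init__(self, ch, growth=32):
        super().__init__()
        self.conv1 = nn.Conv2d(ch, growth, 3, padding=1)
        self.conv2 = nn.Conv2d(ch + growth, growth, 3, padding=1)
        self.fuse = nn.Conv2d(ch + 2 * growth, ch, 1)  # 1x1 conv back to ch channels
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        f1 = self.act(self.conv1(x))
        f2 = self.act(self.conv2(torch.cat([x, f1], 1)))
        return self.fuse(torch.cat([x, f1, f2], 1))


class ChannelAttention(nn.Module):
    """Squeeze-and-excitation-style gate: global pool -> bottleneck MLP -> channel weights."""
    def __init__(self, ch, reduction=8):
        super().__init__()
        self.gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(ch, ch // reduction, 1), nn.ReLU(inplace=True),
            nn.Conv2d(ch // reduction, ch, 1), nn.Sigmoid(),
        )

    def forward(self, x):
        return x * self.gate(x)  # reweight channels by learned importance


class TinyRADUNet(nn.Module):
    """Single-scale stand-in for the full encoder-decoder. The residual connection
    is realized (by assumption) as artifact prediction: output = input - artifacts."""
    def __init__(self, ch=64):
        super().__init__()
        self.head = nn.Conv2d(1, ch, 3, padding=1)
        self.body = nn.Sequential(DenseBlock(ch), ChannelAttention(ch), DenseBlock(ch))
        self.tail = nn.Conv2d(ch, 1, 3, padding=1)

    def forward(self, fbp_image):
        artifacts = self.tail(self.body(self.head(fbp_image)))
        return fbp_image - artifacts


def perceptual_loss(feat, pred, label):
    """MSE between feature maps of a frozen network; 1-channel CT is tiled to 3."""
    to3 = lambda t: t.repeat(1, 3, 1, 1)
    return nn.functional.mse_loss(feat(to3(pred)), feat(to3(label)))


if __name__ == "__main__":
    net = TinyRADUNet()
    # weights=None keeps this demo offline; a real perceptual loss would use
    # pretrained VGG weights. features[:16] ends at relu3_3 (an assumed tap).
    feat = models.vgg16(weights=None).features[:16].eval()
    for p in feat.parameters():
        p.requires_grad_(False)

    fbp = torch.rand(2, 1, 64, 64)    # stands in for an FBP image with streak artifacts
    label = torch.rand(2, 1, 64, 64)  # stands in for the full-view reference image
    pred = net(fbp)
    loss = nn.functional.mse_loss(pred, label) + 0.1 * perceptual_loss(feat, pred, label)
    loss.backward()
    print(f"combined loss: {loss.item():.4f}")
```

In the actual method, the random `fbp` tensor would be replaced by an FBP reconstruction of the sparse-view sinogram, the label by the corresponding full-view reconstruction, and the single-scale body by the multi-scale UNet with skip connections.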





Funding

This work was supported in part by the Natural Science Foundation of China under grant 62071281, by the Central Guidance on Local Science and Technology Development Fund Project under grant YDZJSX2021A003, and by the Research Project Supported by Shanxi Scholarship Council of China under grant 2020-008.

Author information

Authors and Affiliations

Authors

Contributions

All authors contributed to the study conception and design. Material preparation, data collection and analysis were performed by Congcong Du and Zhiwei Qiao. The first draft of the manuscript was written by Congcong Du and all authors commented on previous versions of the manuscript. All authors read and approved the final manuscript.

Corresponding author

Correspondence to Zhiwei Qiao.

Ethics declarations

Competing Interests

The authors declare no competing interests.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.


About this article


Cite this article

Qiao, Z., Du, C. RAD-UNet: a Residual, Attention-Based, Dense UNet for CT Sparse Reconstruction. J Digit Imaging 35, 1748–1758 (2022). https://doi.org/10.1007/s10278-022-00685-w

