
MFDNet: Multi-Frequency Deflare Network for efficient nighttime flare removal

Published in: The Visual Computer

Abstract

When light is accidentally scattered or reflected inside the lens, flare artifacts may appear in the captured photographs, degrading their visual quality. The main challenge in flare removal is to eliminate the various flare artifacts while preserving the original content of the image. To address this challenge, we propose a lightweight Multi-Frequency Deflare Network (MFDNet) based on the Laplacian pyramid. Our network decomposes the flare-corrupted image into low- and high-frequency bands, effectively separating the illumination information from the content information: the low-frequency part typically carries illumination, while the high-frequency part carries detailed content. Accordingly, MFDNet consists of two main modules: a Low-Frequency Flare Perception Module (LFFPM) that removes flare in the low-frequency band, and a Hierarchical Fusion Reconstruction Module (HFRM) that reconstructs the flare-free image. Specifically, to perceive flare from a global perspective while retaining the detail needed for image restoration, LFFPM uses a Transformer to extract global information and a convolutional neural network to capture local features. HFRM then progressively fuses the outputs of LFFPM with the high-frequency components of the image through feature aggregation. Moreover, MFDNet reduces computational cost by processing multiple frequency bands instead of removing flare directly from the full-resolution input image. Experimental results demonstrate that our approach outperforms state-of-the-art methods at removing nighttime flare from real-world and synthetic images in the Flare7K dataset, while keeping computational complexity remarkably low.
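The frequency decomposition the abstract describes can be illustrated with a minimal Laplacian-pyramid sketch. This is not the authors' implementation: the function names are illustrative, and a simple box blur stands in for the Gaussian filtering a real pipeline would use (e.g. `cv2.pyrDown`/`cv2.pyrUp`). It shows how an image splits into a small low-frequency base (illumination, where flare is processed) plus per-scale high-frequency detail bands, and how the split inverts exactly.

```python
import numpy as np

def blur(img):
    # 3x3 box blur with edge padding, a crude stand-in for a Gaussian kernel
    p = np.pad(img, 1, mode="edge")
    out = np.zeros_like(img, dtype=float)
    for dy in range(3):
        for dx in range(3):
            out += p[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / 9.0

def laplacian_pyramid(img, levels=3):
    """Split `img` into high-frequency bands plus a low-frequency base."""
    img = img.astype(float)
    bands = []
    for _ in range(levels):
        low = blur(img)[::2, ::2]  # downsample: coarse illumination layer
        up = np.repeat(np.repeat(low, 2, 0), 2, 1)[:img.shape[0], :img.shape[1]]
        bands.append(img - up)     # high-frequency detail at this scale
        img = low
    return bands, img              # `img` is now the low-frequency base

def reconstruct(bands, low):
    """Invert the decomposition: upsample the base, add back each band."""
    img = low
    for band in reversed(bands):
        up = np.repeat(np.repeat(img, 2, 0), 2, 1)[:band.shape[0], :band.shape[1]]
        img = up + band
    return img
```

Because each band stores exactly the residual between a level and its upsampled coarse version, the reconstruction is lossless; a deflaring network can therefore edit the low-frequency base cheaply at reduced resolution and fuse the untouched detail bands back in.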


[Figures 1–11 are available in the full article.]

Data availability statement

No datasets were generated or analyzed during the current study.


Author information


Contributions

Y.J. and X.C. wrote the first draft of the manuscript, which was revised by the other authors. All authors contributed to and reviewed the manuscript.

Corresponding author

Correspondence to Chi-Man Pun.

Ethics declarations

Conflict of interest

The authors declare no conflict of interest.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

This work was supported in part by the Science and Technology Development Fund, Macau SAR, under Grants 0141/2023/RIA2 and 0193/2023/RIA3.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.


About this article


Cite this article

Jiang, Y., Chen, X., Pun, CM. et al. MFDNet: Multi-Frequency Deflare Network for efficient nighttime flare removal. Vis Comput (2024). https://doi.org/10.1007/s00371-024-03540-x
