MaCo: efficient unsupervised low-light image enhancement via illumination-based magnitude control

  • ORIGINAL ARTICLE
  • Published in: The Visual Computer

Abstract

This paper presents a novel low-light image enhancement (LLIE) method based on magnitude control (MaCo). Our method establishes a relationship between the low-light image and its illumination using pixel intensity and image brightness. Exploiting this relationship, MaCo enhances pixels with varying magnitudes, achieving pixel-wise LLIE and yielding high-quality enhanced images without local overexposure. We also introduce a set of carefully formulated unsupervised loss functions that enable training with only low-light images. Concretely, our method first trains a lightweight deep network, the Low-res Coefficient Estimation Network (LCE-Net), to estimate a feature map in low-resolution space. The High-res Illumination Estimation Module (HIE Module) then performs bilateral-grid-based upsampling on this map to obtain the high-resolution illumination, which is finally used to light up the low-light image and produce the enhanced result. MaCo is efficient and consumes few resources, since most computations take place at low resolution and the network is lightweight. Current LLIE datasets are mostly synthesized or captured with altered camera settings, and therefore often fail to accurately represent real-world conditions. To address this problem, we create a new dataset, IOLD, containing 572 images captured under real low- and normal-light conditions. We also discuss the potential advantage of MaCo for face detection in the dark. Extensive qualitative and quantitative experiments demonstrate that our method performs favorably against state-of-the-art methods in terms of both effectiveness and efficiency. The IOLD dataset will be made publicly available at https://drive.google.com/drive/folders/1VAkuj9gheZ4aPEhBZ5MszHF6zpbrE_9_?usp=sharing.
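The pipeline described above can be summarized in a short sketch. The following minimal PyTorch example is written from the abstract alone: the module and function names (LowResCoeffNet, slice_bilateral_grid, enhance), the layer sizes, the grid dimensions, and the Retinex-style division used to light up the image are illustrative assumptions, not the authors' LCE-Net/HIE Module implementation. It only shows the overall flow: estimate coefficients at low resolution, upsample them through a bilateral grid guided by the full-resolution image, and apply the resulting illumination map.

# Minimal sketch of a MaCo-style pipeline (assumptions noted above).
import torch
import torch.nn as nn
import torch.nn.functional as F


class LowResCoeffNet(nn.Module):
    """Lightweight CNN that predicts a bilateral grid of illumination
    coefficients from a downsampled copy of the low-light image
    (stand-in for LCE-Net; layer sizes are illustrative)."""

    def __init__(self, grid_depth=8, grid_size=8):
        super().__init__()
        self.grid_size = grid_size
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, grid_depth, 3, padding=1),
        )

    def forward(self, low_res):
        coeffs = self.features(low_res)                         # (N, D, h, w)
        coeffs = F.adaptive_avg_pool2d(coeffs, self.grid_size)  # (N, D, G, G)
        return coeffs.unsqueeze(1)                              # (N, 1, D, G, G)


def slice_bilateral_grid(grid, guidance):
    """Slice the low-res grid with a full-res guidance map (HDRNet-style
    stand-in for the HIE Module), giving one coefficient per pixel."""
    n, _, h, w = guidance.shape
    ys, xs = torch.meshgrid(
        torch.linspace(-1.0, 1.0, h, device=guidance.device),
        torch.linspace(-1.0, 1.0, w, device=guidance.device),
        indexing="ij",
    )
    xy = torch.stack([xs, ys], dim=-1).expand(n, h, w, 2)       # spatial coords
    z = guidance.squeeze(1).unsqueeze(-1) * 2.0 - 1.0           # intensity coord
    sample = torch.cat([xy, z], dim=-1).unsqueeze(1)            # (N, 1, H, W, 3)
    sliced = F.grid_sample(grid, sample, align_corners=True)    # (N, 1, 1, H, W)
    return sliced.squeeze(2)                                    # (N, 1, H, W)


def enhance(low, coeff_net, down=8):
    """Enhance a low-light image in [0, 1]: low-res coefficient estimation,
    bilateral-grid upsampling of the illumination, then lighting-up."""
    low_res = F.interpolate(low, scale_factor=1.0 / down,
                            mode="bilinear", align_corners=False)
    grid = coeff_net(low_res)                    # coefficients at low resolution
    guidance = low.mean(dim=1, keepdim=True)     # luminance as guidance map
    illumination = torch.sigmoid(slice_bilateral_grid(grid, guidance))
    # Retinex-style division is one common way to "light up" the image.
    return torch.clamp(low / illumination.clamp(min=1e-3), 0.0, 1.0)


if __name__ == "__main__":
    net = LowResCoeffNet()
    image = torch.rand(1, 3, 400, 600)           # dummy low-light image
    print(enhance(image, net).shape)             # torch.Size([1, 3, 400, 600])

Running the heavy estimation on the downsampled copy and only the grid slicing at full resolution is what keeps the computational cost low; the unsupervised loss functions that make such a model trainable from low-light images alone are not reproduced in this sketch.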

Funding

This work was supported by the National Key Research and Development Program of China under Grant No. 2021YFC3320302 and by the Fundamental Research Funds for the Central Universities under Grant No. 3072022TS0604.

Author information

Corresponding author

Correspondence to Jianguo Sun.

Ethics declarations

Conflict of interest

The authors declare that they have no conflict of interest.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.

About this article

Cite this article

Shi, Y., Liu, D., Zhang, L. et al. MaCo: efficient unsupervised low-light image enhancement via illumination-based magnitude control. Vis Comput (2024). https://doi.org/10.1007/s00371-023-03249-3
