Adaptive enhancement of spatial information in adverse weather

Abstract

In the context of spatial information, particularly in video surveillance and intelligent transportation systems, the visibility of video images is severely degraded by adverse weather such as rain, snow, and fog. Accurate and rapid recognition of the current weather conditions, followed by adaptive clarification of surveillance video, is therefore crucial to maintaining the integrity of spatial information. To address the limitations of traditional weather recognition methods and the scarcity of weather image datasets, a multicategory weather image block dataset was constructed. This research introduces a weather recognition algorithm that integrates image block processing with feature fusion. The algorithm uses traditional methods to extract shallow spatial features, such as average gradient, contrast, saturation, and dark channel, from weather images, and employs transfer learning to fine-tune a pretrained VGG16 model, extracting deep spatial features from its fully connected layers. By fusing the shallow and deep spatial features, the approach improves the softmax classifier's recognition of fog, rain, snow, and clear-weather images, which is essential for the quality and reliability of spatial data in adverse weather. The algorithm achieves 99.26% weather recognition accuracy, compared with 97.14% for the best state-of-the-art method, confirming its suitability as a module for adaptive video image clarification in spatially informed systems.
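To make the described pipeline concrete, the sketch below shows one plausible implementation of the shallow/deep feature fusion, assuming PyTorch, torchvision, and OpenCV. The shallow-feature formulas, the 4096-dimensional fc7 features, the 15×15 dark-channel window, and the four-class head are illustrative assumptions, not the paper's exact settings.

```python
# Minimal sketch (assumed libraries: OpenCV, NumPy, PyTorch/torchvision).
import cv2
import numpy as np
import torch
import torch.nn as nn
from torchvision import models

def shallow_features(bgr_img):
    """Hand-crafted cues named in the abstract: average gradient,
    contrast, saturation, and dark channel (illustrative definitions)."""
    gray = cv2.cvtColor(bgr_img, cv2.COLOR_BGR2GRAY).astype(np.float32)
    gx = cv2.Sobel(gray, cv2.CV_32F, 1, 0)
    gy = cv2.Sobel(gray, cv2.CV_32F, 0, 1)
    avg_gradient = float(np.mean(np.sqrt(gx ** 2 + gy ** 2)))
    contrast = float(gray.std())
    hsv = cv2.cvtColor(bgr_img, cv2.COLOR_BGR2HSV)
    saturation = float(hsv[..., 1].mean())
    # Dark channel: per-pixel minimum over B, G, R, then a local minimum filter.
    dark = cv2.erode(bgr_img.min(axis=2), np.ones((15, 15), np.uint8))
    dark_channel = float(dark.mean())
    return np.array([avg_gradient, contrast, saturation, dark_channel],
                    dtype=np.float32)

class FusionClassifier(nn.Module):
    """Concatenates VGG16 fully connected (fc7) features with the
    four shallow cues before a final classification layer."""
    def __init__(self, n_classes=4):
        super().__init__()
        vgg = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1)
        self.backbone = vgg.features          # convolutional feature extractor
        self.avgpool = vgg.avgpool
        self.fc = nn.Sequential(*list(vgg.classifier.children())[:-1])  # up to fc7 (4096-D)
        self.head = nn.Linear(4096 + 4, n_classes)  # fog / rain / snow / clear

    def forward(self, image_batch, shallow_batch):
        x = self.avgpool(self.backbone(image_batch)).flatten(1)
        deep = self.fc(x)                      # deep spatial features
        fused = torch.cat([deep, shallow_batch], dim=1)
        return self.head(fused)                # logits; softmax via the loss
```

In this sketch the softmax stage is folded into `nn.CrossEntropyLoss` during training, which applies log-softmax to the fused logits; during fine-tuning one would typically freeze the early convolutional blocks and train only the later layers and the fusion head.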

Data availability

Data will be made available on request.

Acknowledgements

Not applicable.

Funding

This work did not receive any funding.

Author information

Corresponding author

Correspondence to Mohammad Shabaz.

Ethics declarations

The authors declare no conflicts of interest. No human or animal participants were involved in this research.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.

About this article

Cite this article

Shabaz, M., Soni, M. Adaptive enhancement of spatial information in adverse weather. Spat. Inf. Res. (2024). https://doi.org/10.1007/s41324-024-00577-x
