Real-time detection of construction and demolition waste impurities using the improved YOLO-V7 network

  • ORIGINAL ARTICLE
  • Published in: Journal of Material Cycles and Waste Management

Abstract

Construction and demolition waste accounts for a considerable share of a city's total waste stream. The most common way to recycle it is to process it into recycled aggregate. During this process, impurities that remain after wind selection, water flotation, and similar separation steps must be screened out manually, which increases production costs and degrades both the quality of the recycled aggregate and the utilization rate of construction and demolition waste. This study proposes an automated method for detecting impurities in construction and demolition waste using an improved object detection network. By improving the feature fusion layer, the convolutional block, and the loss function of the YOLOv7 object detection network, the recognition accuracy, recall rate, and mean average precision are greatly improved, while the number of parameters is further reduced. The improved YOLOv7 network can therefore effectively identify the various impurities in demolition waste, providing technical support for robots that automatically detect and screen construction and demolition waste impurities, improving the efficiency with which enterprises process such waste, and indirectly alleviating the environmental problems and resource waste it causes.
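The abstract evaluates the improved network by precision, recall, and mean average precision. As a minimal illustration of how these detection metrics are typically computed (not the authors' code), the sketch below greedily matches predicted bounding boxes to ground-truth boxes by intersection-over-union (IoU): a prediction counts as a true positive if it overlaps an unmatched ground-truth box at or above an IoU threshold, commonly 0.5.

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def precision_recall(preds, gts, iou_thr=0.5):
    """Greedy one-to-one matching of predictions to ground truth.

    A prediction is a true positive if it overlaps a not-yet-matched
    ground-truth box with IoU >= iou_thr; leftover predictions are
    false positives and unmatched ground truths are false negatives.
    """
    matched = set()
    tp = 0
    for p in preds:
        best, best_iou = None, iou_thr
        for i, g in enumerate(gts):
            if i in matched:
                continue
            ov = iou(p, g)
            if ov >= best_iou:
                best, best_iou = i, ov
        if best is not None:
            matched.add(best)
            tp += 1
    precision = tp / len(preds) if preds else 0.0
    recall = tp / len(gts) if gts else 0.0
    return precision, recall
```

Mean average precision extends this idea by sweeping the detector's confidence threshold, computing precision at each recall level per class, and averaging the resulting area under the precision-recall curve over all classes.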


[Figures 1–13 omitted from this preview.]

Data availability

The data that support the findings of this study are available from the corresponding author upon reasonable request.


Acknowledgements

The authors would like to thank Jiangsu Xincheng Yonglian Environmental Protection Technology Co., Ltd. for providing the data collection and experimental site for this study.

Funding

This work was supported by the Qing Lan Project of the Higher Education Institutions of Jiangsu Province and the 2022 Jiangsu Science and Technology Plan Special Fund (International Science and Technology Cooperation) (BZ2022029).

Author information

Corresponding author

Correspondence to Junji Chen.

Ethics declarations

Conflicts of interest

The authors declare that they have no conflicts of interest to report regarding the present study.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.

About this article

Cite this article

Fang, H., Chen, J., Wang, M. et al. Real-time detection of construction and demolition waste impurities using the improved YOLO-V7 network. J Mater Cycles Waste Manag 26, 2200–2213 (2024). https://doi.org/10.1007/s10163-024-01960-4
