Vision-Based Landing Site Detection for Unmanned Aerial Vehicle: A Review

  • Conference paper
Advances in Intelligent Automation and Soft Computing (IASC 2021)

Part of the book series: Lecture Notes on Data Engineering and Communications Technologies (LNDECT, volume 80)


Abstract

Autonomous landing for Unmanned Aerial Vehicles (UAVs) is a well-studied problem, as most flight accidents occur during this stage. This survey provides an extensive overview to guide the development of vision-based autonomous landing site detection. Depending on whether an auxiliary marker is deployed, detection tactics can be categorized as marker-aided or markerless. Marker-aided tactics usually employ an elaborate, distinctive geometric design to achieve robust detection efficiently, while markerless tactics can be further decomposed into two main steps: pattern recognition and flat site detection. Moreover, since computational optics and deep learning are now showing extraordinary promise on mobile platforms, including UAVs, we elaborate on monocular depth estimation under different supervision schemes for flatness analysis in markerless tactics. We hope this comprehensive overview will be helpful in analyzing and improving vision-based autonomous landing techniques for UAVs.
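To make the markerless pipeline concrete, the flat site detection step described above can be sketched as a planarity test on a depth-map patch. The snippet below is a minimal illustration, not a method from the surveyed papers: it assumes a dense depth patch (e.g., produced by a monocular depth estimator) and scores flatness as the RMS residual of a least-squares plane fit; the function names and the threshold are hypothetical choices for illustration.

```python
import numpy as np

def flatness_score(depth_patch: np.ndarray) -> float:
    """Fit a plane z = a*x + b*y + c to a depth patch by least squares
    and return the RMS residual in depth units (lower = flatter)."""
    h, w = depth_patch.shape
    ys, xs = np.mgrid[0:h, 0:w]
    # Design matrix with columns [x, y, 1] for each pixel
    A = np.column_stack([xs.ravel(), ys.ravel(), np.ones(h * w)])
    z = depth_patch.ravel()
    coeffs, *_ = np.linalg.lstsq(A, z, rcond=None)
    residuals = z - A @ coeffs
    return float(np.sqrt(np.mean(residuals ** 2)))

def is_candidate_site(depth_patch: np.ndarray, rms_threshold: float = 0.05) -> bool:
    # A patch is a landing candidate when its deviation from the
    # best-fit plane stays below the threshold (threshold is illustrative)
    return flatness_score(depth_patch) < rms_threshold
```

A sloped but smooth surface (a ramp) fits its plane almost exactly and scores near zero, whereas rough terrain produces large residuals and is rejected; in practice such a test would be swept over candidate windows of the estimated depth map.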



Acknowledgments

The authors thank the anonymous reviewers for their help. This work was supported in part by a grant from the National Natural Science Foundation of China (61673039).

Author information


Corresponding author

Correspondence to Jialing Zou.


Copyright information

© 2022 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper


Cite this paper

Wang, R., Zou, J. (2022). Vision-Based Landing Site Detection for Unmanned Aerial Vehicle: A Review. In: Li, X. (eds) Advances in Intelligent Automation and Soft Computing. IASC 2021. Lecture Notes on Data Engineering and Communications Technologies, vol 80. Springer, Cham. https://doi.org/10.1007/978-3-030-81007-8_108
