Interactive Control Between Human and Omnidirectional Mobile Robot: A Vision-Based Deep Learning Approach

  • Conference paper
Intelligent Systems and Networks (ICISN 2023)

Abstract

Mobile robots have become popular not only in industrial applications such as material transportation but also in non-industrial applications, e.g., human assistance. Among the developed configurations, omnidirectional mobile robots have attracted great attention in recent years owing to their superior maneuverability over conventional counterparts. In this research, an application of a four-mecanum-wheeled omnidirectional mobile robot (4-MWMR) to human assistance is developed. Using image processing, the 4-MWMR is capable of following an authorized person, thereby assisting users in transporting large or heavy materials. Experimental results demonstrate that the developed system is suitable for practical use.
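
For concreteness, the sketch below shows one way such a vision-based following pipeline could be wired together; it is illustrative only, not the authors' implementation. The person detector (a pretrained MobileNet-SSD loaded through OpenCV's DNN module), the model file names, the proportional control gains, and the robot geometry (half wheelbase lx, half track ly, wheel radius r) are all assumptions. The last function is the standard inverse kinematics of a four-mecanum-wheel platform in the usual X layout, which converts commanded body-frame velocities into individual wheel speeds.

import cv2
import numpy as np

# Pretrained MobileNet-SSD detector (file names are assumptions).
net = cv2.dnn.readNetFromCaffe("MobileNetSSD_deploy.prototxt",
                               "MobileNetSSD_deploy.caffemodel")
PERSON_CLASS_ID = 15  # index of 'person' in the PASCAL VOC class list

def detect_person(frame, conf_threshold=0.5):
    """Return the highest-confidence person box (x1, y1, x2, y2), or None."""
    h, w = frame.shape[:2]
    blob = cv2.dnn.blobFromImage(cv2.resize(frame, (300, 300)),
                                 0.007843, (300, 300), 127.5)
    net.setInput(blob)
    detections = net.forward()  # shape: (1, 1, N, 7)
    best_conf, best_box = 0.0, None
    for i in range(detections.shape[2]):
        conf = float(detections[0, 0, i, 2])
        cls = int(detections[0, 0, i, 1])
        if cls == PERSON_CLASS_ID and conf > max(conf_threshold, best_conf):
            box = detections[0, 0, i, 3:7] * np.array([w, h, w, h])
            best_conf, best_box = conf, box.astype(int)
    return best_box

def follow_command(box, frame_w, frame_h,
                   target_area=0.15, k_fwd=2.0, k_yaw=1.5):
    """Map a bounding box to body-frame velocities (vx, vy, wz).

    The horizontal offset of the box centre drives the yaw rate, and the
    box area (a rough proxy for distance) drives the forward speed.
    Gains and the target area fraction are illustrative.
    """
    x1, y1, x2, y2 = box
    err_x = ((x1 + x2) / 2.0 - frame_w / 2.0) / (frame_w / 2.0)  # -1..1
    area = (x2 - x1) * (y2 - y1) / float(frame_w * frame_h)
    vx = k_fwd * (target_area - area)   # approach or back off
    wz = -k_yaw * err_x                 # rotate to keep the person centred
    return vx, 0.0, wz

def mecanum_wheel_speeds(vx, vy, wz, lx=0.20, ly=0.15, r=0.05):
    """Inverse kinematics of a four-mecanum-wheel platform (X layout).

    vx, vy are body-frame linear velocities (m/s), wz the yaw rate (rad/s);
    lx, ly are half the wheelbase and track, r the wheel radius (assumed
    values). Returns wheel angular speeds (fl, fr, rl, rr) in rad/s.
    """
    k = lx + ly
    fl = (vx - vy - k * wz) / r
    fr = (vx + vy + k * wz) / r
    rl = (vx + vy - k * wz) / r
    rr = (vx - vy + k * wz) / r
    return fl, fr, rl, rr

In practice, the resulting wheel speeds would be sent to the motor drivers each frame, and a recognition step would gate the controller so that only the authorized user is followed.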



Acknowledgements

This research is funded by the Hanoi University of Science and Technology (HUST) under project number T2022-PC-005.

Author information

Corresponding author

Correspondence to Manh Linh Nguyen.


Copyright information

© 2023 The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.

About this paper


Cite this paper

Nguyen, T.C., Bui, T.N., Nguyen, V.N., Nguyen, D.P., Nguyen, C.M., Nguyen, M.L. (2023). Interactive Control Between Human and Omnidirectional Mobile Robot: A Vision-Based Deep Learning Approach. In: Nguyen, T.D.L., Verdú, E., Le, A.N., Ganzha, M. (eds) Intelligent Systems and Networks. ICISN 2023. Lecture Notes in Networks and Systems, vol 752. Springer, Singapore. https://doi.org/10.1007/978-981-99-4725-6_66


  • DOI: https://doi.org/10.1007/978-981-99-4725-6_66

  • Publisher Name: Springer, Singapore

  • Print ISBN: 978-981-99-4724-9

  • Online ISBN: 978-981-99-4725-6

  • eBook Packages: Computer Science (R0)
