
Real-time Machine Vision System for the Visually Impaired

  • Original Research
  • Published:
SN Computer Science

A Correction to this article was published on 07 May 2024


Abstract

This work provides a robust assistive system that uses a conversational chatbot interface to guide people who are blind during outdoor navigation. The system also makes the user aware of situations where people nearby are not wearing masks. We propose and analyze the effectiveness of a system consisting of a mask detector, a common-object detector and an obstacle avoidance module. The system can be hosted entirely on a secure remote server and used on any smartphone with stable internet connectivity, with all processing handled by the server. The responses generated by each sub-system are transmitted back to the client, where they are read out to the user. The proposed system detects masks with an accuracy of 99.51% and detects objects with a success rate of 80.36%.
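The client-server design summarized above (all inference on a remote server, only audio responses on the phone) can be illustrated with a minimal sketch. This is an assumption-laden illustration rather than the authors' code: the /detect endpoint, the run_detectors placeholder, and the choice of Flask and requests are ours, and the paper's actual detectors (YOLOv4-based object detection, the mask classifier, obstacle avoidance) are represented by a single stub.

```python
# Minimal sketch of the client-server flow described in the abstract.
# The endpoint name, helper names and transport libraries are assumptions
# for illustration; the paper's detection models are not reproduced here.

from flask import Flask, request, jsonify
import requests

app = Flask(__name__)


def run_detectors(image_bytes: bytes) -> str:
    """Placeholder for the server-side sub-systems (mask detection,
    common-object detection, obstacle avoidance). Returns the text
    the client reads out to the user."""
    # In the real system: decode the frame, run each model, merge results.
    return "Person ahead, about two metres away, wearing a mask."


@app.route("/detect", methods=["POST"])
def detect():
    # The smartphone client uploads one camera frame per request.
    frame = request.files["frame"].read()
    return jsonify({"message": run_detectors(frame)})


def query_server(jpeg_path: str,
                 server_url: str = "http://localhost:5000/detect") -> str:
    """Client-side helper: send a frame, return the spoken-style reply.
    On the phone, the returned text would be passed to text-to-speech."""
    with open(jpeg_path, "rb") as f:
        resp = requests.post(server_url, files={"frame": f}, timeout=10)
    resp.raise_for_status()
    return resp.json()["message"]


if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5000)
```

Keeping the models server-side is what allows the client to be any smartphone with stable connectivity: the phone only captures frames and speaks the returned text.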


Data availability

Not applicable.


Acknowledgements

Not applicable.

Funding

The authors did not receive support from any organization for the submitted work.

Author information

Corresponding author

Correspondence to A. N. Krishna.

Ethics declarations

Conflict of interest

The authors declare no conflict of interest.

Animal and human rights

This article does not contain any studies with human participants performed by any of the authors.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

The original online version of this article was revised because the third author's name was incorrect; it has been corrected from A. C. Atul M. Bharadwaj to Atul M. Bharadwaj.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.


About this article


Cite this article

Krishna, A.N., Chaitra, Y.L., Bharadwaj, A.M. et al. Real-time Machine Vision System for the Visually Impaired. SN COMPUT. SCI. 5, 399 (2024). https://doi.org/10.1007/s42979-024-02741-4


  • Received:

  • Accepted:

  • Published:

  • DOI: https://doi.org/10.1007/s42979-024-02741-4

Keywords

Navigation