Abstract
This work presents a robust assistive system that uses a conversational chatbot interface to guide people who are blind during outdoor navigation. The system also alerts the user when people nearby are not wearing masks. We propose and evaluate a system comprising a mask detector, a common-object detector, and an obstacle avoidance module. The system can be hosted entirely on a secure remote server and used from any smartphone with stable internet connectivity, with all processing handled by the server; the responses generated by each sub-system are transmitted back to the client, where the user listens to them. The proposed system achieves a mask-detection accuracy of 99.51% and an object-detection success rate of 80.36%.
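The server-side flow described above can be illustrated with a minimal sketch: each frame uploaded by the smartphone client is passed through the mask detector, the object detector, and the obstacle module, and their responses are merged into a single message the client reads aloud. All names here are hypothetical and the detectors are stubs; the paper does not specify the system's actual APIs.

```python
from dataclasses import dataclass
from typing import Callable, List


@dataclass
class Frame:
    """Placeholder for an image frame uploaded by the client."""
    data: bytes


def detect_masks(frame: Frame) -> str:
    # Stub: the real system would run a trained mask classifier here.
    return "person without mask ahead"


def detect_objects(frame: Frame) -> str:
    # Stub: the real system might run a YOLO-style detector here.
    return "car on the left"


def detect_obstacles(frame: Frame) -> str:
    # Stub: the real system would estimate free space ahead here.
    return "clear path ahead"


def handle_frame(frame: Frame,
                 subsystems: List[Callable[[Frame], str]]) -> str:
    """Run each sub-system on the frame and join the responses into
    one message that the client plays back to the user."""
    return "; ".join(fn(frame) for fn in subsystems)


if __name__ == "__main__":
    message = handle_frame(Frame(b""),
                           [detect_masks, detect_objects, detect_obstacles])
    print(message)
```

In a deployed system, `handle_frame` would sit behind an HTTP endpoint so the smartphone app only uploads frames and plays back the returned text.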
Data availability
Not applicable.
Change history
07 May 2024
A Correction to this paper has been published: https://doi.org/10.1007/s42979-024-02919-w
Acknowledgements
Not applicable.
Funding
The authors did not receive support from any organization for the submitted work.
Ethics declarations
Conflict of interest
The authors declare no conflict of interest.
Animal and human rights
This article does not contain any studies with human participants performed by any of the authors.
Additional information
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
The original online version of this article was revised because the third author's name was incorrect: it has been corrected from A. C. Atul M. Bharadwaj to Atul M. Bharadwaj.
Rights and permissions
Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.
About this article
Cite this article
Krishna, A.N., Chaitra, Y.L., Bharadwaj, A.M. et al. Real-time Machine Vision System for the Visually Impaired. SN COMPUT. SCI. 5, 399 (2024). https://doi.org/10.1007/s42979-024-02741-4