Abstract
This paper presents an assistive system for hearing-impaired people that conveys auditory information through visual images to support effective communication. The system integrates speech recognition, morphological analysis, and image generation components, implemented on an assistive robot platform. Experiments validated each of these components, and the results demonstrate that the system can accurately convey speech content together with its surrounding visual information. This research contributes to assistive technologies for individuals with hearing impairments, enhancing their communication abilities and improving their daily lives.
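The three-stage pipeline described in the abstract (speech recognition, then morphological analysis, then image generation) can be sketched as follows. This is a minimal illustration, not the authors' implementation: every function name, the stopword-based keyword filter, and the sample transcript are assumptions; a real system would call an ASR engine in stage 1, a proper morphological analyzer in stage 2, and a text-to-image model in stage 3.

```python
def recognize_speech(audio: bytes) -> str:
    """Stage 1: speech recognition. Stubbed here; a real system
    would pass the audio to an ASR engine."""
    return "a red car is passing by"


def extract_keywords(text: str) -> list[str]:
    """Stage 2: morphological analysis. Approximated by splitting
    on whitespace and dropping function words; a real system would
    use a morphological analyzer to keep content words."""
    function_words = {"a", "an", "the", "is", "are", "by"}
    return [w for w in text.split() if w not in function_words]


def build_image_prompt(keywords: list[str]) -> str:
    """Stage 3: image generation. Here we only build the text
    prompt that would be fed to a text-to-image model."""
    return ", ".join(keywords)


def pipeline(audio: bytes) -> str:
    """End-to-end: audio -> transcript -> keywords -> image prompt."""
    transcript = recognize_speech(audio)
    keywords = extract_keywords(transcript)
    return build_image_prompt(keywords)


print(pipeline(b""))  # -> "red, car, passing"
```

The point of the sketch is the data flow between the stages, not the stub logic inside each one; each stage can be swapped for a production component without changing the pipeline function.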
Copyright information
© 2024 ICST Institute for Computer Sciences, Social Informatics and Telecommunications Engineering
Cite this paper
Zhang, B., Aoki, T., Matsuura, K., Lim, Ho. (2024). Development of an Image Generation System for Expressing Auditory Environment to Hearing-Impaired People. In: Li, J., Zhang, B., Ying, Y. (eds) 6GN for Future Wireless Networks. 6GN 2023. Lecture Notes of the Institute for Computer Sciences, Social Informatics and Telecommunications Engineering, vol 553. Springer, Cham. https://doi.org/10.1007/978-3-031-53401-0_2
Print ISBN: 978-3-031-53400-3
Online ISBN: 978-3-031-53401-0