Sign Detection Using an N-Gram Language Model and MobileNet

Conference paper
Advances in Distributed Computing and Machine Learning (ICADCML 2024)

Abstract

Typically, human beings have no trouble communicating with one another through speech, gestures, body language, reading, and writing. However, people with speech impairments rely solely on sign language, which makes it harder for them to interact with the majority. This creates a need for sign language recognition systems that can understand sign language and convert it into spoken or written language, and vice versa, so that others can understand. Existing systems for this purpose are limited, expensive, and difficult to use, and relatively little work has been done for Indian Sign Language. The main objective of this paper is therefore to improve the accuracy of sign detection for Indian Sign Language using a language model. To this end, an N-gram sign model is combined with a MobileNet-based machine learning system. Simulation results show that sign detection accuracy improves from about 85% with the machine learning method alone to nearly 91% when the prediction model is applied after it.
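The combination described in the abstract can be illustrated with a minimal sketch: a MobileNet classifier produces per-frame softmax scores over sign classes, and an N-gram (here, bigram) model over sign sequences re-scores those candidates using the previously recognized sign as context. The sign vocabulary, probability values, the `bigram_logprob` table, and the `rescore` helper below are illustrative assumptions, not the paper's implementation.

```python
# Hypothetical sketch: re-scoring MobileNet sign predictions with a bigram model.
# All class names, probabilities, and weights are illustrative assumptions.
import math

# Bigram log-probabilities P(current sign | previous sign), assumed to be
# estimated from a corpus of glossed sign-language sentences.
bigram_logprob = {
    ("I", "EAT"): math.log(0.4),
    ("I", "GO"): math.log(0.3),
    ("I", "ICE"): math.log(0.05),
}

def rescore(prev_sign, cnn_probs, lm_weight=0.5, floor=1e-6):
    """Combine per-frame CNN (e.g. MobileNet) softmax scores with bigram
    language-model scores and return signs ranked by the combined score."""
    scored = []
    for sign, p in cnn_probs.items():
        lm = bigram_logprob.get((prev_sign, sign), math.log(floor))
        score = (1.0 - lm_weight) * math.log(max(p, floor)) + lm_weight * lm
        scored.append((score, sign))
    return sorted(scored, reverse=True)

# Example: the CNN alone slightly prefers "ICE" over "EAT", but the bigram
# prior for the context "I ..." flips the ranking in favour of "EAT".
cnn_probs = {"EAT": 0.35, "ICE": 0.40, "GO": 0.25}
print(rescore("I", cnn_probs))
```

In this sketch the interpolation weight `lm_weight` trades off the visual evidence against the language-model prior; the reported accuracy gain in the paper comes from applying the prediction model after the MobileNet classifier in a similar spirit.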


Author information

Corresponding author

Correspondence to Trilochan Panigrahi.


Copyright information

© 2024 The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.

About this paper


Cite this paper

Gauns, K., Lawande, A., Panigrahi, T. (2024). Sign Detection Using an N-Gram Language Model and MobileNet. In: Nanda, U., Tripathy, A.K., Sahoo, J.P., Sarkar, M., Li, KC. (eds) Advances in Distributed Computing and Machine Learning. ICADCML 2024. Lecture Notes in Networks and Systems, vol 955. Springer, Singapore. https://doi.org/10.1007/978-981-97-1841-2_29
