
An unsupervised statistical representation learning method for human activity recognition

  • Original Paper
  • Published in: Signal, Image and Video Processing

Abstract

With the evolution of smart devices such as smartphones, smartwatches, and other wearables, motion sensors have been integrated into these devices to collect data and analyze human activities. Consequently, sensor-based Human Activity Recognition (HAR) has emerged as a significant research area in ubiquitous and wearable computing. This paper presents a novel approach that employs Latent Dirichlet Allocation (LDA) to extract meaningful representations from activity signals. The method first transforms the activity signal, a sequence of samples, into a sequence of discrete symbols using vector quantization. LDA then embeds the symbol sequence into a fixed-length representation vector, which a classifier finally assigns to an activity class. The effectiveness of the proposed method is evaluated on the UNIMIB-SHAR dataset. Experimental results demonstrate competitive accuracy and F1-score compared with existing methods. Moreover, our method has a more lightweight architecture and incurs lower computational cost than deep learning-based approaches. These findings contribute to the advancement of HAR and hold practical implications for HAR systems.
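The pipeline described in the abstract can be sketched with off-the-shelf scikit-learn components: k-means as the vector quantizer (codebook of nearest-neighbor symbols), `LatentDirichletAllocation` for the fixed-length embedding, and an SVM as the final classifier. This is a minimal illustration on synthetic data, not the paper's implementation: the window length, codebook size, topic count, and classifier choice below are assumptions for demonstration only.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Toy stand-in for windowed tri-axial accelerometer data:
# 200 windows, each a sequence of 50 frames of 3-axis samples.
n_windows, frames_per_window, n_axes = 200, 50, 3
signals = rng.normal(size=(n_windows, frames_per_window, n_axes))
labels = rng.integers(0, 4, size=n_windows)  # 4 hypothetical activity classes

# 1) Vector quantization: learn a codebook over all frames, then map
#    each frame to the index of its nearest codeword (a discrete symbol).
codebook_size = 16
frames = signals.reshape(-1, n_axes)
vq = KMeans(n_clusters=codebook_size, n_init=10, random_state=0).fit(frames)
symbols = vq.predict(frames).reshape(n_windows, frames_per_window)

# 2) Bag-of-symbols counts per window: the "document-term" matrix,
#    treating each window as a document over the symbol vocabulary.
counts = np.zeros((n_windows, codebook_size), dtype=int)
for i, seq in enumerate(symbols):
    counts[i] = np.bincount(seq, minlength=codebook_size)

# 3) LDA embeds each symbol sequence into a fixed-length
#    topic-proportion vector (here, 8 topics).
lda = LatentDirichletAllocation(n_components=8, random_state=0)
representations = lda.fit_transform(counts)

# 4) Any lightweight classifier can consume the fixed-length vectors.
clf = SVC().fit(representations, labels)
print(representations.shape)  # (200, 8): one fixed-length vector per window
```

Because the representation is a fixed-length topic distribution rather than raw samples, the downstream classifier stays small, which is consistent with the lightweight, low-cost design the abstract emphasizes.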



Data availability and materials

The dataset analyzed in this study, UNIMIB-SHAR, is publicly available.


Funding

The authors declare that no funds, grants, or other support were received during the preparation of this manuscript.

Author information


Contributions

All three authors contributed equally to every aspect of the manuscript.

Corresponding author

Correspondence to Bagher BabaAli.

Ethics declarations

Conflict of interest

The authors have no relevant financial or non-financial interests to disclose.

Ethics approval

Not applicable.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.


About this article


Cite this article

Abdi, M.F., BabaAli, B. & Momeni, S. An unsupervised statistical representation learning method for human activity recognition. SIViP (2024). https://doi.org/10.1007/s11760-024-03374-z

