Fast Face Features Extraction Based on Deep Neural Networks for Mobile Robotic Platforms

  • Conference paper
  • First Online:
Interactive Collaborative Robotics (ICR 2020)

Abstract

The concept of the Smart Environment (SE) provides great benefits to its users: interactive information services (corporate TV, video communication, navigation and localization services) and mobile autonomous entities (mobile robotic platforms, quadcopters, anthropomorphic robots, etc.). To deliver personalized and genuinely useful information, the environment must take the details of each user's behavior into account, which makes identifying a person from an image of their face an urgent task. Face recognition is one of the most common ways to identify a user. Training an accurate classifier, especially one based on deep neural networks, requires as large a dataset as possible, and assembling a representative dataset manually is expensive: every person would have to be photographed from every possible angle under every possible lighting condition. This is why generating synthetic training data while using a minimum of real data is so important. In this paper, several face feature extractors based on deep learning models are tested in order to determine their advantages and disadvantages in the context of training a classifier for face recognition and of clustering features to track unique people in the SE.
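The embedding-plus-clustering pipeline described in the abstract can be illustrated with a short sketch. The snippet below is only a minimal assumed example: it uses the dlib-based `face_recognition` package as a stand-in for the deep extractors compared in the paper, and the folder name, file pattern, and DBSCAN distance threshold are illustrative assumptions rather than the authors' settings.

```python
# Minimal sketch: extract deep face embeddings and cluster them to count
# unique people. The `face_recognition` package (dlib ResNet, 128-D
# embeddings) is used purely as an illustrative stand-in for the
# extractors evaluated in the paper.
from pathlib import Path

import numpy as np
import face_recognition                 # pip install face_recognition
from sklearn.cluster import DBSCAN


def extract_embeddings(image_dir: str):
    """Return one 128-D embedding per detected face in each image."""
    paths, embeddings = [], []
    for path in sorted(Path(image_dir).glob("*.jpg")):   # assumed folder layout
        image = face_recognition.load_image_file(path)
        for encoding in face_recognition.face_encodings(image):
            paths.append(path)
            embeddings.append(encoding)
    return paths, np.array(embeddings)


def cluster_identities(embeddings: np.ndarray) -> np.ndarray:
    """Group embeddings into identities; label -1 marks unmatched faces."""
    # eps is the largest embedding distance allowed between faces of the same
    # person; 0.5 is a common starting point for dlib embeddings (tune per extractor).
    return DBSCAN(eps=0.5, min_samples=2, metric="euclidean").fit_predict(embeddings)


if __name__ == "__main__":
    paths, embeddings = extract_embeddings("faces/")      # hypothetical directory
    labels = cluster_identities(embeddings)
    print(f"Found {len(set(labels) - {-1})} unique people in {len(paths)} face crops")
```

Density-based clustering is a natural fit for tracking unique visitors because the number of distinct people appearing in the environment is not known in advance.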

Acknowledgements

This research is supported by the Russian Science Foundation (RSF), project №16-19-00044П.

Author information

Corresponding author

Correspondence to Maksim Letenkov.

Copyright information

© 2020 Springer Nature Switzerland AG

About this paper

Cite this paper

Letenkov, M., Levonevskiy, D. (2020). Fast Face Features Extraction Based on Deep Neural Networks for Mobile Robotic Platforms. In: Ronzhin, A., Rigoll, G., Meshcheryakov, R. (eds) Interactive Collaborative Robotics. ICR 2020. Lecture Notes in Computer Science, vol 12336. Springer, Cham. https://doi.org/10.1007/978-3-030-60337-3_20

  • DOI: https://doi.org/10.1007/978-3-030-60337-3_20

  • Published:

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-60336-6

  • Online ISBN: 978-3-030-60337-3

  • eBook Packages: Computer Science, Computer Science (R0)
