Abstract
During robot-assisted harvesting, a farmer alternates between two actions: harvesting the crop and placing it on the robot. Meanwhile, the robot follows the farmer, and the farmer is more comfortable when the following distance changes according to the farmer's current action. In this study, we propose a method for recognizing action transitions using the results of principal component analysis (PCA) of the farmer's skeletal information. We confirmed that the proposed method recognizes the transition to the placing action at the start of that action from the values of the first through third principal components. In addition, the method recognizes action transitions even for data that was not used to compute the eigenvectors for PCA. These results confirm that the proposed method is sufficient for following the farmer at a distance that depends on the farmer's actions.
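The pipeline implied by the abstract (compute PCA eigenvectors from skeletal keypoint data, project new frames onto the first three principal components, and decide from those values whether an action transition has begun) can be sketched as follows. This is a minimal illustration, not the paper's implementation: the synthetic training array, the 17-joint layout, and the percentile-band transition rule are all assumptions introduced here.

```python
import numpy as np

# Hypothetical training data: rows are frames, columns are flattened 2-D
# skeleton keypoints (e.g. 17 joints x 2 coordinates = 34 features).
rng = np.random.default_rng(0)
train = rng.normal(size=(200, 34))

# PCA via eigendecomposition of the covariance matrix of the training frames.
mean = train.mean(axis=0)
cov = np.cov(train - mean, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(cov)
order = np.argsort(eigvals)[::-1]      # sort eigenvectors by descending variance
components = eigvecs[:, order[:3]]     # keep the 1st-3rd principal components

def project(skeleton_frame):
    """Project one skeleton frame onto the first three principal components."""
    return (skeleton_frame - mean) @ components

# Illustrative transition rule (an assumption, not the paper's exact criterion):
# flag a transition when the projected values leave a band learned from the
# frames of the ongoing action.
scores = (train - mean) @ components
lo, hi = np.percentile(scores, [1, 99], axis=0)

def is_transition(skeleton_frame):
    """Return True if any of the 1st-3rd component values leaves the band."""
    s = project(skeleton_frame)
    return bool(np.any((s < lo) | (s > hi)))
```

The key property the abstract relies on is that the eigenvectors are fixed after training, so `project` and `is_transition` also apply to frames that were not used to compute them.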
Acknowledgements
The robot and experimental fields were provided by the DONKEY Corporation.
Copyright information
© 2024 The Author(s), under exclusive license to Springer Nature Switzerland AG
Cite this paper
Ooka, C., Ohya, A., Yorozu, A. (2024). Action Transition Recognition Using Principal Component Analysis for Agricultural Robot Following. In: Lee, SG., An, J., Chong, N.Y., Strand, M., Kim, J.H. (eds) Intelligent Autonomous Systems 18. IAS 2023. Lecture Notes in Networks and Systems, vol 795. Springer, Cham. https://doi.org/10.1007/978-3-031-44851-5_14
DOI: https://doi.org/10.1007/978-3-031-44851-5_14
Publisher Name: Springer, Cham
Print ISBN: 978-3-031-44850-8
Online ISBN: 978-3-031-44851-5
eBook Packages: Intelligent Technologies and Robotics (R0)