Interfacing Agents through Boundaries of Interaction Dynamics

  • Conference paper
Robotics Research

Abstract

We propose a scenario for the first stages of understanding the basic principles of human-robot interfacing, a problem that applies across many tasks. We view the problem as the structure of coupling between the interaction dynamics of agents. Through a case study of full-body dynamic motion of a simulated humanoid, we point out that there is a sparse global structure, defined as boundaries of dynamics. We propose that intervention in the current interaction dynamics, as well as information extraction, is best made at such boundaries. Then, through learning experiments with mobile robot navigation and an active vision system, we show non-physical forms of an emergent global structure, which arise from interference between multiple sensory-motor flows. A neural network for spatio-temporal correlation learning is presented as a candidate mechanism for capturing and retrieving this global structure. Lastly, an experiment on visual mimicking of human motion with our humanoid robot is briefly presented as an integrating platform.
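
The abstract names a neural network for spatio-temporal correlation learning as a candidate mechanism but does not specify its form. The following is a minimal, hypothetical sketch of the general idea only: Hebbian-style accumulation of correlations between time-lagged sensory-motor states, so that a partial cue can later retrieve the temporally associated pattern. The class name, fixed lag, learning rate, and toy data are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

# Hypothetical sketch of spatio-temporal correlation learning:
# a weight matrix accumulates correlations between the state at time t
# and the state at time t - lag, so cueing with a state retrieves the
# pattern that tends to follow it. Not the paper's actual network.
class SpatioTemporalCorrelator:
    def __init__(self, n_units, lag=1, lr=0.01):
        self.W = np.zeros((n_units, n_units))  # lagged correlation weights
        self.lag = lag
        self.lr = lr

    def learn(self, sequence):
        """Accumulate Hebbian correlations between x(t - lag) and x(t)."""
        for t in range(self.lag, len(sequence)):
            past, present = sequence[t - self.lag], sequence[t]
            # outer-product update on time-lagged state pairs
            self.W += self.lr * np.outer(present, past)

    def predict(self, x):
        """Retrieve the pattern temporally associated with the cue x."""
        return np.tanh(self.W @ x)


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Toy sensory-motor flow: a noisy cyclic sequence over 4 prototype states
    prototypes = np.eye(4)
    sequence = [prototypes[t % 4] + 0.05 * rng.standard_normal(4)
                for t in range(400)]

    net = SpatioTemporalCorrelator(n_units=4, lag=1, lr=0.01)
    net.learn(sequence)

    # Cueing with state 0 should retrieve (most strongly activate) state 1
    print(np.round(net.predict(prototypes[0]), 2))
```

The design choice illustrated here is that temporal structure is stored as pairwise lagged correlations rather than as an explicit model, which is one simple way a network could both capture and later retrieve a global structure emerging from sensory-motor flow.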

Copyright information

© 2000 Springer-Verlag London

About this paper

Cite this paper

Kuniyoshi, Y., Nagakubo, A., Berthouze, L., Cheng, G. (2000). Interfacing Agents through Boundaries of Interaction Dynamics. In: Hollerbach, J.M., Koditschek, D.E. (eds) Robotics Research. Springer, London. https://doi.org/10.1007/978-1-4471-0765-1_35

  • DOI: https://doi.org/10.1007/978-1-4471-0765-1_35

  • Publisher Name: Springer, London

  • Print ISBN: 978-1-4471-1254-9

  • Online ISBN: 978-1-4471-0765-1

  • eBook Packages: Springer Book Archive
