Neuroinformatics Group


From action capture to a database of physics-based manual interaction

For as long as the idea of robots has been around, researchers have been studying ways to empower them with the skills needed to interact with their environment in an intelligent way. A large portion of a human's interaction with the environment is done using the hands. With this in mind, we are studying ways to enable robots to perform manual tasks that humans take for granted. Studying patterns of manual interaction in humans should provide useful insights into how such actions are performed. The goal of the project is to create an incrementally growing database of manual interactions to help put manual intelligence research on a firmer empirical basis. The areas of computational linguistics and natural language processing have benefited from databases such as WordNet [1]. We wish to create a similar database for human hand interactions.

Populating such a database necessitates the study of manual interactions in humans. Using technologies such as Vicon motion capture [2] and the Immersion Wireless CyberGlove [3] (see Figures 1 and 2), the database will be populated with an array of multimodal information including 3D geometry data, joint-angle data, tactile sensor data, stereo vision, sound and eye-tracking data. These multimodal information sources will allow for the development of physics-based models representing manual interaction. Looking further ahead, it is our hope that the results of this research can be used to help real robots carry out complex tasks of the type that humans perform with ease. The creation of the database can be thought of as the first step, namely observation, in the process of imitation learning [4].

Figure 1. Capturing process
Figure 2. Manual interaction
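To give a concrete impression of what one database entry might hold, the following is a minimal Python sketch of a record for a single captured interaction. All field names (marker_positions, joint_angles, gaze, and so on) are hypothetical illustrations, not the project's actual schema.

    # A minimal sketch (hypothetical field names) of one multimodal
    # interaction record in such a database.
    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class ManualInteractionRecord:
        """One captured manual interaction with synchronized modalities."""
        label: str                             # e.g. "unscrew bottle cap"
        frame_rate_hz: float                   # common sampling rate after resampling
        marker_positions: List[List[float]]    # per frame: 3D marker coordinates (motion capture)
        joint_angles: List[List[float]]        # per frame: data-glove joint angles in degrees
        tactile: List[List[float]]             # per frame: tactile sensor readings
        gaze: List[List[float]]                # per frame: 2D gaze coordinates from eye tracking
        video_files: List[str] = field(default_factory=list)  # paths to stereo video
        audio_file: str = ""                   # path to the recorded sound track

Keeping all modalities time-indexed at a common frame rate is one plausible design choice; it makes cross-modal alignment trivial at the cost of resampling the raw sensor streams.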

This project naturally leads us to several important scientific questions: How should manual interactions be represented for storage, comparison and retrieval? What are suitable similarity measures for manual interactions? What are the elementary building blocks of a manual interaction? How do manual interactions motivated on the perceptual, control and task levels differ? Answering these questions will require skills from both psychology and computer science.
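To make the similarity question concrete, the sketch below implements one purely illustrative candidate measure: dynamic time warping (DTW) over joint-angle trajectories. DTW is an assumption here, one of many possible measures, and is not a method prescribed by the project.

    # Illustrative only: DTW as one candidate similarity measure for
    # manual interactions, comparing two joint-angle trajectories.
    import numpy as np

    def dtw_distance(a: np.ndarray, b: np.ndarray) -> float:
        """DTW distance between two trajectories of shape (frames, joints)."""
        n, m = len(a), len(b)
        cost = np.full((n + 1, m + 1), np.inf)
        cost[0, 0] = 0.0
        for i in range(1, n + 1):
            for j in range(1, m + 1):
                d = np.linalg.norm(a[i - 1] - b[j - 1])   # per-frame joint-space distance
                cost[i, j] = d + min(cost[i - 1, j],      # insertion
                                     cost[i, j - 1],      # deletion
                                     cost[i - 1, j - 1])  # match
        return float(cost[n, m])

Under such a measure, two demonstrations of the same grasp recorded at different speeds would still score as similar, since DTW aligns the trajectories in time before comparing individual frames.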


  1. Fellbaum, C. (Ed.): WordNet: An Electronic Lexical Database (Language, Speech, and Communication). MIT Press, 1998.
  2. Vicon Motion Systems, OMG plc. http://www.vicon.com
  3. Immersion Wireless CyberGlove. http://www.immersion.com/3d/products/cyber_glove.php
  4. Schaal, S.: Is imitation learning the route to humanoid robots? Trends in Cognitive Sciences 3(6), 233-242, 1999.


Related People

Ritter, Helge (Supervisor)