Integrating context-free and context-dependent attentional mechanisms for gestural object reference |
| |
Authors: | Gunther Heidemann, Robert Rae, Holger Bekel, Ingo Bax, Helge Ritter |
| |
Affiliation: | (1) Neuroinformatics Group, Faculty of Technology, Bielefeld University, P.O. Box 10 01 31, 33501 Bielefeld, Germany |
| |
Abstract: | We present a vision system for human-machine interaction based on a small wearable camera mounted on glasses. The camera views the area in front of the user, especially the hands. To evaluate hand movements for pointing gestures and to recognise object references, we introduce an approach that integrates bottom-up generated feature maps with top-down propagated recognition results. Modules for context-free focus of attention work in parallel with the hand gesture recognition. In contrast to other approaches, the fusion of the two branches takes place on the sub-symbolic level. This method facilitates both the integration of different modalities and the generation of auditory feedback. |
Published online: 5 October 2004 |
Robert Rae: Now at PerFact Innovation, Lampingstr. 8, 33615 Bielefeld, Germany |
| |
Keywords: | Human-machine interaction; Gesture recognition; Neural networks; Focus of attention; Auditory feedback |
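
The abstract describes fusing bottom-up (context-free) feature maps with a top-down (context-dependent) recognition map at the sub-symbolic level, i.e. by combining the maps themselves rather than symbolic hypotheses. The following is only an illustrative sketch of that general idea, not the authors' actual algorithm; the function name, the weighted-sum combination, and the multiplicative top-down modulation are all assumptions for illustration.

```python
import numpy as np

def fuse_attention(bottom_up_maps, top_down_map, weights=None):
    """Illustrative sub-symbolic fusion: combine context-free feature
    maps by weighted summation, then modulate the result with a
    top-down map derived from recognition results (hypothetical scheme)."""
    if weights is None:
        weights = np.ones(len(bottom_up_maps)) / len(bottom_up_maps)
    # Weighted sum of the bottom-up feature maps (context-free saliency).
    saliency = sum(w * m for w, m in zip(weights, bottom_up_maps))
    # Multiplicative top-down modulation keeps the fusion on the
    # map level rather than on a symbolic level.
    fused = saliency * top_down_map
    # The focus of attention is the location of the maximum response.
    focus = np.unravel_index(np.argmax(fused), fused.shape)
    return fused, focus
```

In this sketch, a top-down map close to zero at a location suppresses even a strong bottom-up response there, so recognition context can veto context-free saliency, as in a pointing-gesture scenario where only the region near the hand is relevant.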
|