Scalable and robust multi-people head tracking by combining distributed multiple sensors |
| |
Authors: | Yusuke Matsumoto, Toshikazu Wada, Shuichi Nishio, Takehiro Miyashita, Norihiro Hagita |
| |
Affiliation: | 1. ATR Intelligent Robotics and Communication Laboratories, Kyoto, Japan |
| |
Abstract: | In this paper, we present a robust 3D human-head tracking method. 3D head positions are essential for robots interacting with people: natural interaction behaviors such as making eye contact require head positions. Past research with laser range finders (LRFs) has succeeded in tracking 2D human positions with high accuracy in real time. However, LRF trackers cannot track multiple 3D head positions. On the other hand, trackers using multi-viewpoint images can obtain 3D head positions, but vision-based trackers generally lack robustness and scalability, especially in open environments where lighting conditions vary over time. To achieve robust real-time 3D tracking, we propose a new method that combines an LRF tracker and a multi-camera tracker. We combine the results from both trackers, using the LRF results as maintenance information for the multi-camera tracker. Through an experiment in a real environment, we show that our method outperforms existing methods in both robustness and scalability. |
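The maintenance step described in the abstract can be illustrated with a minimal sketch. This is not the authors' implementation; the function name, the gating threshold, and the coordinate conventions are all assumptions. It shows one plausible interpretation: 2D LRF ground-plane tracks confirm existing 3D camera head tracks, flag unconfirmed ones for re-initialisation, and seed new tracks where the LRF sees a person the cameras missed.

```python
import math

def maintain_camera_tracks(lrf_tracks, camera_tracks, gate=0.5):
    """Hypothetical sketch: use 2D LRF positions to maintain 3D camera tracks.

    lrf_tracks:    list of (x, y) ground-plane positions from the LRF tracker
    camera_tracks: list of (x, y, z) head positions from the multi-camera tracker
    gate:          association distance threshold in metres (assumed value)
    """
    maintained, lost = [], []
    for cx, cy, cz in camera_tracks:
        # Associate each camera track with the nearest LRF track on the ground plane.
        nearest = min(
            (math.hypot(cx - lx, cy - ly) for lx, ly in lrf_tracks),
            default=float("inf"),
        )
        if nearest <= gate:
            maintained.append((cx, cy, cz))   # confirmed by LRF evidence
        else:
            lost.append((cx, cy, cz))         # candidate for re-initialisation
    # LRF tracks with no camera counterpart seed new 3D tracks (head height unknown).
    new_seeds = [
        (lx, ly) for lx, ly in lrf_tracks
        if all(math.hypot(cx - lx, cy - ly) > gate for cx, cy, cz in camera_tracks)
    ]
    return maintained, lost, new_seeds
```

Under this reading, the LRF supplies the robustness (reliable 2D presence) while the cameras supply the third dimension, which matches the division of labour the abstract describes.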
| |
Keywords: | |
|