A human-machine cooperative system for generating sign language animation using thermal image

Authors: Taro Asada, Yasunari Yoshitomi, Risa Hayashi

Affiliations: (1) Dept. of Environmental Information, Graduate School of Human Environment Science, Kyoto Prefectural University, Kyoto, Japan; (2) Division of Environmental Sciences, Graduate School of Life and Environmental Sciences, Kyoto Prefectural University, 1-5 Nakaragicho, Shimogamo, Sakyo-ku, Kyoto 606-8522, Japan; (3) Kyoto Shinkin Bank, Kyoto, Japan
Abstract: We propose a new approach to generating sign language animation based on skin-region detection in infrared images. Conventional systems require many manual operations to generate animations that appropriately express personality and/or emotion. A promising way to reduce this workload is to manually refine an animation that has been generated automatically from a dynamic image of real motion. In the proposed method, a 3D CG model corresponding to a characteristic posture in sign language is generated automatically by pattern recognition on a thermal image, after which a hand part, prepared manually beforehand, is attached to the CG model. If necessary, the model can be replaced manually with a more appropriate model corresponding to training key frames, and/or refined manually. In our experiments, a person experienced in sign language recognized the Japanese sign language of 71 words expressed as animation with 88.3% accuracy, and three persons experienced in sign language recognized sign language animations representing three emotions (neutral, happy, and angry) with 88.9% accuracy.

This work was presented in part at the 13th International Symposium on Artificial Life and Robotics, Oita, Japan, January 31–February 2, 2008.
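The skin-region detection step described in the abstract can be sketched as a simple temperature-band threshold on a thermal image: pixels whose temperature falls within the human skin range are kept as candidate skin regions. The function name, the temperature band, and the synthetic frame below are illustrative assumptions, not the parameters used in the paper.

```python
import numpy as np

def detect_skin_regions(thermal, t_low=30.0, t_high=37.5):
    """Return a boolean mask of pixels in the skin-temperature band.

    `thermal` is a 2D array of per-pixel temperatures in degrees C.
    The band limits are illustrative defaults, not the paper's values.
    """
    return (thermal >= t_low) & (thermal <= t_high)

# Tiny synthetic 3x3 "thermal image" (degrees C): warm pixels in the
# skin band stand in for the hands/face, cool pixels for background.
frame = np.array([[22.0, 34.5, 21.0],
                  [33.0, 36.0, 35.5],
                  [20.0, 34.0, 19.5]])
mask = detect_skin_regions(frame)  # True where a pixel is in the band
```

In a full pipeline, the resulting mask would feed the pattern-recognition stage that selects a matching 3D CG posture model; that stage is beyond the scope of this sketch.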

Keywords: Japanese sign language; Thermal image; Computer graphics; Model fitting
Indexed in SpringerLink and other databases.