A method for expressing human posture as 3DCG using thermal image processing and 3D model fitting

Authors: T. Asada, Y. Yoshitomi

Affiliation:
1. Department of Environmental Information, Graduate School of Human Environment Science, Kyoto Prefectural University, Shimogamo, Sakyo-ku, Kyoto 606-8522, Japan
2. Division of Environmental Sciences, Graduate School of Life and Environmental Sciences, Kyoto Prefectural University, Shimogamo, Sakyo-ku, Kyoto 606-8522, Japan
Abstract: Imitation of human motion is a promising technique for the development of robots. Techniques such as motion-capture systems and data gloves are commonly used to analyze human motion. However, these methods involve (a) environmental restrictions, such as the need for two or more cameras and strict control of lighting, and (b) physical restrictions, such as the wearing of markers and/or data gloves, so they are far removed from recognizing human motion under natural conditions. In this article, we propose a method that produces three-dimensional computer graphics (3DCG) by transforming a feature vector of human posture on a thermal image into a 3DCG model. The 3DCG models used as training data are made by manual model fitting. Human models synthesized by our method are then evaluated geometrically in CG space; the average position error is about 10 cm. Such a relatively small error may be acceptable in some applications, e.g., 3DCG animation generation and the imitation of human motion by a robot. Our method imposes neither physical nor environmental restrictions, and the rotation angles at each joint obtained by it can be used by a robot to imitate human posture.
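The abstract describes mapping a posture feature vector extracted from a thermal image onto a training posture whose 3DCG model (and hence joint rotation angles) is known. A minimal sketch of this idea, assuming a simple nearest-neighbour lookup over manually fitted training postures — the feature dimension, joint names, and matching rule here are illustrative assumptions, not the authors' actual pipeline:

```python
import math

# Toy training data: (feature vector, joint rotation angles in degrees).
# In the paper these pairs would come from thermal images matched to
# manually fitted 3DCG models; the values below are placeholders.
TRAINING = [
    ([0.1, 0.9, 0.2], {"shoulder": 30.0, "elbow": 45.0}),
    ([0.8, 0.2, 0.5], {"shoulder": 80.0, "elbow": 10.0}),
    ([0.4, 0.4, 0.9], {"shoulder": 55.0, "elbow": 90.0}),
]

def nearest_posture(feature):
    """Return the joint angles of the training posture whose feature
    vector is closest (Euclidean distance) to the input feature."""
    def dist(f):
        return math.sqrt(sum((a - b) ** 2 for a, b in zip(feature, f)))
    _, angles = min(TRAINING, key=lambda pair: dist(pair[0]))
    return angles

angles = nearest_posture([0.15, 0.85, 0.25])
print(angles)  # matches the first training posture
```

The returned angles could then drive either a 3DCG skeleton or a robot's joints, which is the use the abstract suggests for the per-joint rotation angles.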
This article is indexed in SpringerLink and other databases.