Understanding user commands by evaluating fuzzy linguistic information based on visual attention |
| |
Authors: | A G Buddhika P Jayasekara Keigo Watanabe Kiyotaka Izumi |
| |
Affiliation: | (1) Department of Advanced Systems Control Engineering, Graduate School of Science and Engineering, Saga University, Saga, Japan |
| |
Abstract: | This article proposes a method for understanding user commands based on visual attention. Voice commands commonly include fuzzy linguistic terms such as “very little”, so a robot’s capacity to interpret such information is vital for effective human–robot interaction. However, the quantitative meaning of these terms depends strongly on the spatial arrangement of the surrounding environment. A visual attention system (VAS) is therefore introduced to evaluate fuzzy linguistic information according to environmental conditions, under the assumption that the distance corresponding to a particular fuzzy linguistic command depends on the spatial arrangement of the surrounding objects. A fuzzy-logic-based voice command evaluation system (VCES) is proposed to assess the uncertain information in user commands based on the average distance to the surrounding objects. The system is illustrated by a simulated object-manipulation task that rearranges the user’s working space, demonstrated with a PA-10 robot manipulator. |
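The abstract's core idea — mapping a fuzzy linguistic term to a crisp distance that scales with the average distance to surrounding objects — can be sketched as follows. This is a minimal illustration only; the membership functions, term names, and scaling rule are assumptions for exposition, not the paper's actual VCES design.

```python
# Hypothetical sketch of fuzzy-term evaluation scaled by environment.
# All fuzzy sets and the scaling rule are illustrative assumptions.

def triangular(x, a, b, c):
    """Triangular membership function on [a, c], peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

# Normalized fuzzy sets (on [0, 1]) for a few illustrative linguistic terms.
FUZZY_SETS = {
    "very little": (0.0, 0.1, 0.25),
    "little":      (0.1, 0.3, 0.5),
    "far":         (0.5, 0.8, 1.0),
}

def evaluate_fuzzy_distance(term, avg_object_distance):
    """Map a fuzzy linguistic term to a crisp distance (metres), assuming
    the term's meaning scales with the average distance to nearby objects."""
    a, b, c = FUZZY_SETS[term]
    # Centroid defuzzification by numeric integration over [0, 1].
    num = den = 0.0
    steps = 1000
    for i in range(steps + 1):
        x = i / steps
        mu = triangular(x, a, b, c)
        num += mu * x
        den += mu
    centroid = num / den if den else b
    return centroid * avg_object_distance

# Example: "move very little" when surrounding objects average 2.0 m away
# yields a proportionally small crisp displacement.
d = evaluate_fuzzy_distance("very little", 2.0)
```

The key property mirrored from the abstract is context dependence: the same command ("very little") defuzzifies to a larger displacement in a sparse environment than in a cluttered one, because the result is scaled by the average object distance.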
| |
Keywords: | |
|