Similar literature
20 similar documents retrieved
1.
Commanding a humanoid to move objects in a multimodal language
This article describes a study on a humanoid robot that moves objects at the request of its users. The robot understands commands in a multimodal language which combines spoken messages and two types of hand gesture. All of ten novice users directed the robot using gestures when they were asked to spontaneously direct the robot to move objects after learning the language for a short period of time. The success rate of multimodal commands was over 90%, and the users completed their tasks without trouble. They thought that gestures were preferable to, and as easy as, verbal phrases to inform the robot of action parameters such as direction, angle, step, width, and height. The results of the study show that the language is fairly easy for nonexperts to learn, and can be made more effective for directing humanoids to move objects by making the language more sophisticated and improving our gesture detector.

2.
This article describes a multimodal command language for home robot users, and a robot system which interprets users' messages in the language through microphones, visual and tactile sensors, and control buttons. The command language comprises a set of grammar rules, a lexicon, and nonverbal events detected in hand gestures, readings of tactile sensors attached to the robots, and buttons on the controllers in the users' hands. Prototype humanoid systems which immediately execute commands in the language are also presented, along with preliminary experiments of face-to-face interactions and teleoperations. Subjects unfamiliar with the language were able to command humanoids and complete their tasks with brief documents at hand, given a short demonstration beforehand. The command understanding system operating on PCs responded to multimodal commands without significant delay. This work was presented in part at the 13th International Symposium on Artificial Life and Robotics, Oita, Japan, January 31–February 2, 2008.
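As an illustration only (not the authors' implementation), the sketch below shows one way such a multimodal command could be represented and resolved: a spoken phrase matched against a small lexicon supplies the action type, and a nonverbal event (gesture, tactile reading, or button press) supplies a missing action parameter. The toy lexicon, class names, and parameter names are all assumptions.

```python
from dataclasses import dataclass, field
from typing import Optional

# Toy lexicon: spoken words mapped to action types (illustrative only).
LEXICON = {"move": "MOVE", "lift": "LIFT", "turn": "TURN"}

@dataclass
class NonverbalEvent:
    """A detected gesture, tactile-sensor reading, or button press."""
    channel: str          # "gesture" | "tactile" | "button"
    parameter: str        # e.g. "direction"
    value: object         # e.g. "left", or 30 (degrees)

@dataclass
class Command:
    action: str
    parameters: dict = field(default_factory=dict)

def interpret(utterance: str, events: list[NonverbalEvent]) -> Optional[Command]:
    """Bind the spoken action type to parameters supplied by nonverbal events."""
    action = next((LEXICON[w] for w in utterance.lower().split() if w in LEXICON), None)
    if action is None:
        return None
    cmd = Command(action)
    for ev in events:
        cmd.parameters[ev.parameter] = ev.value   # later events overwrite earlier ones
    return cmd

# Example: "move it" plus a pointing gesture that supplies the direction.
print(interpret("move it", [NonverbalEvent("gesture", "direction", "left")]))
```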

3.
Advanced Robotics, 2013, 27(15): 1725–1741
In this paper, we present a wearable interaction system to enhance interaction between a human user and a humanoid robot. The wearable interaction system assists the user and enhances interaction with the robot by intuitively imitating the user motion while expressing multimodal commands to the robot and displaying multimodal sensory feedback. AMIO, the biped humanoid robot of the AIM Laboratory, was used in experiments to confirm the performance and effectiveness of the proposed system, including the overall performance of motion tracking. Through an experimental application of this system, we successfully demonstrated human and humanoid robot interactions.

4.
Success rates in a multimodal command language for home robot users
This article considers the success rates in a multimodal command language for home robot users. In the command language, the user specifies action types and action parameter values to direct robots in multiple modes such as speech, touch, and gesture. The success rates of commands in the language can be estimated by user evaluations in several ways. This article presents some user evaluation methods, as well as results from recent studies on command success rates. The results show that the language enables users without much training to command home robots at success rates as high as 88%–100%. It is also shown that multimodal commands combining speech and button-press actions included fewer words and were significantly more successful than single-modal spoken commands.

5.
Enabling a humanoid robot to drive a car requires the development of a set of basic primitive actions. These include walking to the vehicle, manually controlling its commands (e.g., ignition, gas pedal, and steering) and moving with the whole body to ingress/egress the car. We present a sensor-based reactive framework for realizing the central part of the complete task, consisting of driving the car along unknown roads. The proposed framework provides three driving strategies by which a human supervisor can teleoperate the car or give the robot full or partial control of the car. A visual servoing scheme uses features of the road image to provide the reference angle for the steering wheel to drive the car at the center of the road. Simultaneously, a Kalman filter merges optical flow and accelerometer measurements to estimate the car's linear velocity and correspondingly compute the gas pedal command for driving at a desired speed. The steering wheel and gas pedal references are sent to the robot controller to achieve the driving task with the humanoid. We present results from a driving experiment with a real car and the humanoid robot HRP-2Kai. Part of the framework has been used to perform the driving task at the DARPA Robotics Challenge.
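A minimal sketch of the velocity-estimation idea described above, under assumptions of my own (a 1-D constant-velocity state, optical flow already converted to a velocity measurement, a simple proportional gas-pedal law, and illustrative noise values); it is not the code used on HRP-2Kai.

```python
import numpy as np

class VelocityKF:
    """1-D Kalman filter: state = car linear velocity (m/s).
    Prediction integrates the accelerometer; correction uses an optical-flow
    velocity measurement. Noise parameters are illustrative assumptions."""
    def __init__(self, q=0.05, r=0.5):
        self.v = 0.0      # velocity estimate
        self.p = 1.0      # estimate variance
        self.q = q        # process noise (accelerometer integration)
        self.r = r        # optical-flow measurement noise

    def predict(self, accel, dt):
        self.v += accel * dt
        self.p += self.q * dt

    def update(self, v_flow):
        k = self.p / (self.p + self.r)        # Kalman gain
        self.v += k * (v_flow - self.v)
        self.p *= (1.0 - k)
        return self.v

def gas_pedal(v_est, v_des, kp=0.1):
    """Proportional law mapping the speed error to a normalized pedal command."""
    return float(np.clip(kp * (v_des - v_est), 0.0, 1.0))

kf = VelocityKF()
kf.predict(accel=0.3, dt=0.1)      # accelerometer reading over one time step
v = kf.update(v_flow=1.2)          # optical-flow velocity measurement
print(v, gas_pedal(v, v_des=2.0))
```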

6.
The design of humanoid robots' emotional behaviors has attracted many scholars' attention. However, users' emotional responses to humanoid robots' emotional behaviors, which differ from robots' traditional behaviors, remain poorly understood. This study investigates the effect of a humanoid robot's emotional behaviors on users' emotional responses using subjective reporting, pupillometry, and electroencephalography. Five categories of the humanoid robot's emotional behaviors, expressing joy, fear, neutrality, sadness, or anger, were designed, selected, and presented to users. Results show that users have a significant positive emotional response to the humanoid robot's joy behavior and a significant negative emotional response to its sadness behavior, indicated by the metrics of reported valence and arousal, pupil diameter, frontal midline relative theta power, and frontal alpha asymmetry score. The results suggest that a humanoid robot's emotional behaviors can evoke significant emotional responses in users, and that this evocation may be related to users' recognition of those behaviors. In addition, the study provides a multimodal physiological method for evaluating users' emotional responses to a humanoid robot's emotional behaviors.
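For readers unfamiliar with the EEG metric mentioned, the snippet below computes a frontal alpha asymmetry (FAA) score in the usual way, ln(right-frontal alpha power) minus ln(left-frontal alpha power); the sampling rate, band limits, and synthetic signals are assumptions, not data from the study.

```python
import numpy as np
from scipy.signal import welch

def band_power(signal, fs, lo, hi):
    """Average spectral power of `signal` in the [lo, hi] Hz band (Welch PSD)."""
    freqs, psd = welch(signal, fs=fs, nperseg=fs * 2)
    mask = (freqs >= lo) & (freqs <= hi)
    return psd[mask].mean()

def frontal_alpha_asymmetry(left_frontal, right_frontal, fs=250):
    """FAA = ln(alpha power at a right-frontal site) - ln(alpha power at a left-frontal site).
    Positive scores are commonly read as relatively greater left-hemisphere activity."""
    alpha = (8.0, 13.0)                      # assumed alpha band
    return np.log(band_power(right_frontal, fs, *alpha)) - \
           np.log(band_power(left_frontal, fs, *alpha))

# Synthetic example with two 10-second frontal channels.
t = np.arange(0, 10, 1 / 250)
left = np.sin(2 * np.pi * 10 * t) + 0.5 * np.random.randn(t.size)
right = 1.5 * np.sin(2 * np.pi * 10 * t) + 0.5 * np.random.randn(t.size)
print(frontal_alpha_asymmetry(left, right))
```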

7.
Advanced Robotics, 2013, 27(1-2): 207–232
In this paper, we provide the first demonstration that a humanoid robot can learn to walk directly by imitating a human gait obtained from motion capture (mocap) data without any prior information of its dynamics model. Programming a humanoid robot to perform an action (such as walking) that takes into account the robot's complex dynamics is a challenging problem. Traditional approaches typically require highly accurate prior knowledge of the robot's dynamics and environment in order to devise complex (and often brittle) control algorithms for generating a stable dynamic motion. Training using human mocap is an intuitive and flexible approach to programming a robot, but direct usage of mocap data usually results in dynamically unstable motion. Furthermore, optimization using high-dimensional mocap data in the humanoid full-body joint space is typically intractable. We propose a new approach to tractable imitation-based learning in humanoids without a robot's dynamic model. We represent kinematic information from human mocap in a low-dimensional subspace and map motor commands in this low-dimensional space to sensory feedback to learn a predictive dynamic model. This model is used within an optimization framework to estimate optimal motor commands that satisfy the initial kinematic constraints as best as possible while generating dynamically stable motion. We demonstrate the viability of our approach by providing examples of dynamically stable walking learned from mocap data using both a simulator and a real humanoid robot.
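The low-dimensional representation step can be illustrated with ordinary PCA over mocap joint-angle frames; this is only a sketch of the general idea (the paper's own dimensionality-reduction and dynamics-learning details are not reproduced here), and the array shapes and subspace size are assumptions.

```python
import numpy as np

def fit_pca_subspace(joint_angles, k=5):
    """joint_angles: (T, D) mocap frames, D = number of joints.
    Returns the mean pose and the top-k principal directions."""
    mean = joint_angles.mean(axis=0)
    centered = joint_angles - mean
    # SVD of the centered data gives the principal directions in Vt.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return mean, vt[:k]            # basis: (k, D)

def project(pose, mean, basis):
    """Full-body pose -> low-dimensional coordinates."""
    return basis @ (pose - mean)

def reconstruct(coords, mean, basis):
    """Low-dimensional coordinates -> approximate full-body pose."""
    return mean + basis.T @ coords

# Synthetic demo: 200 frames of a 30-joint "gait" (placeholder data).
rng = np.random.default_rng(0)
frames = rng.normal(size=(200, 30))
mean, basis = fit_pca_subspace(frames, k=5)
z = project(frames[0], mean, basis)
print(z.shape, np.linalg.norm(frames[0] - reconstruct(z, mean, basis)))
```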

8.
For a humanoid robot to safely walk in unknown environments, various sensors are used to identify the surface condition and recognize any obstacles. The humanoid robot is not fixed to the surface, and the base and orientation of its kinematic chain change while it is walking. Therefore, if the actual foot contact differs from the estimate because of an unknown surface condition, the kinematics results are incorrect, and the robot may not be able to perform motion commands based on that incorrect surface condition. Some robots have built-in range sensors, but it is difficult to model the surface accurately from the sensor readings, because the movement of the robot must be taken into account and the robot localization must be error-free for the readings to be interpreted correctly. In this paper, three infrared range sensors are used to perceive the floor state, and covariance analysis is incorporated to account for the uncertainties. An accelerometer and a gyro sensor are also used to detect the moment a foot hits the surface. This information provides corrections to the motion planner and robot kinematics when the environment is not modeled correctly.
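One way to read the covariance-weighted fusion idea is as an inverse-variance average of the three infrared range readings once they are expressed in a common frame; the sketch below is a deliberately simplified assumption (a single scalar floor height and fixed per-sensor variances), not the paper's formulation.

```python
import numpy as np

def fuse_floor_height(readings, variances):
    """Inverse-variance (minimum-variance) fusion of scalar floor-height estimates.

    readings:  per-sensor floor height expressed in the robot frame (m)
    variances: per-sensor uncertainty after propagating range noise and
               kinematic/orientation uncertainty (assumed given here)
    Returns the fused estimate and its variance."""
    readings = np.asarray(readings, dtype=float)
    weights = 1.0 / np.asarray(variances, dtype=float)
    fused = np.sum(weights * readings) / np.sum(weights)
    fused_var = 1.0 / np.sum(weights)
    return fused, fused_var

# Three IR sensors, the third one noisier (e.g., grazing incidence on the floor).
h, var = fuse_floor_height([0.021, 0.018, 0.035], [1e-4, 1e-4, 9e-4])
print(f"floor height ~ {h:.3f} m, sigma ~ {var ** 0.5:.3f} m")
```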

9.
10.
The ageing population phenomenon is pushing the design of innovative solutions to provide assistance to the elderly. In this context a socially assistive robot can act as a proactive interface in a smart-home environment, providing multimodal communication channels and generating positive feelings in users. The present paper reports results of a short-term and a long-term evaluation of a small socially assistive humanoid robot in a smart-home environment. Eight elderly people tested an integrated smart-home robot system in five real-world scenarios. Six of the participants experienced the system in two sessions over a two-week period; the other two participants had a prolonged experience of eight sessions over a three-month period. Results showed that the small humanoid robot was trusted by the participants. A cross-cultural comparison showed that the results were not due to the cultural background of the participants. The long-term evaluation showed that the participants might engage in an emotional relationship with the robot, but that perceived enjoyment might decrease over time.

11.
CEOs of big companies may travel frequently to convey their philosophies and policies to employees working at worldwide branches. Video technology makes it possible to deliver their lectures anywhere and anytime in the world very easily. However, 2-dimensional video systems lack a sense of reality. If natural, realistic lectures could be given through humanoid robots, CEOs would not need to meet the employees in person, saving the time and money spent on travel. We propose a substitute robot for a remote person. The substitute robot is a humanoid robot that can reproduce the lecturer's facial expressions and body movements, and that can, in effect, send the lecturer anywhere in the world instantaneously with the feeling of being at a live performance. The development involves two major tasks: facial expression recognition/reproduction and body language reproduction. For the former task, we proposed a facial expression recognition method based on a neural network model. We recognized five emotions, namely surprise, anger, sadness, happiness, and no emotion, in real time. We also developed a facial robot to reproduce the recognized emotion on the robot's face. Through experiments, we showed that the robot could reproduce the speakers' emotions with its face. For the latter task, we proposed a degradation control method to reproduce the natural movement of the lecturer even when a robot rotary joint fails. As a fundamental stage of our research on this subsystem, we proposed a control method for the front-view movement model, i.e., a 2-dimensional model.
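As an illustrative stand-in for the neural-network recognizer described above (the actual feature set and architecture are not given here), a small five-emotion classifier over hand-chosen facial features might look like the following; scikit-learn, the feature layout, and the placeholder training data are my assumptions.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

EMOTIONS = ["surprise", "anger", "sadness", "happiness", "no emotion"]

# Assumed feature vector: e.g. normalized eyebrow height, eye opening,
# mouth width, mouth opening, mouth-corner elevation.
rng = np.random.default_rng(1)
X_train = rng.normal(size=(500, 5))                      # placeholder features
y_train = rng.integers(0, len(EMOTIONS), size=500)       # placeholder labels

clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=500, random_state=1)
clf.fit(X_train, y_train)

def recognize(features):
    """Return the predicted emotion label for one facial feature vector."""
    return EMOTIONS[int(clf.predict(np.asarray(features).reshape(1, -1))[0])]

print(recognize([0.1, 0.8, 0.2, 0.5, 0.9]))
```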

12.
13.

In this article we describe results from an experiment of user interaction with autonomous, human-like (humanoid) conversational agents. We hypothesize that for embodied conversational agents, nonverbal behaviors related to the process of conversation, what we call envelope feedback, is much more important than other feedback, such as emotional expression. We test this hypothesis by having subjects interact with three autonomous agents, all capable of full-duplex multimodal interaction: able to generate and recognize speech, intonation, facial displays, and gesture. Each agent, however, gave a different kind of feedback: (1) content-related only, (2) content + envelope feedback, and (3) content + emotional. Content-related feedback includes answering questions and executing commands; envelope feedback includes behaviors such as gaze, manual beat gesture, and head movements; emotional feedback includes smiles and looks of puzzlement. Subjects' evaluations of the system were collected with a questionnaire, and videotapes of their speech patterns and behaviors were scored according to how often the users repeated themselves, how often they hesitated, and how often they got frustrated. The results confirm our hypothesis that envelope feedback is more important in interaction than emotional feedback and that envelope feedback plays a crucial role in supporting the process of dialog. A secondary result from this study shows that users give our multimodal conversational humanoids very high ratings of lifelikeness and fluidity of interaction when the agents are capable of giving such feedback.

14.
This article presents the approaches taken to integrate a novel anthropomorphic robot hand into a humanoid robot. The requisites enabling such a robot hand to use everyday objects in an environment built for humans are presented. Starting from a mechatronic design that resembles the human hand in size and movability, a low-level control system is shown that provides reliable and stable controllers for single joint angles and torques, entire fingers, and several coordinated fingers. The high-level control system connecting the low-level control system with the rest of the humanoid robot is then presented. It provides grasp skills to the superior robot control system, coordinates movements of hand and arm, and determines grasp patterns depending on the object to grasp and the task to execute. Finally, some preliminary results of the system, which is currently being tested in simulation, are presented.
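A minimal sketch of what a single-joint position controller in such a low-level layer could look like, assuming a plain PD law with torque saturation; the gains, limits, and interface are hypothetical and do not describe the hand presented in the article.

```python
from dataclasses import dataclass

@dataclass
class JointPD:
    """PD position controller for one finger joint, with torque saturation."""
    kp: float = 2.0       # proportional gain (Nm/rad)  -- assumed value
    kd: float = 0.05      # derivative gain (Nm*s/rad)  -- assumed value
    tau_max: float = 0.4  # torque limit (Nm)           -- assumed value

    def torque(self, q_des, q, dq):
        tau = self.kp * (q_des - q) - self.kd * dq
        return max(-self.tau_max, min(self.tau_max, tau))

def finger_torques(controllers, q_des, q, dq):
    """Coordinate several joints of one finger by running their PD loops together."""
    return [c.torque(qd, qi, dqi) for c, qd, qi, dqi in zip(controllers, q_des, q, dq)]

# Three-joint finger tracking a grasp posture from rest.
finger = [JointPD() for _ in range(3)]
print(finger_torques(finger, q_des=[0.6, 0.9, 0.4], q=[0.0, 0.1, 0.0], dq=[0.0, 0.0, 0.0]))
```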

15.
In research on emotional robots, facial expressions with different personalities are an important basis for enhancing an emotional robot's realism. To give an emotional robot richer and more nuanced expressions, human personality traits were introduced into the robot: personality theory and emotion-model theory were analyzed to obtain the emotion intensities of robots with different personalities. Combined with the mapping between facial expressions in the Facial Action Coding System and the robot's control points, a method for realizing the basic expressions of different personalities on the emotional robot was derived. A facial model of the emotional robot was built in Solidworks, the face model of the SHFR-Ⅲ emotional robot was set up as an elastic body in the ANSYS engineering software, and finite-element simulation was used to explore expression simulation, yielding the control-region loads and simulation results for realizing the basic expressions of different personalities on SHFR-Ⅲ. Finally, based on the simulation results, expression experiments with different personalities were conducted on the SHFR-Ⅲ emotional robot. The experimental results show that finite-element expression simulation can guide the SHFR-Ⅲ emotional robot to realize human-like basic facial expressions with different personalities.
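The mapping from a basic expression to control-region loads can be pictured as a simple lookup scaled by a personality-dependent emotion intensity; the region names and load values below are purely hypothetical placeholders, not the loads obtained from the SHFR-Ⅲ simulations.

```python
# Hypothetical mapping: basic expression -> load (N) applied at each facial
# control region, scaled by a personality-dependent intensity in [0, 1].
BASE_LOADS = {                      # placeholder numbers, not simulation results
    "joy":     {"mouth_corner": 0.30, "upper_eyelid": 0.10, "brow": 0.05},
    "sadness": {"mouth_corner": 0.20, "upper_eyelid": 0.15, "brow": 0.25},
    "anger":   {"mouth_corner": 0.10, "upper_eyelid": 0.05, "brow": 0.35},
}

def expression_loads(expression: str, intensity: float) -> dict:
    """Scale the base loads of a basic expression by an emotion intensity."""
    return {region: load * intensity for region, load in BASE_LOADS[expression].items()}

# An extraverted personality might express joy more strongly than an introverted one.
print(expression_loads("joy", intensity=0.9))
print(expression_loads("joy", intensity=0.4))
```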

16.
Natural language commands are generated by intelligent human beings and therefore contain a great deal of information. If it is possible to learn from such commands and reuse that knowledge, the process becomes very efficient. In this paper, learning from such information-rich voice commands for controlling a robot is studied. First, the new concepts of a fuzzy coach-player system and a sub-coach are proposed for controlling robots with natural language commands. Then, the characteristics of the subjective human decision-making process are discussed, and a Probabilistic Neural Network (PNN) based learning method is proposed to learn from such commands and to reuse the acquired knowledge. Finally, the proposed concept is demonstrated and confirmed with experiments conducted using a PA-10 redundant manipulator.
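For reference, a Probabilistic Neural Network is essentially a Parzen-window (kernel density) classifier; the sketch below shows that structure on toy command-feature vectors, with the smoothing width, feature encoding, and class labels being assumptions of mine rather than details from the paper.

```python
import numpy as np

class PNN:
    """Probabilistic Neural Network (Parzen-window classifier).
    Each training sample becomes a Gaussian kernel; a class score is the
    average kernel response of its samples, and the largest score wins."""
    def __init__(self, sigma=0.3):
        self.sigma = sigma
        self.samples = {}            # class label -> (N_c, D) array

    def fit(self, X, y):
        X, y = np.asarray(X, float), np.asarray(y)
        self.samples = {c: X[y == c] for c in np.unique(y)}
        return self

    def predict(self, x):
        x = np.asarray(x, float)
        scores = {
            c: np.mean(np.exp(-np.sum((S - x) ** 2, axis=1) / (2 * self.sigma ** 2)))
            for c, S in self.samples.items()
        }
        return max(scores, key=scores.get)

# Toy example: 2-D "command feature" vectors for two robot actions.
X = [[0.1, 0.2], [0.2, 0.1], [0.9, 0.8], [0.8, 0.9]]
y = ["pick_up", "pick_up", "put_down", "put_down"]
print(PNN().fit(X, y).predict([0.85, 0.75]))
```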

17.
This study develops a face robot with a human-like appearance for making facial expressions similar to those of a specific subject. First, an active drive points (ADPs) model is proposed for establishing a robotic face with fewer active degrees of freedom for bipedal humanoid robots. Then, a robotic face design method is proposed with which the robot possesses a facial appearance and expressions similar to those of a human subject. A similarity evaluation method is presented to evaluate the similarity of facial expressions between a robot and a specific human subject. Finally, the proposed facial model and design methods are verified and implemented on a humanoid robot platform.

18.
This article proposes a multimodal language for communicating with life-supporting robots through a touch screen and a speech interface. The language is designed for untrained users who need support in their daily lives from cost-effective robots. In this language, users can combine spoken and pointing messages in an interactive manner in order to convey their intentions to the robots. Spoken messages include verb and noun phrases which describe intentions. Pointing messages are given when the user's finger touches a camera image, a picture of the robot body, or a button on a touch screen at hand; these messages convey a location in the environment, a direction, a body part of the robot, a cue, a reply to a query, or other information that helps the robot. This work presents the philosophy and structure of the language.

19.
An overview of the development of humanoid robots abroad
李允明 《机器人》2005,27(6):561-568
This paper introduces the characteristics of humanoid robot development abroad and analyzes in detail the key technologies and technical specifications of several humanoid robots from Japan, the United States, South Korea, and other countries. Based on these foreign prototype designs, the author discusses the choice of degrees of freedom for each part of a humanoid robot and analyzes several issues in the design of humanoid transmission and control systems. Views are also offered on what the development of humanoid robots abroad implies for China.

20.
Assistance is currently a pivotal research area in robotics, with huge societal potential. Since assistant robots interact directly with people, finding natural and easy-to-use user interfaces is of fundamental importance. This paper describes a flexible multimodal interface based on speech and gesture modalities to control our mobile robot named Jido. The vision system uses a stereo head mounted on a pan-tilt unit and a bank of collaborative particle filters devoted to the upper human body extremities to track and recognize pointing and symbolic gestures, performed with one hand or with both. This framework constitutes our first contribution: it is shown to handle natural artifacts properly (self-occlusion, hands leaving the camera's field of view, hand deformation) when 3D gestures are performed with either hand or both. A speech recognition and understanding system based on the Julius engine is also developed and embedded in order to process deictic and anaphoric utterances. The second contribution is a probabilistic, multi-hypothesis interpreter framework that fuses results from the speech and gesture components. This interpreter is shown to improve the classification rates of multimodal commands compared to using either modality alone. Finally, we report on successful live experiments in human-centered settings. Results are reported in the context of an interactive manipulation task, where users specify local motion commands to Jido and perform safe object exchanges.
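The fusion step can be illustrated by combining per-modality hypothesis scores under a conditional-independence assumption (fused score proportional to the product of modality scores); this is only a schematic of the general idea, not the interpreter used on Jido, and the candidate commands and scores are invented for the example.

```python
def fuse_hypotheses(speech_scores, gesture_scores):
    """Combine speech and gesture hypothesis scores for candidate commands.

    Each argument maps a candidate command to a normalized score in [0, 1].
    Assuming the modalities are conditionally independent, the fused score of
    a command is the product of its per-modality scores (missing entries get a
    small floor so one silent modality does not veto the other)."""
    floor = 1e-3
    candidates = set(speech_scores) | set(gesture_scores)
    fused = {
        c: speech_scores.get(c, floor) * gesture_scores.get(c, floor)
        for c in candidates
    }
    total = sum(fused.values())
    return {c: s / total for c, s in fused.items()}

# "Put it there" plus a pointing gesture toward the table resolves the ambiguity.
speech = {"place_on_table": 0.5, "place_on_shelf": 0.5}
gesture = {"place_on_table": 0.9, "place_on_shelf": 0.1}
print(max(fuse_hypotheses(speech, gesture).items(), key=lambda kv: kv[1]))
```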

