Similar Documents
 20 similar documents retrieved.
1.
《Advanced Robotics》2013,27(10):1073-1091
As a way of automatically programming robot behavior, a method for building a symbolic manipulation task model from a demonstration is proposed. The feature of this model is that it explicitly stores information about the essential parts of a task, i.e. the interaction between a hand and an environmental object, or between a grasped object and a target object. Thus, even in different environments, this method reproduces robot motion that is as similar as possible to the demonstrated human motion during the essential parts, while changing the motion during non-essential parts to adapt to the current environment. To automatically determine the essential parts, a method called attention point analysis is proposed; it searches for the nature of a task using multiple sensors and estimates the parameters that represent the task. A humanoid robot is used to verify the reproduced robot motion based on the generated task model.

2.
《Advanced Robotics》2013,27(10):1165-1181
Cognitive scientists and developmental psychologists have suggested that development in the perceptual, motor and memory functions of human infants, as well as adaptive evaluation by caregivers, facilitates infants' learning of cognitive tasks. This article presents a robotic approach to understanding how learning for joint attention can be helped by such functional development. A robot learns the visuomotor mapping needed to achieve joint attention based on evaluations from a caregiver. The caregiver adjusts the criterion for evaluating the robot's performance from easy to difficult as the performance improves. At the same time, the robot gradually develops its visual function by sharpening input images. Experiments reveal that the adaptive evaluation by the caregiver accelerates the robot's learning, and that the visual development in the robot improves the accuracy of joint attention due to its well-structured visuomotor mapping. These results constructively explain what roles synchronized functional development in infants and caregivers plays in task learning by infants.
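The abstract above describes two coupled schedules: the caregiver's evaluation criterion tightening as performance improves, and the robot's vision sharpening over development. Below is a minimal sketch of what such schedules could look like, assuming SciPy is available and using hypothetical function names and constants (a gaze-error tolerance in degrees, Gaussian blur for immature vision); it is an illustration, not the paper's implementation.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def caregiver_threshold(success_rate, easy=20.0, hard=2.0):
    """Hypothetical evaluation criterion (gaze error tolerance in degrees):
    generous while performance is poor, strict as it improves."""
    return easy + (hard - easy) * np.clip(success_rate, 0.0, 1.0)

def develop_vision(image, stage, max_blur=4.0):
    """Sharpen the robot's input images over developmental stages
    by reducing Gaussian blur (stage in [0, 1])."""
    sigma = max_blur * (1.0 - np.clip(stage, 0.0, 1.0))
    return gaussian_filter(image, sigma=sigma) if sigma > 0 else image

# Example: a coarse image early on, a strict criterion late in learning.
img = np.random.rand(64, 64)
early_view = develop_vision(img, stage=0.1)   # heavily blurred input
late_threshold = caregiver_threshold(0.9)     # ~3.8 degrees tolerance
```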

3.
《Advanced Robotics》2013,27(9-10):1183-1208
Imitating the learning process of a human playing ping-pong is extremely complex, and this work proposes a suitable learning strategy. First, an inverse kinematics solution is presented to obtain smooth joint angles for a redundant anthropomorphic robot arm in order to imitate the paddle motion of a human ping-pong player. As humans instinctively determine which posture is suitable for striking a ball, this work proposes two novel processes: (i) estimating ball states and predicting the trajectory using a fuzzy adaptive resonance theory network, and (ii) self-learning the behavior for each strike using a self-organizing map-based reinforcement learning network that imitates human learning behavior. Experimental results demonstrate that the proposed algorithms work effectively when applied to an actual humanoid robot playing ping-pong.
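The strike-learning step pairs a self-organizing map with reinforcement learning. The sketch below illustrates that general combination: a SOM codebook discretizes continuous ball states so a tabular Q-update can be applied. All names and constants are hypothetical, and the paper's fuzzy ART trajectory predictor is not reproduced here.

```python
import numpy as np

class SOMQLearner:
    """Minimal sketch: a self-organizing map discretizes continuous ball
    states into nodes, and a Q-table over those nodes selects strikes."""
    def __init__(self, n_nodes, state_dim, n_actions, lr=0.3, gamma=0.9):
        rng = np.random.default_rng(0)
        self.nodes = rng.normal(size=(n_nodes, state_dim))  # SOM codebook
        self.q = np.zeros((n_nodes, n_actions))
        self.lr, self.gamma = lr, gamma

    def bmu(self, state):
        """Best-matching unit (nearest SOM node) for a continuous state."""
        return int(np.argmin(np.linalg.norm(self.nodes - state, axis=1)))

    def adapt_som(self, state, eta=0.1):
        i = self.bmu(state)
        self.nodes[i] += eta * (state - self.nodes[i])  # pull winner toward input
        return i

    def update(self, s, a, reward, s_next):
        i, j = self.adapt_som(s), self.bmu(s_next)
        td = reward + self.gamma * self.q[j].max() - self.q[i, a]
        self.q[i, a] += self.lr * td

# Hypothetical usage: state = [ball x, y, z, vx, vy, vz], reward for a good return.
learner = SOMQLearner(n_nodes=50, state_dim=6, n_actions=5)
learner.update(np.zeros(6), a=2, reward=1.0, s_next=np.ones(6))
```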

4.
Human beings subconsciously adapt their behaviors to a communication partner in order to make interactions run smoothly. In human–robot interaction, not only the human but also the robot is expected to adapt to its partner. Thus, to facilitate human–robot interaction, a robot should be able to read subconscious comfort and discomfort signals from humans and adjust its behavior accordingly, just as a human would. However, most previous research expected the human to consciously give feedback, which might interfere with the aim of the interaction. We propose an adaptation mechanism based on reinforcement learning that reads subconscious body signals from a human partner and uses this information to adjust interaction distances, gaze meeting, and motion speed and timing in human–robot interaction. The mechanism uses gazing at the robot's face and human movement distance as subconscious body signals that indicate a human's comfort and discomfort. A pilot study with a humanoid robot that has ten interaction behaviors was conducted. The results from 12 subjects suggest that the proposed mechanism enables autonomous adaptation to individual preferences. A detailed discussion and conclusions are also presented.
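As a rough illustration of the described adaptation mechanism, the following sketch derives a reward from the two subconscious signals mentioned (gaze at the robot's face, human movement distance) and runs a bandit-style reinforcement-learning update over a small grid of interaction settings. The reward weights, setting grids, and update scheme are assumptions, not the authors' design.

```python
import numpy as np

# Hypothetical reward: gazing at the robot's face signals comfort,
# stepping away signals discomfort (weights are illustrative).
def comfort_reward(gaze_at_face_ratio, retreat_distance_m):
    return gaze_at_face_ratio - 2.0 * retreat_distance_m

# Tabular values over a few discrete behavior settings
# (interaction distance x motion speed), as a minimal adaptation loop.
distances = [0.5, 0.8, 1.2]          # metres
speeds = [0.3, 0.6, 1.0]             # normalized speed
Q = np.zeros((len(distances), len(speeds)))
alpha, epsilon = 0.2, 0.1
rng = np.random.default_rng(0)

def choose_setting():
    if rng.random() < epsilon:                       # explore
        return rng.integers(len(distances)), rng.integers(len(speeds))
    return np.unravel_index(np.argmax(Q), Q.shape)   # exploit

def update(setting, reward):
    d, s = setting
    Q[d, s] += alpha * (reward - Q[d, s])   # stateless (bandit-style) update

# One interaction step with made-up observations.
setting = choose_setting()
update(setting, comfort_reward(gaze_at_face_ratio=0.7, retreat_distance_m=0.1))
```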

5.
Controlling someone's attention can be defined as shifting his/her attention from the existing direction to another. To shift someone's attention, gaining attention and meeting gaze are the two most important prerequisites. If a robot would like to communicate with a particular person, it should turn its gaze to him/her to make eye contact. However, making eye contact is not an easy task for the robot, because such a turning action alone may not be effective in all situations, especially when the robot and the human are not facing each other or the human is intensely attending to his/her task. Therefore, the robot should perform some actions to attract the target person and make him/her respond so that their gazes meet. In this paper, we present a robot that can attract a target person's attention by moving its head, make eye contact by showing gaze awareness through blinking its eyes, and direct his/her attention by repeatedly turning its eyes and head from the person to the target object. Experiments with 20 human participants confirm the effectiveness of these robot actions for controlling human attention.

6.
Safety, legibility and efficiency are essential for autonomous mobile robots that interact with humans. A key factor in this respect is bi-directional communication of navigation intent, which we focus on in this article with a particular view on industrial logistic applications. In the robot-to-human direction, we study how a robot can communicate its navigation intent using Spatial Augmented Reality (SAR) such that humans can intuitively understand the robot's intention and feel safe in the vicinity of robots. We conducted experiments with an autonomous forklift that projects various patterns on the shared floor space to convey its navigation intentions. We analyzed trajectories and eye-gaze patterns of humans while they interacted with the autonomous forklift, and carried out stimulated recall interviews (SRI) in order to identify desirable features for the projection of robot intentions. In the human-to-robot direction, we argue that robots in human co-habited environments need human-aware task and motion planning to support safety and efficiency, ideally responding to people's motion intentions as soon as they can be inferred from human cues. Eye gaze can convey information about intentions beyond what can be inferred from the trajectory and head pose of a person. Hence, we propose eye-tracking glasses as safety equipment in industrial environments shared by humans and robots. In this work, we investigate the possibility of human-to-robot implicit intention transference solely from eye-gaze data and evaluate how the observed eye-gaze patterns of the participants relate to their navigation decisions. We again analyzed trajectories and eye-gaze patterns of humans interacting with the autonomous forklift for clues that could reveal directional intent. Our analysis shows that people primarily gazed at the side of the robot they ultimately decided to pass on. We discuss the implications of these results and relate them to a control approach that uses human gaze for early obstacle avoidance.

7.
This paper addresses a biped balancing task in which an unknown external force is exerted, using the so-called 'ankle strategy' model. When an external force is periodic, a human adaptively maintains balance, then learns how much force should be produced at the ankle joint from the force's repeatability, and finally memorizes it as a motion pattern. To acquire motion patterns with balancing, we propose a control and learning method: as the control method, we adopt ground reaction force feedback to cope with an uncertain external force, while, as the learning method, we introduce a motion pattern generator that memorizes the torque pattern of the ankle joint using a Fourier series expansion. In this learning process, estimating the period of the external force is crucial; this estimation is achieved based on the local autocorrelation of joint trajectories. Computer simulations and robot experiments show effective control and learning results with respect to unknown periodic external forces.
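Two steps of the learning method lend themselves to a short worked example: estimating the period of the external force from local autocorrelation, and memorizing the ankle torque pattern as a truncated Fourier series. The sketch below shows one plausible realization on synthetic data; the peak-search heuristic and the number of harmonics are assumptions, not the authors' choices.

```python
import numpy as np

def estimate_period(signal, dt):
    """Estimate the dominant period from autocorrelation: take the
    strongest peak after the first zero crossing (so lag 0 does not win)."""
    x = signal - signal.mean()
    ac = np.correlate(x, x, mode="full")[len(x) - 1:]
    start = np.argmax(ac < 0)            # index of first negative value
    lag = start + np.argmax(ac[start:])
    return lag * dt

def fit_fourier(t, torque, period, n_harmonics=5):
    """Least-squares fit of a truncated Fourier series to one torque pattern."""
    w = 2 * np.pi / period
    cols = [np.ones_like(t)]
    for k in range(1, n_harmonics + 1):
        cols += [np.cos(k * w * t), np.sin(k * w * t)]
    A = np.stack(cols, axis=1)
    coeffs, *_ = np.linalg.lstsq(A, torque, rcond=None)
    return coeffs, A @ coeffs            # coefficients and reconstructed pattern

# Example: recover a 2 s periodic ankle-torque pattern from noisy samples.
dt = 0.01
t = np.arange(0, 10, dt)
torque = 3 * np.sin(2 * np.pi * t / 2.0) + 0.5 * np.cos(4 * np.pi * t / 2.0)
torque += 0.1 * np.random.default_rng(0).normal(size=t.size)
T = estimate_period(torque, dt)          # close to 2.0 s
coeffs, reconstruction = fit_fourier(t, torque, T)
```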

8.
《Advanced Robotics》2013,27(1-2):207-232
In this paper, we provide the first demonstration that a humanoid robot can learn to walk directly by imitating a human gait obtained from motion capture (mocap) data without any prior information about its dynamics model. Programming a humanoid robot to perform an action (such as walking) that takes into account the robot's complex dynamics is a challenging problem. Traditional approaches typically require highly accurate prior knowledge of the robot's dynamics and environment in order to devise complex (and often brittle) control algorithms for generating a stable dynamic motion. Training using human mocap is an intuitive and flexible approach to programming a robot, but direct use of mocap data usually results in dynamically unstable motion. Furthermore, optimization using high-dimensional mocap data in the humanoid full-body joint space is typically intractable. We propose a new approach to tractable imitation-based learning in humanoids without a robot's dynamic model. We represent kinematic information from human mocap in a low-dimensional subspace and map motor commands in this low-dimensional space to sensory feedback to learn a predictive dynamic model. This model is used within an optimization framework to estimate optimal motor commands that satisfy the initial kinematic constraints as well as possible while generating dynamically stable motion. We demonstrate the viability of our approach by providing examples of dynamically stable walking learned from mocap data using both a simulator and a real humanoid robot.
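The pipeline described above (reduce high-dimensional mocap to a low-dimensional subspace, then learn a predictive model from motor commands to sensory feedback) can be sketched with PCA and a least-squares predictor. This is only an illustrative stand-in for the paper's method, using synthetic data and hypothetical dimensions.

```python
import numpy as np

def pca_subspace(joint_data, n_dims=3):
    """Project high-dimensional mocap joint angles onto a low-dimensional
    subspace via SVD (a common stand-in for the reduction step)."""
    mean = joint_data.mean(axis=0)
    _, _, vt = np.linalg.svd(joint_data - mean, full_matrices=False)
    basis = vt[:n_dims]                     # (n_dims, n_joints)
    latent = (joint_data - mean) @ basis.T  # low-dimensional trajectory
    return mean, basis, latent

def fit_predictive_model(motor_cmds, sensor_feedback):
    """Linear predictive model: next sensory feedback from the current
    low-dimensional motor command (least-squares fit)."""
    X = np.hstack([motor_cmds, np.ones((len(motor_cmds), 1))])
    W, *_ = np.linalg.lstsq(X, sensor_feedback, rcond=None)
    return lambda u: np.append(u, 1.0) @ W

# Example with synthetic data: 30 joints reduced to 3 latent dimensions.
rng = np.random.default_rng(0)
mocap = rng.normal(size=(500, 30))
mean, basis, latent = pca_subspace(mocap, n_dims=3)
predict = fit_predictive_model(latent[:-1], latent[1:])
next_state = predict(latent[0])
```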

9.
《Advanced Robotics》2013,27(3-4):267-291
This paper is concerned with the reactive robot system (RRS), which has been introduced as a novel way of approaching human–robot interaction by exploiting the capabilities of haptic interfaces to transfer skills (from the robot to unskilled persons). The RRS was implemented based on two levels of interaction. The first level, which implements the first two stages of the learning process, represents the conventional control way of interchanging a set of forces in response to a static reading of the contact position under some pre-defined dynamic rules (passive interaction). The second level, which implements the last stage of the learning process, represents an enhanced way of interaction between haptic interfaces and humans. This level adds to the robotic system a degree of intelligence that enables the robot to dynamically adapt its behavior depending on the user's wishes (active interaction). In particular, this paper describes the implementation of the second level of the RRS in detail. A set of experiments was performed, applied to Japanese handwriting, to verify whether the second level of the RRS can interact with humans during the autonomous stage of the learning process. The results demonstrated that our system can still provide assistance to users during the autonomous stage while mostly respecting their intentions and without significantly affecting their performance.

10.
《Advanced Robotics》2013,27(6):651-670
In this paper, we experimentally investigated the open-ended interaction generated by mutual adaptation between humans and a robot. Its essential characteristic, incremental learning, is examined using the dynamical systems approach. Our research concentrated on the navigation system of a specially developed humanoid robot called Robovie and seven human subjects whose eyes were covered, making them dependent on the robot for directions. We used a conventional feed-forward neural network (FFNN) without recursive connections and a recurrent neural network (RNN) for the robot control. Although the performance obtained with both the RNN and the FFNN improved in the early stages of learning, as the subjects changed their operation through their own learning, performance gradually became unstable and failed. Next, we used a 'consolidation-learning algorithm' as a model of the hippocampus in the brain. In this method, the RNN was trained on both new data and the rehearsal outputs of the RNN, so as not to damage the contents of the current memory. The proposed method enabled the robot to improve performance even when learning continued for a long time (open-ended). The dynamical systems analysis of the RNNs supports these differences and also showed that the collaboration scheme developed dynamically along with succeeding phase transitions.
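The consolidation-learning idea, mixing new training data with rehearsal targets generated by the network itself so that existing memory is preserved, can be illustrated with a deliberately simple model. The sketch below uses a linear map in place of the RNN; it shows only the rehearsal mechanics, not the original algorithm.

```python
import numpy as np

class ConsolidationLearner:
    """Minimal sketch of rehearsal-based consolidation: a linear map stands
    in for the RNN. New samples are mixed with 'rehearsal' pairs generated
    from the model's own current outputs, so old memory is not overwritten."""
    def __init__(self, in_dim, out_dim, lr=0.05):
        self.W = np.zeros((in_dim, out_dim))
        self.lr = lr

    def predict(self, X):
        return X @ self.W

    def train(self, X_new, Y_new, X_rehearsal, epochs=50):
        # Targets for rehearsal inputs come from the current memory itself.
        Y_rehearsal = self.predict(X_rehearsal)
        X = np.vstack([X_new, X_rehearsal])
        Y = np.vstack([Y_new, Y_rehearsal])
        for _ in range(epochs):
            grad = X.T @ (self.predict(X) - Y) / len(X)
            self.W -= self.lr * grad

# Hypothetical usage with random data standing in for sensor/command pairs.
rng = np.random.default_rng(0)
learner = ConsolidationLearner(in_dim=4, out_dim=2)
learner.train(rng.normal(size=(20, 4)), rng.normal(size=(20, 2)),
              X_rehearsal=rng.normal(size=(20, 4)))
```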

11.
We propose an imitation learning methodology that allows robots to seamlessly retrieve and pass objects to and from human users. Instead of hand-coding interaction parameters, we extract relevant information such as joint correlations and spatial relationships from a single task demonstration of two humans. At the center of our approach is an interaction model that enables a robot to generalize an observed demonstration spatially and temporally to new situations. To this end, we propose a data-driven method for generating interaction meshes that link both interaction partners to the manipulated object. The feasibility of the approach is evaluated in a user study, which shows that human–human task demonstration can lead to more natural and intuitive interactions with the robot.

12.
Shared attention is a form of communication that is very important among human beings. It is sometimes regarded as a more complex form of communication, constituted by a sequence of four steps: mutual gaze, gaze following, imperative pointing and declarative pointing. Several approaches have been proposed in the Human–Robot Interaction area to solve part of the shared attention process; that is, most of the proposed works try to solve the first two steps. Models based on temporal difference learning, neural networks, probabilistic methods and reinforcement learning have been used in several works. In this article, we present a robotic architecture that provides a robot or agent with the capacity to learn mutual gaze, gaze following and declarative pointing using a robotic head interacting with a caregiver. Three learning methods have been incorporated into this architecture, and their performance has been compared to find the most adequate one for use in real experiments. The learning capabilities of this architecture have been analyzed by observing the robot interacting with a human in a controlled environment. The experimental results show that the robotic head is able to produce appropriate behavior and to learn from social interaction.

13.
Social cues facilitate engagement between interaction participants, whether they be two (or more) humans or a human and an artificial agent such as a robot. Previous work specific to human–agent/robot interaction has demonstrated the efficacy of implemented social behaviours, such as eye gaze or facial gestures, for creating the illusion of engagement and positively impacting interaction with a human. We describe the implementation of THAMBS, The Thinking Head Attention Model and Behavioural System, which is used to model attention and control how a virtual agent reacts to external audio and visual stimuli within the context of an interaction with a human user. We evaluate the efficacy of THAMBS for a virtual agent mounted on a robotic platform in a controlled experimental setting, and collect both task- and behavioural-performance variables, along with self-reported ratings of engagement. Our results show that human subjects engaged noticeably more often, and in more interesting ways, with the robotic agent when THAMBS was activated, indicating that even a rudimentary display of attention by the robot elicits significantly increased attention by the human. Back-channelling had less of an effect on user behaviour. THAMBS and back-channelling did not interact, and neither had an effect on self-report ratings. Our results concerning THAMBS hold implications for the design of successful human–robot interactive behaviours.

14.
《Advanced Robotics》2013,27(10):1183-1199
Robots have to deal with an enormous amount of sensory stimuli. One solution for making sense of them is to enable a robot system to actively search for cues that help structure the information. Studies with infants reveal that parents support the learning process by modifying their interaction style, depending on their child's developmental age. In our study, in which parents demonstrated everyday actions to their preverbal children (8–11 months old), our aim was to identify objective parameters of multimodal action modification. Our results reveal two action parameters that are modified in adult–child interaction: roundness and pace. Furthermore, we found that language helps children structure action sequences through synchrony and emphasis. These insights are discussed with respect to the built-in attention architecture of a socially interactive robot, which enables it to understand demonstrated actions. Our algorithmic approach to automatically detecting the task structure in child-directed input demonstrates the potential impact of insights from developmental learning on robotics. The presented findings pave the way to automatically detecting when to imitate in a demonstration task.

15.
Human-Robot Interaction (HRI) is a growing field of research that targets the development of robots which are easy to operate, more engaging and more entertaining. Natural, human-like behavior is considered by many researchers as an important target of HRI. Research in human-human communication has revealed that gaze control is one of the major interactive behaviors used by humans in close encounters. Human-like gaze control is therefore one of the important behaviors that a robot should have in order to provide natural interactions with human partners. To develop human-like natural gaze control that integrates easily with other behaviors of the robot, a flexible robotic architecture is needed. Most available robotic architectures were developed with autonomous robots in mind. Although robots developed for HRI are usually autonomous, their autonomy is combined with interactivity, which adds more challenges to the design of the robotic architectures supporting them. This paper reports the development and evaluation of two gaze controllers using a new cross-platform robotic architecture for HRI applications called EICA (the Embodied Interactive Control Architecture), which was designed to meet those challenges, emphasizing how low-level attention focusing and action integration are implemented. Evaluation of the gaze controllers revealed human-like behavior in terms of mutual attention, gaze toward the partner, and mutual gaze. The paper also reports a novel Floating Point Genetic Algorithm (FPGA) for learning the parameters of the various processes of the gaze controller.
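A floating-point (real-valued) genetic algorithm such as the one mentioned evolves parameter vectors directly rather than bit strings. The following sketch shows one common formulation with blend crossover and Gaussian mutation; the operators, fitness function and constants are assumptions, not the paper's FPGA.

```python
import numpy as np

def floating_point_ga(fitness, dim, pop_size=40, generations=100,
                      bounds=(0.0, 1.0), mut_sigma=0.05, seed=0):
    """Minimal real-valued GA sketch: blend crossover plus Gaussian mutation
    over floating-point parameter vectors (e.g. gaze-controller gains)."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    pop = rng.uniform(lo, hi, size=(pop_size, dim))
    for _ in range(generations):
        scores = np.array([fitness(p) for p in pop])
        order = np.argsort(scores)[::-1]           # higher fitness is better
        parents = pop[order[: pop_size // 2]]
        children = []
        while len(children) < pop_size - len(parents):
            a, b = parents[rng.integers(len(parents), size=2)]
            w = rng.uniform(0, 1, size=dim)
            child = w * a + (1 - w) * b             # blend crossover
            child += rng.normal(0, mut_sigma, size=dim)
            children.append(np.clip(child, lo, hi))
        pop = np.vstack([parents, children])        # elitist replacement
    scores = np.array([fitness(p) for p in pop])
    return pop[np.argmax(scores)]

# Example: tune three hypothetical gaze gains toward a target vector.
best = floating_point_ga(lambda p: -np.sum((p - 0.3) ** 2), dim=3)
```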

16.
17.
郭田 《微型电脑应用》2011,27(8):16-19,72
The ability of a mobile robot to perceive and track moving targets is an important capability for realizing robot-environment interaction. To address the target loss and single-mode tracking problems that often occur in complex dynamic environments when a mobile robot tracks a human target, a machine-learning-based human target recognition algorithm is proposed. The algorithm can handle target detection and localization in complex environments. An interacting multiple model tracking algorithm is also designed, which can better track targets that move in irregular patterns. Finally, the complete system is implemented on the Jiaolong (交龙) mobile robot platform, verifying the robustness and superiority of the human target detection and multi-mode tracking algorithms.
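Interacting multiple model (IMM) tracking maintains several motion models in parallel and weights them by how well each explains the measurements. The sketch below is a greatly simplified 1-D multiple-model tracker (two Kalman filters, likelihood-based re-weighting, and no IMM mixing step), intended only to illustrate the idea; all parameters are made up.

```python
import numpy as np

class KF1D:
    """Tiny Kalman filter with state [position, velocity] and a scalar
    position measurement; step() returns the measurement likelihood."""
    def __init__(self, F, q, r):
        self.F, self.Q, self.R = F, q * np.eye(len(F)), r
        self.x = np.zeros(len(F))
        self.P = np.eye(len(F))

    def step(self, z):
        # predict
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        # update with scalar position measurement z
        H = np.zeros(len(self.x)); H[0] = 1.0
        S = H @ self.P @ H + self.R
        K = self.P @ H / S
        innov = z - H @ self.x
        self.x = self.x + K * innov
        self.P = self.P - np.outer(K, H @ self.P)
        # Gaussian likelihood of the measurement under this model
        return np.exp(-0.5 * innov ** 2 / S) / np.sqrt(2 * np.pi * S)

dt = 0.1
stationary = KF1D(np.array([[1.0, 0.0], [0.0, 0.0]]), q=0.01, r=0.05)
walking = KF1D(np.array([[1.0, dt], [0.0, 1.0]]), q=0.01, r=0.05)
prob = np.array([0.5, 0.5])

def track(z):
    """Re-weight the two motion models by likelihood and fuse their estimates."""
    global prob
    like = np.array([stationary.step(z), walking.step(z)])
    prob = prob * like
    prob /= prob.sum()
    return prob[0] * stationary.x[0] + prob[1] * walking.x[0]

positions = [track(z) for z in [0.0, 0.05, 0.12, 0.21]]
```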

18.
The ability to follow the gaze of conspecifics is a critical component in the development of social behaviors, and many efforts have been directed at studying the earliest age at which it begins to develop in infants. Developmental and neurophysiological studies suggest that imitative learning takes place once gaze-following abilities are fully established and joint attention can support the shared behavior required by imitation. Accordingly, the acquisition of gaze following should be a precursor to most machine learning tasks, and imitation learning can be seen as the earliest modality for acquiring meaningful gaze shifts and for understanding the structural substrate of fixations. Indeed, if some early attentional process, based on a suitable combination of gaze shifts and fixations, could be learned by the robot, then several demonstration learning tasks would be dramatically simplified. In this paper, we describe a methodology for learning gaze shifts based on imitation of gaze following with a gaze machine, which we purposefully introduced to make the robot's gaze imitation conspicuous. The machine allows the robot to share and imitate the gaze shifts and fixations of a caregiver through a mutual vergence. This process is then generalized by learning both the salient scene features toward which the gaze is directed and the way saccadic programming is attained. Salient features are modeled by a family of Gaussian mixtures. These, together with learned transitions, are generalized via hidden Markov models to account for human-like gaze shifts, allowing salient locations to be discriminated.
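The last two sentences describe modeling salient fixation locations with Gaussian mixtures and generalizing gaze shifts via hidden Markov models. Below is a minimal sketch of that kind of pipeline: a mixture is fitted to synthetic fixations and a transition matrix between its components is estimated from the fixation sequence. It assumes scikit-learn is available and stands in for, rather than reproduces, the paper's models.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def learn_gaze_structure(fixations, n_salient=4):
    """A Gaussian mixture models salient fixation locations; transition
    counts between mixture components give a first-order (HMM-like)
    model of gaze shifts between salient regions."""
    gmm = GaussianMixture(n_components=n_salient, random_state=0).fit(fixations)
    states = gmm.predict(fixations)        # which salient region each fixation hits
    T = np.ones((n_salient, n_salient))    # Laplace-smoothed transition counts
    for a, b in zip(states[:-1], states[1:]):
        T[a, b] += 1
    T /= T.sum(axis=1, keepdims=True)      # row-stochastic transition matrix
    return gmm, T

# Example: synthetic fixations clustered around a few image locations.
rng = np.random.default_rng(0)
fixations = np.vstack([rng.normal(c, 5.0, size=(50, 2))
                       for c in [(40, 40), (200, 60), (120, 180), (60, 200)]])
gmm, transitions = learn_gaze_structure(fixations)
```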

19.
In human–human communication we can adapt to, or learn, new gestures and new users using intelligence and contextual information. To achieve natural gesture-based interaction between humans and robots, the system should be adaptable to new users, gestures and robot behaviors. This paper presents an adaptive visual gesture recognition method for human–robot interaction using a knowledge-based software platform. The system is capable of recognizing users, static gestures composed of face and hand poses, and dynamic gestures of the face in motion. The system learns new users and poses using a multi-cluster approach, and combines computer vision and knowledge-based approaches in order to adapt to new users, gestures and robot behaviors. In the proposed method, a frame-based knowledge model is defined for person-centric gesture interpretation and human–robot interaction. It is implemented using the frame-based Software Platform for Agent and Knowledge Management (SPAK). The effectiveness of this method has been demonstrated in an experimental human–robot interaction system using the humanoid robot 'Robovie'.
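One way to picture the multi-cluster learning of new users and poses is a nearest-centroid recognizer that spawns a new cluster whenever a sample is too far from, or mislabeled by, the existing clusters. The sketch below is such an illustration; the distance threshold, learning rate, and class structure are assumptions, not SPAK's implementation.

```python
import numpy as np

class MultiClusterRecognizer:
    """Minimal sketch of a multi-cluster idea: each label keeps several
    centroids, and unfamiliar samples spawn a new cluster so new users
    and pose variations can be absorbed without retraining from scratch."""
    def __init__(self, new_cluster_dist=2.0, eta=0.2):
        self.clusters = []                       # list of (centroid, label)
        self.thresh, self.eta = new_cluster_dist, eta

    def classify(self, feature):
        if not self.clusters:
            return None, None
        d = [np.linalg.norm(feature - c) for c, _ in self.clusters]
        i = int(np.argmin(d))
        return self.clusters[i][1], d[i]

    def learn(self, feature, label):
        pred, dist = self.classify(feature)
        if dist is None or dist > self.thresh or pred != label:
            # unfamiliar or misclassified sample: add a new cluster
            self.clusters.append((np.array(feature, dtype=float), label))
        else:
            # familiar sample: nudge the nearest centroid toward it
            i = int(np.argmin([np.linalg.norm(feature - c)
                               for c, _ in self.clusters]))
            c, l = self.clusters[i]
            self.clusters[i] = (c + self.eta * (feature - c), l)

# Hypothetical pose feature vectors and labels.
rec = MultiClusterRecognizer()
rec.learn([0.1, 0.2, 0.0], "wave")
rec.learn([0.9, 0.8, 1.0], "point")
print(rec.classify([0.15, 0.25, 0.05]))   # -> ("wave", distance)
```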

20.
This paper presents a novel object–object affordance learning approach that enables intelligent robots to learn the interactive functionalities of objects from human demonstrations in everyday environments. Instead of considering a single object, we model the interactive motions between paired objects in a human–object–object way. The innate interaction-affordance knowledge of the paired objects is learned from a labeled training dataset that contains a set of relative motions of the paired objects, human actions, and object labels. The learned knowledge is represented with a Bayesian network, and the network can be used to improve the recognition reliability of both objects and human actions, and to generate a proper manipulation motion for the robot if a pair of objects is recognized. This paper also presents an image-based visual servoing approach that uses the learned motion features of the affordance in interaction as control goals to drive a robot to perform manipulation tasks.
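The servoing step, which uses learned motion features as control goals, follows the classical image-based visual servoing law v = -lambda * L+ (s - s*). Below is a short sketch with the standard point-feature interaction matrix; the feature values, depths, and gain are illustrative only.

```python
import numpy as np

def point_interaction_matrix(x, y, Z):
    """Classical image Jacobian of a normalized point feature (x, y) at
    depth Z, used here only to illustrate the servoing step."""
    return np.array([
        [-1 / Z, 0, x / Z, x * y, -(1 + x * x), y],
        [0, -1 / Z, y / Z, 1 + y * y, -x * y, -x],
    ])

def ibvs_velocity(features, goals, depths, lam=0.5):
    """Image-based visual servoing: drive current feature positions toward
    goal positions (e.g. the learned affordance motion features)."""
    L = np.vstack([point_interaction_matrix(x, y, Z)
                   for (x, y), Z in zip(features, depths)])
    error = (np.asarray(features) - np.asarray(goals)).ravel()
    return -lam * np.linalg.pinv(L) @ error    # 6-D camera velocity twist

# Example: two tracked features being driven toward learned goal locations.
v = ibvs_velocity(features=[(0.1, 0.05), (-0.2, 0.1)],
                  goals=[(0.0, 0.0), (-0.1, 0.05)],
                  depths=[1.0, 1.2])
```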
