Similar Documents
20 similar documents found (search time: 15 ms)
1.
Human–robot interaction during general service tasks in home or retail environments has proven challenging, partly because (1) robots lack high-level, context-based cognition and (2) humans cannot intuit the perception state of robots as they can for other humans. To address these two problems, we present a complete robot system, which received the highest evaluation score at the Customer Interaction Task of the Future Convenience Store Challenge at the World Robot Summit 2018 and implements two key technologies: (1) hierarchical spatial concept formation for general robot task planning and (2) a mixed reality interface that enables users to intuitively visualize the current state of the robot's perception and interact with it naturally. The results obtained during the competition indicate that the proposed system allows both non-expert operators and end users to achieve human–robot interaction in customer service environments. Furthermore, we describe a detailed scenario covering employee operation and customer interaction, which serves as a set of requirements for service robots and a road map for development. The system integration and task scenario described in this paper should be helpful for groups facing customer interaction challenges and looking for a successfully deployed base to build on.

2.
In human–robot interaction scenarios, an intelligent robot should be able to synthesize behavior adapted to the human's profile (i.e., personality). Recent studies have discussed the effect of personality traits on human verbal and nonverbal behaviors. The dynamic characteristics of the gestures and postures generated during nonverbal communication can differ according to personality traits, which can similarly influence the verbal content of human speech. This research maps human verbal behavior to a corresponding combined verbal and nonverbal robot behavior based on the extraversion–introversion personality dimension. We explore the human–robot personality matching aspect and the similarity attraction principle, as well as the different effects on interaction of the adapted combined robot behavior expressed through speech and gestures versus the adapted speech-only robot behavior. Experiments with the humanoid NAO robot are reported.

3.
If we are to achieve natural human–robot interaction, we may need to complement current vision and speech interfaces. Touch may provide us with an extra tool in this quest. In this paper we demonstrate the role of touch in interaction between a robot and a human. We show how infrared sensors located on robots can be easily used to detect and distinguish human interaction, in this case interaction with individual children. This application of infrared sensors potentially has many uses; for example, in entertainment or service robotics. This system could also benefit therapy or rehabilitation, where the observation and recording of movement and interaction is important. In the long term, this technique might enable robots to adapt to individuals or individual types of user.

4.
To enable a natural and fluent human–robot collaboration flow, it is critical for a robot to comprehend its human peers' ongoing actions, predict their behavior in the near future, and plan its own actions accordingly. Specifically, the capability of making early predictions is important, so that the robot can foresee the precise timing of a turn-taking event and start motion planning and execution early enough to smooth the turn-taking transition. Such proactive behavior reduces the human's waiting time, increases efficiency, and enhances naturalness in collaborative tasks. To that end, this paper presents the design and implementation of an early turn-taking prediction algorithm catered to physical human–robot collaboration scenarios. Specifically, a robotic scrub nurse system that can comprehend a surgeon's multimodal communication cues and perform turn-taking prediction is presented. The developed algorithm was tested on a collected dataset of simulated surgical procedures performed by surgeon–nurse pairs. The proposed turn-taking prediction algorithm is found to be significantly superior to its algorithmic counterparts and is more accurate than the human baseline when little partial input is given (less than 30% of the full action). After observing more information, the algorithm achieves performance comparable to humans, with an F1 score of 0.90.
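A rough sketch of how such an early predictor might be evaluated on partial input is given below: a hypothetical turn-taking classifier is scored on progressively truncated action sequences, with an F1 score reported per observation fraction. The predictor function, feature sequences, and fractions are invented placeholders for illustration, not the paper's algorithm or data.

```python
# Hypothetical evaluation sketch: score an early turn-taking predictor when it
# only sees the first fraction of each observed action (not the paper's code).
import numpy as np

def f1_score(y_true, y_pred):
    """Binary F1: harmonic mean of precision and recall."""
    tp = np.sum((y_pred == 1) & (y_true == 1))
    fp = np.sum((y_pred == 1) & (y_true == 0))
    fn = np.sum((y_pred == 0) & (y_true == 1))
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    return 2 * precision * recall / (precision + recall) if (precision + recall) else 0.0

def evaluate_early_prediction(predict_fn, sequences, labels, fractions=(0.1, 0.3, 0.5, 1.0)):
    """predict_fn(partial_sequence) -> 0/1 turn-taking decision (user-supplied model).

    sequences: list of per-action multimodal feature arrays (one row per time step).
    labels:    1 if the action ends in a turn-taking event, else 0.
    """
    labels = np.asarray(labels)
    scores = {}
    for frac in fractions:
        preds = np.array([predict_fn(seq[: max(1, int(len(seq) * frac))])
                          for seq in sequences])
        scores[frac] = f1_score(labels, preds)   # one F1 per observed fraction
    return scores
```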

5.
In human–human communication we can adapt to new gestures or new users using intelligence and contextual information. To achieve natural gesture-based interaction between humans and robots, the system should likewise be adaptable to new users, gestures, and robot behaviors. This paper presents an adaptive visual gesture recognition method for human–robot interaction using a knowledge-based software platform. The system is capable of recognizing users, static gestures comprising face and hand poses, and dynamic gestures of the face in motion. It learns new users and poses using a multi-cluster approach, and combines computer vision and knowledge-based approaches in order to adapt to new users, gestures, and robot behaviors. In the proposed method, a frame-based knowledge model is defined for person-centric gesture interpretation and human–robot interaction. It is implemented using the frame-based Software Platform for Agent and Knowledge Management (SPAK). The effectiveness of this method has been demonstrated with an experimental human–robot interaction system using the humanoid robot 'Robovie'.

6.
The timing involved in generating communicative actions and utterances in a face-to-face greeting interaction is analyzed by synthesis, for application in robot–human and computer-generated (CG) character–human interaction support systems. First, an analysis of human greetings clarifies the average pause length and the average delay of the utterance relative to a communicative action. Then, a synthesis-based analysis is performed using an embodied robot system. This analysis confirms that variations in the pause and in the lag of the utterance relative to communicative actions produce different communicative effects; for example, a lag of about 0.3 s is desirable for a familiar greeting, while a longer lag is appropriate for a polite greeting. In addition, a synthesis-based analysis performed on a CG character system confirms the timing control effects. These results demonstrate the importance of timing control in embodied interactions as well as the applicability of such interactions in advanced communications with robots and CG characters.

7.
An extensive fuzzy behavior-based architecture is proposed for the control of mobile robots in a multiagent environment. The behavior-based architecture decomposes the complex multirobotic system into smaller modules of roles, behaviors, and actions. Fuzzy logic is used to implement individual behaviors, to coordinate the various behaviors, to select roles for each robot, and for robot perception, decision-making, and speed control. The architecture is implemented on a team of three soccer robots performing different roles interchangeably. The robot behaviors and roles are designed to be complementary to each other, so that a coherent team of robots exhibiting good collective behavior is obtained.
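As a loose illustration of fuzzy behavior coordination (not the architecture from this paper), the sketch below lets each behavior propose a (speed, turn) command and blends the proposals using fuzzy applicability degrees; the membership parameters and sensor readings are made up.

```python
# Illustrative fuzzy behavior blending: each behavior proposes a command and a
# fuzzy applicability degree in [0, 1] weights its contribution.
import numpy as np

def triangular(x, a, b, c):
    """Triangular membership function peaking at b (a <= b <= c)."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def blend_behaviors(proposals):
    """proposals: list of (weight, command) pairs, command = np.array([speed, turn])."""
    weights = np.array([w for w, _ in proposals])
    commands = np.array([c for _, c in proposals])
    if weights.sum() == 0:
        return np.zeros(2)                     # no behavior applies: stop
    return weights @ commands / weights.sum()  # weighted average command

# Example: "seek ball" vs. "avoid obstacle", weighted by fuzzy distance memberships.
ball_dist, obstacle_dist = 1.2, 0.4            # metres (made-up sensor readings)
proposals = [
    (triangular(ball_dist, 0.0, 2.0, 4.0),     np.array([0.8,  0.1])),   # seek ball
    (triangular(obstacle_dist, 0.0, 0.0, 1.0), np.array([0.2, -0.9])),   # avoid obstacle
]
print(blend_behaviors(proposals))              # blended (speed, turn) command
```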

8.
To address autonomous navigation of mobile robots in unknown environments, a reactive navigation method based on human–robot interaction is proposed. Building on fuzzy logic for implementing the robot's basic intelligent behaviors, a hybrid behavior-coordination method based on priorities and a finite state machine establishes an "environmental stimulus–reaction" mechanism that improves the robot's local autonomy. A "human stimulus–reaction" mechanism is then introduced into the robot system to improve its ability to understand the environment and make decisions. Simulations of autonomous navigation toward a specified goal were carried out with the proposed method in different environment models, and the results verify its effectiveness.

9.
We propose an integrated technique of genetic programming (GP) and reinforcement learning (RL) that enables a real robot to adapt its actions to a real environment. Our technique does not require a precise simulator because learning is achieved through the real robot. In addition, our technique makes it possible for real robots to learn effective actions. Based on the proposed technique, we use GP to acquire common programs that are applicable to various types of robots, and then execute RL on a real robot through the acquired program. With our method, the robot can adapt to its own operational characteristics and learn effective actions. In this paper, we show experimental results from two different robots: a four-legged robot, "AIBO," and a humanoid robot, "HOAP-1." We present results showing that both effectively solved the box-moving task; the end result demonstrates that our proposed technique performs better than the traditional Q-learning method.
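For reference, the traditional Q-learning baseline mentioned at the end of the abstract can be sketched as follows; the discrete state/action spaces and hyperparameters are assumptions for illustration, and the GP+RL method itself is not reproduced here.

```python
# Minimal tabular Q-learning sketch (the baseline referenced above).
import random
from collections import defaultdict

class QLearner:
    def __init__(self, actions, alpha=0.1, gamma=0.9, epsilon=0.1):
        self.q = defaultdict(float)            # (state, action) -> estimated value
        self.actions = actions
        self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon

    def act(self, state):
        if random.random() < self.epsilon:     # explore
            return random.choice(self.actions)
        return max(self.actions, key=lambda a: self.q[(state, a)])  # exploit

    def update(self, state, action, reward, next_state):
        best_next = max(self.q[(next_state, a)] for a in self.actions)
        td_target = reward + self.gamma * best_next
        self.q[(state, action)] += self.alpha * (td_target - self.q[(state, action)])
```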

10.
This paper presents a novel object–object affordance learning approach that enables intelligent robots to learn the interactive functionalities of objects from human demonstrations in everyday environments. Instead of considering a single object, we model the interactive motions between paired objects in a human–object–object way. The innate interaction-affordance knowledge of the paired objects is learned from a labeled training dataset that contains a set of relative motions of the paired objects, human actions, and object labels. The learned knowledge is represented with a Bayesian network, which can be used to improve the recognition reliability of both objects and human actions and to generate proper manipulation motion for a robot when a pair of objects is recognized. This paper also presents an image-based visual servoing approach that uses the learned motion features of the affordance in interaction as control goals to drive a robot to perform manipulation tasks.
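The last sentence refers to image-based visual servoing; below is a minimal sketch of the standard IBVS control law, under the assumption that the learned affordance motion features supply the desired feature vector. The interaction matrix, gain, and example numbers are placeholders, not the authors' formulation.

```python
# Minimal image-based visual servoing sketch (textbook law, assumed here rather
# than taken from the paper): drive image features s toward desired features s_star.
import numpy as np

def ibvs_velocity(s, s_star, L, gain=0.5):
    """Classic IBVS law: v = -gain * pinv(L) @ (s - s_star).

    s, s_star: current and desired image feature vectors.
    L: interaction (image Jacobian) matrix mapping camera velocity to feature velocity.
    """
    error = np.asarray(s) - np.asarray(s_star)
    return -gain * np.linalg.pinv(L) @ error

# Toy example: two point features (4 coordinates), 6-DoF camera velocity command.
L = np.random.default_rng(1).normal(size=(4, 6))   # placeholder interaction matrix
s = np.array([0.10, 0.05, -0.20, 0.15])
s_star = np.array([0.00, 0.00, -0.10, 0.10])       # e.g. a learned affordance goal
print(ibvs_velocity(s, s_star, L))
```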

11.
Human–human interaction consists of various nonverbal behaviors that are often emotion-related. To establish rapport, it is essential that the listener respond with a reactive emotion that makes sense given the speaker's emotional state. However, human–robot interactions generally fail in this regard because most spoken dialogue systems play only a question-answering role. Aiming for natural conversation, we examine an emotion processing module, consisting of a user emotion recognition function and a reactive emotion expression function, for a spoken dialogue system to improve human–robot interaction. For the emotion recognition function, we propose a method that combines valence from prosody and sentiment from text by decision-level fusion, which considerably improves performance. Moreover, this method reduces fatal recognition errors, thereby improving the user experience. For the reactive emotion expression function, the system's emotion is divided into an emotion category and an emotion level, which are predicted using the parameters estimated by the recognition function, on the basis of distributions inferred from human–human dialogue data. As a result, the emotion processing module can recognize the user's emotion from his or her speech and express a reactive emotion that matches it. An evaluation with ten participants demonstrated that a system enhanced by this module is effective for conducting natural conversation.
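A minimal sketch of decision-level fusion, assuming a prosody-based valence score and a text-based sentiment score both normalized to [-1, 1]; the weights, thresholds, and labels are illustrative and not taken from the paper.

```python
# Decision-level fusion sketch: combine two classifier outputs after each has
# already run, rather than fusing their raw features.
def fuse_emotion(prosody_valence, text_sentiment, w_prosody=0.5, w_text=0.5):
    """Both inputs in [-1, 1]; returns (fused score, coarse emotion label)."""
    fused = w_prosody * prosody_valence + w_text * text_sentiment
    if fused > 0.2:
        label = "positive"
    elif fused < -0.2:
        label = "negative"
    else:
        label = "neutral"
    return fused, label

print(fuse_emotion(0.6, -0.1))   # e.g. excited-sounding voice, mildly negative wording
```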

12.
In this article we present a multipart formal design and evaluation of the style-by-demonstration (SBD) approach to creating interactive robot behaviors, which enables people to design the style of interactive robot behaviors by providing an exemplar. We first introduce our Puppet Master SBD algorithm, which enables the creation of interactive robot behaviors with a focus on style: users provide an example demonstration of human–robot interaction, and Puppet Master uses this to generate real-time interactive robot output that matches the demonstrated style. We further designed and implemented original interfaces for demonstrating interactive robot style and for interacting with the resulting robot behaviors. We then detail a set of studies we performed to appraise users' reactions to and acceptance of the SBD interaction design approach, the effectiveness of the underlying Puppet Master algorithm, and the usability of the demonstration interfaces. Fundamentally, this article investigates the broad questions of how people respond to SBD interaction, how they engage with SBD interfaces, how SBD can be practically realized, and how the SBD approach to social human–robot interaction can be employed in future interaction design.

13.
S. Hoshino & K. Maki, Advanced Robotics, 2013, 27(17): 1095–1109
In order for robots to coexist with humans, safety for humans must be strictly ensured. On the other hand, ensuring safety might decrease the working efficiency of robots; that is, there is a trade-off between human safety and robot efficiency in the field of human–robot interaction. For this problem, we propose a novel motion planning technique for multiple mobile robots. Two artificial potentials are presented for generating repulsive forces. The first potential is provided for humans; the von Mises distribution is used to model the behavioral properties of humans. The second potential is provided for the robots; kernel density estimation is used to capture the global robot congestion. Through simulation experiments, the effectiveness of the behavior and congestion potentials of the motion planning technique for human safety and robot efficiency is discussed. Moreover, a sensing system for humans in a real environment is developed. From the experimental results, the significance of the behavior potential based on actual humans is discussed. For the coexistence of humans and robots, it is important to evaluate their mutual influence. For this purpose, a virtual space is built using projection mapping. Finally, the effectiveness of the motion planning technique for human–robot interaction is discussed from the point of view of not only the robots but also the humans.
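The two potentials can be pictured with the following sketch, assuming a von Mises term that grows stronger in the person's walking direction and a kernel density estimate over robot positions as the congestion measure; the scaling, parameters, and example positions are invented, not the authors' model.

```python
# Illustrative sketch of the two repulsive potentials (not the paper's code).
import numpy as np
from scipy.stats import vonmises, gaussian_kde

def human_potential(robot_xy, human_xy, human_heading, kappa=2.0, scale=1.0):
    """Higher value in front of the human (along the heading); decays with distance."""
    diff = np.asarray(robot_xy, float) - np.asarray(human_xy, float)
    dist = np.linalg.norm(diff) + 1e-6
    bearing = np.arctan2(diff[1], diff[0]) - human_heading
    directional = vonmises.pdf(bearing, kappa)     # peaks along the walking direction
    return scale * directional / dist

def congestion_potential(robot_xy, all_robot_xy):
    """Kernel density estimate over all robot positions as a congestion measure."""
    kde = gaussian_kde(np.asarray(all_robot_xy, float).T)
    return float(kde(np.asarray(robot_xy, float).reshape(2, 1)))

# Example: a human at the origin walking along +x, three robots in the scene.
print(human_potential([1.0, 0.2], [0.0, 0.0], human_heading=0.0))
print(congestion_potential([1.0, 0.2], [[1.0, 0.2], [1.5, 0.0], [-2.0, 1.0]]))
```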

14.
We developed a robot patient for patient-transfer training that simulates a patient's performance during transfer and enables nurses to practice their nursing skills on it. To realize the robot patient, we focused on designing its limb actions so that it can respond to nurses' operations. RC servos and electromagnetic brakes were installed in the joints to enable the robot to simulate a patient's limb actions, such as embracing and remaining standing. To enable the robot to respond to nurses' operations automatically, an identification method for these operations was developed that uses voice commands and features of the limbs' posture measured by angle sensors installed in the robot's joints. The robot patient's performance was examined in a control test in which four experienced nursing teachers performed patient transfer with the robot patient and a human-simulated patient. The results revealed that the robot patient could successfully simulate the actions of a patient's limbs according to the nursing teachers' operations and that it is suitable for nursing skill training.

15.
Research on humanoid robots has produced various uses for their body properties in communication. In particular, mutual relationships of body movements between a robot and a human are considered important for smooth and natural communication, as they are in human–human communication. We have developed a semi-autonomous humanoid robot system that is capable of cooperative body movements with humans using environment-based sensors and switching communicative units. Concretely, this system realizes natural communication by using typical behaviors such as "nodding," "eye contact," and "face-to-face" orientation. It is important to note that the robot's parts are NOT operated directly; only the communicative units in the robot system are switched. We conducted an experiment using this robot system and verified the importance of cooperative behaviors in a route-guidance situation where a human gives directions to the robot. The task requires a human participant (the "speaker") to teach a route to a "hearer" that is (1) a human, (2) the developed robot performing cooperative movements, or (3) a robot that does not move at all. The experiment is evaluated subjectively through a questionnaire and through an analysis of body movements using three-dimensional data from a motion capture system. The results indicate that cooperative body movements greatly enhance the emotional impressions of human speakers in a route-guidance situation. We believe these results will allow us to develop interactive humanoid robots that communicate sociably with humans.

16.
This paper presents a technique that allows a reactive mobile robot to behave adaptively in unforeseen and dynamic circumstances. A robot in nonstationary environments needs to infer how to adapt its behavior to the changing environment. The behavior-based approach manages the interactions between the robot and its environment for generating behaviors, but despite its strength of fast response, it has not been widely applied to more complex problems requiring high-level behaviors. For that reason, many researchers employ a behavior-based deliberative architecture. This paper proposes a two-layer control architecture for generating adaptive behaviors to perceive and avoid moving obstacles as well as stationary obstacles. The first layer generates reflexive and autonomous behaviors with a behavior network, and the second layer infers the dynamic situation of the mobile robot with a Bayesian network. These two levels facilitate tight integration between high-level inference and low-level behaviors. Experimental results from various simulations and a real robot have shown that, with the proposed architecture, the robot reaches the goal points while avoiding stationary and moving obstacles.
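A toy sketch of the two-layer idea follows, assuming a single-node Bayesian update in the upper layer and hand-written activation rules standing in for the behavior network in the lower layer; none of this reproduces the paper's networks.

```python
# Toy two-layer control sketch: the upper layer infers the situation from sensor
# evidence, the lower layer picks among reactive behaviors based on that inference.
def infer_moving_obstacle(prior, p_evidence_given_moving, p_evidence_given_static):
    """Single-node Bayesian update: P(moving | evidence)."""
    num = p_evidence_given_moving * prior
    den = num + p_evidence_given_static * (1.0 - prior)
    return num / den

def select_behavior(p_moving, obstacle_near):
    """Lower reactive layer: crude activation values over three behaviors."""
    activations = {
        "go_to_goal":      1.0 if not obstacle_near else 0.2,
        "avoid_static":    0.8 if obstacle_near and p_moving < 0.5 else 0.0,
        "yield_to_moving": 0.9 if obstacle_near and p_moving >= 0.5 else 0.0,
    }
    return max(activations, key=activations.get)

p_moving = infer_moving_obstacle(prior=0.3, p_evidence_given_moving=0.7,
                                 p_evidence_given_static=0.1)
print(p_moving, select_behavior(p_moving, obstacle_near=True))
```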

17.
In this paper, we want to propose the idea that some techniques used for animal training might be helpful for solving human–robot interaction problems in the context of entertainment robotics. We present a model for teaching complex actions to an animal-like autonomous robot based on “clicker training”, a method used efficiently by professional trainers for animals of different species. After describing our implementation of clicker training on an enhanced version of AIBO, Sony’s four-legged robot, we argue that this new method can be a promising technique for teaching unusual behavior and sequences of actions to a pet robot.

18.
Small humanoid robots are becoming more affordable and are now used in fields such as human–robot interaction, ethics, psychology, and education. For non-roboticists, the standard paradigm for visual robot programming is based on the selection of behavioral blocks, followed by their connection using communication links. Such programs provide efficient user support during the development of complex series of movements and sequential behaviors. However, implementing dynamic control remains challenging because users must themselves organize the data flow between components to implement control loops, object permanence, memories of object positions, odometry, and finite state machines. In this study, we develop a new programming paradigm, Targets-Drives-Means, which is suitable for the specification of dynamic robotic tasks. In the proposed approach, programming is based on the declarative association of reusable dynamic components. A central memory organizes the information flows automatically, and issues related to dynamic control are solved by processes that remain hidden from end users. The proposed approach has advantages for the implementation of dynamic behaviors, but it requires that users stop conceiving of robotic tasks as the execution of a sequence of actions. Instead, users must organize their programs as collections of behaviors that run in parallel and compete for activation. This might be considered non-intuitive, but we also report the positive outcomes of a usability experiment that evaluated the accessibility of the proposed approach.

19.
In designing robot systems for human interaction, designers draw on aspects of human behavior that help them achieve specific design goals. For instance, the designer of an educational robot system may use speech, gaze, and gesture cues in a way that enhances its students' learning. But what set of behaviors improves such outcomes? How might designers of such a robot system determine this set of behaviors? Conventional approaches to answering such questions primarily involve designers carrying out a series of experiments in which they manipulate a small number of design variables and measure the effects of these manipulations on specific interaction outcomes. However, these methods become infeasible when the design space is large and when the designer needs to understand the extent to which each variable contributes to achieving the desired effects. In this paper, we present a novel multivariate method for evaluating which behaviors of interactive robot systems improve interaction outcomes. We illustrate the use of this method in a case study in which we explore how different types of narrative gestures of a storytelling robot improve users' recall of the robot's story, their ability to retell the robot's story, their perceptions of and rapport with the robot, and their overall engagement in the experiment.
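One way to picture a multivariate evaluation of this kind is a linear model of one outcome on several behavior variables fitted at once, so each coefficient indicates that variable's contribution. The sketch below is only an illustration under that assumption; the variable names are hypothetical and the data are synthetic, generated inside the snippet, not the authors' method or results.

```python
# Illustrative multivariate sketch: ordinary least squares of one interaction
# outcome on several binary robot-behavior variables (synthetic data).
import numpy as np

rng = np.random.default_rng(0)
n = 40
X = rng.integers(0, 2, size=(n, 3)).astype(float)    # e.g. deictic / beat / metaphoric gestures on-off
recall = 0.5 + 0.3 * X[:, 0] + 0.1 * X[:, 1] + rng.normal(0, 0.1, n)  # synthetic outcome

design = np.column_stack([np.ones(n), X])             # add intercept column
coef, *_ = np.linalg.lstsq(design, recall, rcond=None)
for name, b in zip(["intercept", "deictic", "beat", "metaphoric"], coef):
    print(f"{name}: {b:+.2f}")                        # per-variable contribution estimate
```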

20.
This paper addresses a new method for combining supervised learning and reinforcement learning (RL). Applying supervised learning to robot navigation encounters serious challenges, such as inconsistent and noisy data, difficulty in gathering training data, and high error in the training data. RL capabilities, such as training with only a scalar evaluation signal and a high degree of exploration, have encouraged researchers to use RL for the robot navigation problem. However, RL algorithms are time-consuming and suffer from a high failure rate in the training phase. Here, we propose Supervised Fuzzy Sarsa Learning (SFSL) as a novel way of utilizing the advantages of both supervised and reinforcement learning algorithms. A zero-order Takagi–Sugeno fuzzy controller with several candidate actions for each rule is the main module of the robot's controller. The aim of training is to find the best action for each fuzzy rule. In the first step, a human supervisor drives an E-puck robot within the environment and the training data are gathered. In the second step, as a hard tuning, the training data are used to initialize the value (worth) of each candidate action in the fuzzy rules. Afterwards, the fuzzy Sarsa learning module, as a critic-only fuzzy reinforcement learner, fine-tunes the parameters of the conclusion parts of the fuzzy controller online. The proposed algorithm is used for driving the E-puck robot in environments with obstacles. The experimental results show that the proposed approach decreases the learning time and the number of failures; it also improves the quality of the robot's motion in the testing environments.
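A compact sketch of the two training stages described above, under the assumption of a tabular value per (rule, candidate action): demonstration counts initialize the values, and a fuzzy Sarsa-style temporal-difference update refines them online. The rule firing strengths and the demonstration log would come from the Takagi–Sugeno controller, which is not shown; all parameters are illustrative rather than the paper's.

```python
# Sketch of supervised initialization followed by a fuzzy Sarsa update.
import numpy as np

def init_from_demonstration(n_rules, n_actions, demo_pairs, bonus=1.0):
    """demo_pairs: list of (rule_index, action_index) choices taken by the human driver."""
    q = np.zeros((n_rules, n_actions))
    for rule, action in demo_pairs:
        q[rule, action] += bonus                 # hard tuning: favor demonstrated choices
    return q

def fuzzy_sarsa_update(q, firing, actions, reward, next_firing, next_actions,
                       alpha=0.1, gamma=0.95):
    """firing: rule firing strengths (normalized); actions: chosen candidate per rule."""
    q_sa = sum(f * q[r, a] for r, (f, a) in enumerate(zip(firing, actions)))
    q_next = sum(f * q[r, a] for r, (f, a) in enumerate(zip(next_firing, next_actions)))
    delta = reward + gamma * q_next - q_sa       # TD error shared across rules
    for r, (f, a) in enumerate(zip(firing, actions)):
        q[r, a] += alpha * delta * f             # update proportional to firing strength
    return q
```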
