Found 20 similar documents; search took 15 ms
1.
2.
Arash Ajoudani Andrea Maria Zanchettin Serena Ivaldi Alin Albu-Schäffer Kazuhiro Kosuge Oussama Khatib 《Autonomous Robots》2018,42(5):957-975
Recent technological advances in the hardware design of robotic platforms have enabled the implementation of various control modalities for improved interaction with humans and unstructured environments. An important application area for robots with such advanced interaction capabilities is human–robot collaboration. This area has high socio-economic impact and preserves the sense of purpose of the people involved, as the robots do not completely replace humans in the work process. The research community's recent surge of interest in this area has been devoted to methodologies for achieving intuitive and seamless human–robot-environment interaction by combining the collaborative partners' superior capabilities, e.g. the human's cognition and the robot's physical power generation capacity. The main purpose of this paper is to review the state of the art on intermediate (bi-directional) human–robot interfaces, robot control modalities, system stability, benchmarking, and relevant use cases, and to outline the future developments required in the realm of human–robot collaboration.
3.
For a robot to cohabit with people, it should be able to learn people's nonverbal social behavior from experience. In this paper, we propose a novel machine learning method for recognizing gestures used in interaction and communication. Our method enables robots to learn gestures incrementally during human–robot interaction in an unsupervised manner, and it allows the number and types of gestures to be left undefined prior to learning. The proposed method (HB-SOINN) is based on a self-organizing incremental neural network and the hidden Markov model. We have added an interactive learning mechanism to HB-SOINN to prevent a single cluster from failing as a result of polysemy, i.e. being assigned more than one meaning. For example, the sentence "Keep on going left slowly" has three components: "keep on (1)", "going left (2)", "slowly (3)". We experimentally tested the clustering performance of the proposed method on data obtained by measuring gestures with a motion capture device. The results show that the classification performance of HB-SOINN exceeds that of conventional clustering approaches. In addition, we found that the interactive learning function improves the learning performance of HB-SOINN.
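The incremental, unsupervised clustering step described in the abstract above can be sketched in miniature. The class below is an illustrative simplification, not the actual HB-SOINN (which couples a self-organizing incremental neural network with hidden Markov models); the `threshold` parameter and the online-mean centroid update are assumptions made for the sketch.

```python
import math

class IncrementalClusterer:
    """Toy incremental clusterer: gestures arrive one at a time, the number
    of clusters is not fixed in advance (in the spirit of SOINN-style
    learning, but much simplified)."""

    def __init__(self, threshold):
        self.threshold = threshold  # max distance to join an existing cluster
        self.centroids = []         # running centroid per cluster
        self.counts = []            # samples absorbed per cluster

    def observe(self, x):
        """Assign feature vector x to the nearest cluster, or spawn a new one."""
        if self.centroids:
            d, i = min((math.dist(c, x), i)
                       for i, c in enumerate(self.centroids))
            if d <= self.threshold:
                # move the centroid toward x (online mean update)
                n = self.counts[i] + 1
                self.centroids[i] = tuple(c + (xi - c) / n
                                          for c, xi in zip(self.centroids[i], x))
                self.counts[i] = n
                return i
        self.centroids.append(tuple(x))
        self.counts.append(1)
        return len(self.centroids) - 1
```

Feeding it a stream of gesture feature vectors grows the cluster set on demand, which is the property that lets the user leave the number and types of gestures undefined before learning.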
4.
Cognition, Technology & Work - Human–robot collaboration in dynamic industrial environments warrants robot flexibility and shifting between tasks. Adaptive robot behavior unavoidably...
5.
The premise of human–robot collaboration is that robots have adaptive trajectory planning strategies in a hybrid work cell. The aim of this paper is to propose a new online collision avoidance trajectory planning algorithm for moderately dynamic environments, to ensure human safety when sharing collaborative tasks. The algorithm consists of two parts: trajectory generation and local optimization. First, based on empirical Dirichlet Process Gaussian Mixture Model (DPGMM) distribution learning, a neural network trajectory planner called the Collaborative Waypoint Planning network (CWP-net) is proposed to generate, from environmental inputs, all the key waypoints required for dynamic obstacle avoidance in joint space. These points are used to generate quintic-spline smooth motion trajectories with velocity and acceleration constraints. Second, we present an improved Stochastic Trajectory Optimization for Motion Planning (STOMP) algorithm that locally optimizes the trajectories generated by CWP-net, constraining the optimization range and direction through the DPGMM model. Simulations and real experiments on an industrial use case of human–robot collaboration in aircraft assembly testing show that the proposed algorithm can smoothly adjust the nominal path online and effectively avoid collisions during collaboration.
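The abstract above fits quintic smooth trajectories through the planned waypoints under velocity and acceleration constraints. One segment of such a trajectory has a closed-form solution; the sketch below shows only this single-segment step with hypothetical function names (the CWP-net and STOMP stages are not reproduced).

```python
def quintic_coeffs(p0, v0, a0, p1, v1, a1, T):
    """Coefficients c0..c5 of q(t) = sum(c[k] * t**k) meeting the six
    boundary conditions on position, velocity, and acceleration at t=0, t=T."""
    h = p1 - p0
    c0, c1, c2 = p0, v0, a0 / 2.0
    c3 = (20 * h - (8 * v1 + 12 * v0) * T - (3 * a0 - a1) * T ** 2) / (2 * T ** 3)
    c4 = (-30 * h + (14 * v1 + 16 * v0) * T + (3 * a0 - 2 * a1) * T ** 2) / (2 * T ** 4)
    c5 = (12 * h - 6 * (v1 + v0) * T + (a1 - a0) * T ** 2) / (2 * T ** 5)
    return [c0, c1, c2, c3, c4, c5]

def eval_poly(c, t):
    """Evaluate the polynomial at time t."""
    return sum(ck * t ** k for k, ck in enumerate(c))
```

For a rest-to-rest segment (zero boundary velocity and acceleration) this reduces to the classic minimum-jerk profile 10t³ - 15t⁴ + 6t⁵; chaining one such segment per consecutive waypoint pair gives a smooth joint-space trajectory.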
6.
7.
In this article, a learning framework that enables robotic arms to replicate new skills from human demonstration is proposed. The framework uses online human motion data acquired with wearable devices as an interactive interface for conveying the intended motion to the robot in an efficient and user-friendly way. This approach lets human tutors control all joints of the robotic manipulator in real time and achieve complex manipulation. The manipulator is controlled remotely with our low-cost wearable devices, which allow easy calibration and continuous motion mapping. We believe our approach can improve human–robot skill learning, adaptability, and the sensitivity of the proposed human–robot interaction for flexible task execution, thereby enabling skill transfer and repeatability without complex coding skills.
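The wearable interface above depends on calibration and continuous motion mapping from raw sensor readings to robot joint angles. A deliberately simple two-pose linear calibration, offered as a stand-in for whatever mapping the authors actually use, might look like this:

```python
def calibrate(sensor_min, sensor_max, joint_min, joint_max):
    """Return a function mapping a raw wearable reading onto a joint angle,
    using a two-pose linear calibration (an illustrative simplification,
    not the paper's actual procedure)."""
    span_s = sensor_max - sensor_min
    span_j = joint_max - joint_min

    def to_joint(raw):
        t = (raw - sensor_min) / span_s
        t = min(1.0, max(0.0, t))        # clamp to the calibrated range
        return joint_min + t * span_j

    return to_joint
```

In use, the tutor would hold two reference poses per joint to record `sensor_min`/`sensor_max`, after which every streamed sample maps continuously onto the joint range.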
8.
We propose an imitation learning methodology that allows robots to seamlessly retrieve objects from, and pass objects to, human users. Instead of hand-coding interaction parameters, we extract relevant information, such as joint correlations and spatial relationships, from a single task demonstration performed by two humans. At the center of our approach is an interaction model that enables a robot to generalize an observed demonstration spatially and temporally to new situations. To this end, we propose a data-driven method for generating interaction meshes that link both interaction partners to the manipulated object. The feasibility of the approach is evaluated in a user study, which shows that human–human task demonstration can lead to more natural and intuitive interactions with the robot.
9.
Human–Robot Collaboration (HRC) has a pivotal role in smart manufacturing, given strict requirements of human-centricity, sustainability, and resilience. However, existing HRC development mainly follows either a human-dominant or a robot-dominant manner, in which human and robotic agents reactively perform operations by following pre-defined instructions, falling far short of an efficient integration of robotic automation and human cognition. Such stiff human–robot relations cannot handle complex manufacturing tasks and cannot ease the physical and psychological load of human operators. In response to these needs, this paper presents our arguments on the trend, concept, systematic architecture, and enabling technologies of Proactive HRC, serving as a prospective vision and research topic for future work in the human-centric smart manufacturing era. The human–robot symbiotic relation is evolving with a 5C intelligence, from Connection, Coordination, Cyber, and Cognition to Coevolution, finally embracing mutual-cognitive, predictable, and self-organising capabilities, i.e., Proactive HRC. With proactive robot control, multiple human and robotic agents collaboratively operate manufacturing tasks, considering each other's operation needs, desired resources, and complementary capabilities. The paper also highlights current challenges and future research directions, which deserve more research effort before real-world applications of Proactive HRC. We hope this work can attract open discussion and provide useful insights to both academic and industrial practitioners in their exploration of human–robot flexible production.
10.
Technological progress increasingly envisions the use of robots interacting with people in everyday life. Human–robot collaboration (HRC) is the approach that explores the interaction between a human and a robot during the completion of a common objective, at both the cognitive and the physical level. In HRC work, a cognitive model is typically built which collects inputs from the environment and from the user, and elaborates and translates these into information that can be used by the robot itself. Machine learning is a recent approach to building this cognitive and behavioural block, with high potential in HRC. Consequently, this paper proposes a thorough literature review of the use of machine learning techniques in the context of human–robot collaboration. Forty-five key papers were selected and analysed, and a clustering of the works based on the type of collaborative task, the evaluation metrics, and the cognitive variables modelled is proposed. A deeper analysis of different families of machine learning algorithms and their properties, along with the sensing modalities used, is then carried out. Among the observations, the review highlights the importance of machine learning algorithms that incorporate time dependencies. The salient features of these works are then cross-analysed to show trends in HRC and to give guidelines for future work, comparing them with aspects of HRC that did not appear in the review.
11.
Luka Peternel Nikos Tsagarakis Darwin Caldwell Arash Ajoudani 《Autonomous Robots》2018,42(5):1011-1021
In this paper, we propose a novel method for human–robot collaboration in which the robot's physical behaviour is adapted online to the human's motor fatigue. The robot starts as a follower and imitates the human. As the collaborative task is performed under the human's lead, the robot gradually learns the parameters and trajectories related to the task execution, while monitoring the human's fatigue. When a predefined level of fatigue is detected, the robot uses the learnt skill to take over the physically demanding aspects of the task and lets the human recover some strength. The human remains present to perform the aspects of the collaborative task that the robot cannot fully take over, and maintains overall supervision. The robot adaptation system is based on Dynamical Movement Primitives, Locally Weighted Regression, and Adaptive Frequency Oscillators. The human motor fatigue is estimated with a proposed online model based on muscle activity measured by electromyography. We demonstrate the proposed approach in experiments on real-world co-manipulation tasks: material sawing and surface polishing.
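The online fatigue estimation above is driven by measured muscle activity (EMG). A toy accumulate-and-recover estimator, with made-up parameters and an update rule only loosely inspired by the paper's capacity-based model, could look like this:

```python
class FatigueMonitor:
    """Illustrative online fatigue estimator (not the paper's actual model):
    smooth the rectified EMG into an effort envelope, accumulate fatigue
    while effort exceeds a sustainable capacity, recover below it."""

    def __init__(self, capacity=0.5, gain=0.1, alpha=0.2, trigger=1.0):
        self.capacity = capacity   # effort level sustainable without fatiguing
        self.gain = gain           # fatigue accumulation/recovery rate
        self.alpha = alpha         # EMA smoothing factor for the EMG envelope
        self.trigger = trigger     # fatigue level at which the robot takes over
        self.envelope = 0.0
        self.fatigue = 0.0

    def update(self, emg_sample):
        """Consume one raw EMG sample; return True when the robot should
        take over the physically demanding part of the task."""
        # rectify and smooth the raw sample into an effort envelope
        self.envelope += self.alpha * (abs(emg_sample) - self.envelope)
        # accumulate fatigue above capacity, recover (never below zero) under it
        self.fatigue = max(0.0, self.fatigue
                           + self.gain * (self.envelope - self.capacity))
        return self.fatigue >= self.trigger
```

Sustained high activation eventually crosses the trigger (the robot takes over), and low activation afterwards lets the estimate decay, mirroring the recover-strength phase described in the abstract.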
12.
Cognition, Technology & Work
13.
International Journal of Human-Computer Studies, 2013, 71(3):250-260
The idea of robotic companions capable of establishing meaningful relationships with humans remains far from accomplished. To achieve this, robots must interact with people in natural ways, employing the social mechanisms that people use when interacting with each other. One such mechanism is empathy, often seen as the basis of social cooperation and prosocial behaviour. We argue that artificial companions capable of behaving empathically, which involves the capacity to recognise another's affect and respond appropriately, are more successful at establishing and maintaining a positive relationship with users. This paper presents a study in which an autonomous robot with empathic capabilities acts as a social companion to two players in a chess game. The robot reacts to the moves played on the chessboard by displaying several facial expressions and verbal utterances, showing empathic behaviours towards one player and behaving neutrally towards the other. Quantitative and qualitative results from 31 participants indicate that users towards whom the robot behaved empathically perceived the robot as friendlier, which supports our hypothesis that empathy plays a key role in human–robot interaction.
14.
This work proposes a shared-control tele-operation framework that adapts its cooperative properties to the estimated skill level of the operator. It is hypothesized that different aspects of an operator's performance in a tele-operated path tracking task can be assessed with conventional machine learning methods using motion-based and task-related features. To identify performance measures that capture the motor skills involved in the studied task, an experiment is conducted in which users new to tele-operation practice towards motor skill proficiency over 7 training sessions. A set of classifiers is then learned from the acquired data and selected features, which generates a skill profile comprising estimates of the user's various competences. Skill profiles are exploited to modify the behavior of the assistive robotic system accordingly, with the objective of enhancing user experience by avoiding unnecessary restriction of skilled users. In a second experiment, novice and expert users execute the path tracking task on different pathways while being assisted by the robot according to their estimated skill profiles. Results validate the skill estimation method and hint at the feasibility of shared-control customization in tele-operated path tracking.
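The skill-profile idea above, learning classifiers over motion-based and task-related features, can be illustrated with a minimal nearest-centroid stand-in. The feature definitions and labels below are invented for the sketch, not taken from the study.

```python
import math

def extract_features(path_error, speeds):
    """Hypothetical task features: tracking accuracy (RMSE of path error)
    and motion smoothness (total variation of the speed profile)."""
    rmse = math.sqrt(sum(e * e for e in path_error) / len(path_error))
    jerkiness = sum(abs(b - a) for a, b in zip(speeds, speeds[1:]))
    return (rmse, jerkiness)

class SkillClassifier:
    """Nearest-centroid stand-in for the paper's learned classifiers."""

    def __init__(self):
        self.centroids = {}  # label -> feature-space centroid

    def fit(self, samples):
        """samples: list of (feature_tuple, label) pairs."""
        sums, counts = {}, {}
        for f, y in samples:
            s = sums.setdefault(y, [0.0] * len(f))
            for i, v in enumerate(f):
                s[i] += v
            counts[y] = counts.get(y, 0) + 1
        self.centroids = {y: tuple(v / counts[y] for v in s)
                          for y, s in sums.items()}

    def predict(self, f):
        """Label of the closest centroid to feature vector f."""
        return min(self.centroids,
                   key=lambda y: math.dist(f, self.centroids[y]))
```

A predicted "novice" profile would then tell the shared controller to keep its assistive constraints active, while an "expert" profile would relax them.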
15.
A collaborative robot's lead-through capability is a key feature for human–robot collaborative manufacturing: it frees human operators from debugging complex robot control code. Human operators are not allowed to enter hazardous manufacturing environments, yet the lead-through feature is still desired in many circumstances. To address this problem, the authors introduce a remote human–robot collaboration system that follows the concept of cyber–physical systems. The system can flexibly work in four different modes according to the scenario. Using a collaborative robot and an industrial robot, a remote robot control system and a model-driven display system are designed, implemented, and tested in different scenarios. The final analysis indicates great potential for adopting the developed system in hazardous manufacturing environments.
16.
17.
18.
Towards an intelligent system for generating an adapted verbal and nonverbal combined behavior in human–robot interaction   Cited by: 1 (self-citations: 0; citations by others: 1)
In human–robot interaction scenarios, an intelligent robot should be able to synthesize a behavior adapted to the human's profile (i.e., personality). Recent research has discussed the effect of personality traits on human verbal and nonverbal behavior. The dynamic characteristics of the gestures and postures generated during nonverbal communication can differ according to personality traits, which can likewise influence the verbal content of human speech. This research maps human verbal behavior to a corresponding combined verbal and nonverbal robot behavior based on the extraversion–introversion personality dimension. We explore human–robot personality matching and the similarity attraction principle, as well as the differing effects on interaction of the adapted combined robot behavior, expressed through speech and gestures, and the adapted speech-only robot behavior. Experiments with the humanoid NAO robot are reported.
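Adapting a combined speech-and-gesture behavior to the extraversion–introversion dimension, as described above, amounts to mapping a personality score onto behavior parameters. The parameter names and ranges below are purely illustrative assumptions, not values from the study.

```python
def adapt_behavior(extraversion):
    """Map an extraversion score in [0, 1] to combined speech/gesture
    parameters (hypothetical ranges for illustration only)."""
    assert 0.0 <= extraversion <= 1.0
    return {
        # extraverts get faster speech and wider gestures in this sketch
        "speech_rate_wpm": 120 + 60 * extraversion,
        "gesture_amplitude": 0.3 + 0.7 * extraversion,
        "utterance_style": "expansive" if extraversion >= 0.5 else "reserved",
    }
```

Under the similarity attraction principle, the score fed in would be the user's own estimated extraversion, so that the robot's combined behavior matches the interaction partner.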
19.
20.
In this study, we propose a novel end-to-end system called Human–Machine Collaborative Inspection (HMCI) that enables collaboration between inspectors wearing Mixed Reality (MR) headsets and a robotic data collection platform (robot) for structural inspections. We utilize the MR headset's holographic display and precise head tracking to allow inspectors to visualize and localize information (e.g., structural defects) on the real scene, gathered by the robot and processed by an offsite computational server. The primary use case of HMCI is to let the inspector visualize, supervise, and improve the results produced by automated defect detection algorithms in near real-time. The HMCI workflow starts with the robot collecting images and depth data to generate 3D maps of the site. A technique called single-shot localization is developed to create visual anchors for real-time spatial alignment between the robot and the MR headset. The 3D map and images are then sent to the computational server for analysis to detect defects and their locations. The resulting information is received by the MR headset and overlaid on the actual scene to visualize it with spatial context. An experimental study is conducted in a lab environment to demonstrate HMCI using Microsoft HoloLens 2 (HL2) as the MR headset and Turtlebot2 as the robot. We start with the reconstruction of a 3D environment using a 3D depth sensor (Azure Kinect) on Turtlebot2 and visually detect fiducial markers as regions-of-interest (replicating structural damage) along a predefined inspection path. The regions-of-interest are then successfully anchored to the real scene and visualized through HL2. To our knowledge, HMCI is one of the first human–machine collaborative systems integrating robots and inspectors with an MR headset that has been developed, tested, and presented for structural inspection.
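The single-shot localization step above aligns the robot's map frame with the headset frame through a commonly observed fiducial marker. A planar (2-D) sketch of that frame alignment, using invented function names and ignoring the full 3-D pose estimation the real system requires, is:

```python
import math

def se2(x, y, theta):
    """Planar rigid transform as a 3x3 row-major matrix (a 2-D sketch;
    the real system works with full 3-D poses)."""
    c, s = math.cos(theta), math.sin(theta)
    return [[c, -s, x], [s, c, y], [0.0, 0.0, 1.0]]

def mat_mul(A, B):
    """3x3 matrix product."""
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def se2_inv(T):
    """Inverse of a rigid transform: rotate back, then undo the translation."""
    c, s, x, y = T[0][0], T[1][0], T[0][2], T[1][2]
    return [[c, s, -(c * x + s * y)], [-s, c, s * x - c * y], [0.0, 0.0, 1.0]]

def align_frames(T_robot_marker, T_headset_marker):
    """Transform taking robot-map coordinates into the headset frame, from
    both devices' observations of the same fiducial marker."""
    return mat_mul(T_headset_marker, se2_inv(T_robot_marker))

def transform_point(T, p):
    """Apply a planar transform to a 2-D point."""
    x, y = p
    return (T[0][0] * x + T[0][1] * y + T[0][2],
            T[1][0] * x + T[1][1] * y + T[1][2])
```

Once `align_frames` is computed from a single shared marker sighting, every defect location detected in the robot's map can be re-expressed in the headset frame and overlaid on the inspector's view.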