Similar documents
20 similar documents retrieved
1.
In this article, a learning framework that enables robotic arms to replicate new skills from human demonstration is proposed. The learning framework makes use of online human motion data acquired using wearable devices as an interactive interface for providing the anticipated motion to the robot in an efficient and user-friendly way. This approach offers human tutors the ability to control all joints of the robotic manipulator in real-time and to achieve complex manipulation. The robotic manipulator is controlled remotely with our low-cost wearable devices for easy calibration and continuous motion mapping. We believe that this approach can improve human-robot skill learning, adaptability, and the responsiveness of the proposed human-robot interaction for flexible task execution, allowing skills to be transferred and repeated without complex coding.
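The abstract does not include an implementation; as a rough illustration of the continuous motion mapping it describes, the sketch below maps wearable joint-angle readings to robot joint commands after a one-shot calibration. The class name, joint limits, smoothing factor, and calibration scheme are hypothetical and not taken from the paper.

```python
import numpy as np

class WearableToRobotMapper:
    """Maps wearable-sensor joint angles to robot joint commands.

    Hypothetical sketch: a one-shot offset calibration followed by
    per-joint scaling, clipping to joint limits, and low-pass smoothing.
    """

    def __init__(self, joint_limits, scale=1.0, smoothing=0.2):
        self.joint_limits = np.asarray(joint_limits, dtype=float)  # shape (n_joints, 2)
        self.scale = scale
        self.smoothing = smoothing
        self.offsets = None
        self.last_cmd = None

    def calibrate(self, neutral_reading, robot_home):
        # Align the wearable's neutral pose with the robot's home configuration.
        self.offsets = np.asarray(robot_home, float) - self.scale * np.asarray(neutral_reading, float)
        self.last_cmd = np.asarray(robot_home, float)

    def map(self, reading):
        raw = self.scale * np.asarray(reading, float) + self.offsets
        clipped = np.clip(raw, self.joint_limits[:, 0], self.joint_limits[:, 1])
        # Exponential smoothing to avoid jerky commands from noisy sensors.
        self.last_cmd = (1 - self.smoothing) * self.last_cmd + self.smoothing * clipped
        return self.last_cmd

# Example use with made-up numbers for a 3-joint arm.
mapper = WearableToRobotMapper(joint_limits=[(-1.5, 1.5)] * 3)
mapper.calibrate(neutral_reading=[0.1, -0.05, 0.0], robot_home=[0.0, 0.0, 0.0])
print(mapper.map([0.3, 0.2, -0.1]))
```

Exponential smoothing is one simple way to keep noisy wearable readings from producing jerky robot motion; the paper may use a different filtering or retargeting scheme.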

2.
We propose an imitation learning methodology that allows robots to seamlessly retrieve and pass objects to and from human users. Instead of hand-coding interaction parameters, we extract relevant information, such as joint correlations and spatial relationships, from a single task demonstration by two humans. At the center of our approach is an interaction model that enables a robot to generalize an observed demonstration spatially and temporally to new situations. To this end, we propose a data-driven method for generating interaction meshes that link both interaction partners to the manipulated object. The feasibility of the approach is evaluated in a within-subjects user study, which shows that human–human task demonstration can lead to more natural and intuitive interactions with the robot.

3.
Intelligent Service Robotics - This paper introduces an approach to automatic domain modeling for human–robot interaction. The proposed approach is symbolic and intended for semantically...

4.
Li, Tao; Zhang, Sheng. Microsystem Technologies, 2018, 24(12): 4875–4880
Microsystem Technologies - With the rapid development of artificial intelligence (AI), human–robot interaction (HRI) has attracted considerable attention in recent years. The...

5.
6.
In human–robot interaction, the robot controller must reactively adapt to sudden changes in the environment (due to unpredictable human behaviour). This often requires operating in different modes and managing sudden signal changes from heterogeneous sensor data. In this paper, we present a multimodal sensor-based controller that enables a robot to adapt to changes in the sensor signals (here, changes in the human collaborator's behaviour). Our controller is based on a unified task formalism and, in contrast with classical hybrid vision–force–position control, it enables smooth transitions and weighted combinations of the sensor tasks. The approach is validated in a mock-up industrial scenario, where pose, vision (from both a traditional camera and a Kinect), and force tasks must be realized either exclusively or simultaneously during human–robot collaboration.
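As a minimal sketch of the idea of weighted combinations of sensor tasks with smooth transitions (not the authors' unified task formalism), the following blends two task-space velocity commands with a sigmoid weight profile; the task velocities, switch time, and rate are made up for illustration.

```python
import numpy as np

def smooth_weight(t, t_switch, rate=5.0):
    """Sigmoid weight that ramps from 0 to 1 around t_switch (hypothetical transition profile)."""
    return 1.0 / (1.0 + np.exp(-rate * (t - t_switch)))

def blended_command(task_velocities, weights):
    """Weighted combination of per-sensor task velocity commands."""
    v = np.zeros_like(task_velocities[0])
    for vel, w in zip(task_velocities, weights):
        v += w * vel
    return v

# Example: fade from a vision task to a force task over time (made-up velocities).
v_vision = np.array([0.05, 0.0, 0.0])
v_force = np.array([0.0, 0.0, -0.02])
for t in np.linspace(0, 2, 5):
    w = smooth_weight(t, t_switch=1.0)
    print(t, blended_command([v_vision, v_force], [1 - w, w]))
```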

7.

In this study, we developed a virtual reality (VR) human–robot interaction technology acceptance model for learning direct current and alternating current, aiming to use VR technology to immerse students in the generation, existence, and flow of electricity. We hope that using VR to transform abstract physical concepts into tangible objects will help students learn and comprehend abstract electrical concepts. The VR technology acceptance model was developed using the Unity 3D game kit and is accessed with the HTC Vive VR headset. The scene models, characters, and objects were created in Autodesk 3DS Max and Autodesk Maya, and the 2D graphics were processed in Adobe Photoshop. The results were evaluated using four metrics of our technology acceptance model: content, design, interface and media content, and practical requirements. The average scores were 4.73 for content, 4.12 for design, 4.34 for interface and media content, and 3.72 for practical requirements. All the items on the effectiveness questionnaire of the technology acceptance model had average scores in the range 4.25–4.75, so the teachers were strongly satisfied with the trial teaching activity. The average score of each statement on satisfaction with the teaching material contents ranged from 3.58 to 4.03, indicating that the students were somewhat satisfied with the teaching activity. The average score of each statement on satisfaction with the implementation of the technology acceptance model ranged from 3.43 to 4.96, showing that the respondents were generally satisfied with the learning outcomes associated with these materials. For the feedback on satisfaction with the teaching material contents, the average score per question was 3.92, and most questions had an average score greater than 3.8. In summary, a deeply immersive and interactive game was created using tactile somatosensory devices and VR, aiming to utilize and enhance the fun and benefits associated with learning from games.


8.
9.
10.
In this study, we propose a new integrated computer vision system designed to track multiple human beings and extract their silhouette with a pan-tilt stereo camera, so that it can assist in gesture and gait recognition in the field of Human–Robot Interaction (HRI). The proposed system consists of three modules: detection, tracking and silhouette extraction. These modules are robust to camera movements, and they work interactively in near real-time. Detection was performed by camera ego-motion compensation and disparity segmentation. For tracking, we present an efficient mean shift-based tracking method in which the tracking objects are characterized as disparity weighted color histograms. The silhouette was obtained by two-step segmentation. A trimap was estimated in advance and then effectively incorporated into the graph-cut framework for fine segmentation. The proposed system was evaluated with respect to ground truth data, and it was shown to detect and track multiple people very well and also produce high-quality silhouettes.
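A small sketch of one ingredient mentioned above, a disparity-weighted color histogram for mean-shift tracking, is shown below; the Gaussian weighting, bin count, and synthetic data are assumptions for illustration, not details from the paper.

```python
import numpy as np

def disparity_weighted_histogram(hue, disparity, target_disparity, n_bins=16, sigma=4.0):
    """Color (hue) histogram in which each pixel's vote is weighted by how close
    its disparity is to the tracked target's disparity (hypothetical weighting)."""
    weights = np.exp(-0.5 * ((disparity - target_disparity) / sigma) ** 2)
    hist, _ = np.histogram(hue, bins=n_bins, range=(0, 180), weights=weights)
    s = hist.sum()
    return hist / s if s > 0 else hist

# Example with synthetic data standing in for a tracked image patch.
rng = np.random.default_rng(0)
hue = rng.uniform(0, 180, size=1000)
disparity = rng.normal(32, 6, size=1000)
print(disparity_weighted_histogram(hue, disparity, target_disparity=32))
```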

11.
This paper proposes a vision-based human arm gesture recognition method for human–robot interaction, particularly at a long distance where speech information is not available. We define four meaningful arm gestures for long-range interaction. The proposed method is capable of recognizing the defined gestures using only low-resolution 320×240 input images captured from a single camera at a long distance, approximately five meters from the camera. In addition, the system differentiates the target gestures from the users' normal daily actions without any constraints. For human detection at a long distance, the proposed approach combines results from mean-shift color tracking, short- and long-range face detection, and omega shape detection. The system then detects arm blocks using a background subtraction method with a background updating module and recognizes the target gestures based on information about the region, periodic motion, and shape of the arm blocks. In experiments on a large realistic database, a recognition rate of 97.235% is achieved, which is a sufficiently practical level for various pervasive and ubiquitous applications based on human gestures.
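The classifier reportedly uses periodic-motion cues from the arm blocks; a minimal, hypothetical periodicity test on a 1-D motion signal (normalized autocorrelation with an assumed threshold) might look like this:

```python
import numpy as np

def is_periodic(signal, min_lag=3, threshold=0.6):
    """Crude periodicity test on a 1-D motion signal (e.g., arm-block centroid height
    over time) using the normalized autocorrelation peak beyond a minimum lag."""
    x = np.asarray(signal, float)
    x = x - x.mean()
    if np.allclose(x, 0):
        return False
    ac = np.correlate(x, x, mode="full")[len(x) - 1:]  # lags 0..n-1
    ac = ac / ac[0]
    return ac[min_lag:].max() > threshold

# Example: a waving-like signal vs. a single, non-repeating spike.
t = np.arange(60)
print(is_periodic(np.sin(0.5 * t)))   # True: oscillatory, wave-like arm motion
impulse = np.zeros(60)
impulse[5] = 1.0
print(is_periodic(impulse))           # False: one spike, no repetition
```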

12.
The limited understanding of the surrounding environment still restricts the capabilities of robotic systems in real-world applications. Specifically, the acquisition of knowledge about the environment typically relies only on perception, which requires intensive ad hoc training and is not sufficiently reliable in general settings. In this paper, we aim at integrating new acquisition devices, such as tangible user interfaces, speech technologies, and vision-based systems, with established AI methodologies to present a novel and effective knowledge acquisition approach. A natural interaction paradigm is presented in which humans move within the environment with the robot and easily acquire information by selecting relevant spots, objects, or other landmarks. The synergy between novel interaction technologies and semantic knowledge leverages humans' cognitive skills to support robots in acquiring and grounding knowledge about the environment; such a richer representation can be exploited in the realization of autonomous robot skills for task accomplishment.

13.
This paper investigates how social distance can serve as a lens through which we can understand human–robot relationships and develop guidelines for robot design. In two studies, we examine the effects of distance based on physical proximity (proxemic distance), organizational status (power distance), and task structure (task distance) on people's experiences with and perceptions of a humanlike robot. In Study 1, participants (n=32) played a card-matching game with a humanlike robot. We manipulated the power distance (supervisor vs. subordinate) and proxemic distance (close vs. distant) between participants and the robot. Participants who interacted with the supervisor robot reported a more positive user experience when the robot was close than when the robot was distant, while interactions with the subordinate robot resulted in a more positive experience when the robot was distant than when the robot was close. In Study 2, participants (n=32) played the game in two different task distances (cooperation vs. competition) and proxemic distances (close vs. distant). Participants who cooperated with the robot reported a more positive experience when the robot was distant than when it was close. In contrast, competing with the robot resulted in a more positive experience when it was close than when the robot was distant. The findings from the two studies highlight the importance of consistency between the status and proxemic behaviors of the robot and of task interdependency in fostering cooperation between the robot and its users. This work also demonstrates how social distance may guide efforts toward a better understanding of human–robot interaction and the development of effective design guidelines.

14.
Although the concept of industrial cobots dates back to 1999, most present-day hybrid human–machine assembly systems are merely weight compensators. Here, we present results on the development of a collaborative human–robot manufacturing cell for homokinetic joint assembly. The robot alternates between active and passive behaviours during assembly, to lighten the burden on the operator in the former case and to comply with his/her needs in the latter. Our approach can successfully manage direct physical contact between robot and human, and between robot and environment. Furthermore, it can be applied to standard position-controlled (rather than torque-controlled) robots, which are common in industry. The approach is validated in a series of assembly experiments. The human workload is reduced, diminishing the risk of strain injuries. In addition, a complete risk analysis indicates that the proposed setup is compatible with the safety standards and could be certified.
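As an illustrative sketch of alternating active and passive behaviours on a position-controlled robot (not the authors' controller), the reference below is either blended back toward the planned trajectory or displaced along the sensed human force; the threshold, compliance gain, and blending factor are invented for the example.

```python
import numpy as np

def next_reference(q_planned, q_ref_prev, human_force, force_threshold=5.0,
                   compliance=0.002, blend=0.1):
    """Choose between 'active' (follow the planned assembly trajectory) and
    'passive' (let the operator displace the reference) behaviour for a
    position-controlled robot. Thresholds and gains are illustrative only."""
    if np.linalg.norm(human_force) > force_threshold:
        # Passive: shift the position reference along the applied force.
        return q_ref_prev + compliance * np.asarray(human_force, float)
    # Active: blend the reference back toward the planned trajectory.
    return (1 - blend) * np.asarray(q_ref_prev, float) + blend * np.asarray(q_planned, float)

q_ref = np.zeros(3)
print(next_reference([0.1, 0.0, 0.0], q_ref, human_force=[0.0, 0.0, 0.0]))   # active mode
print(next_reference([0.1, 0.0, 0.0], q_ref, human_force=[0.0, 12.0, 0.0]))  # passive mode
```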

15.
This paper presents a control strategy for human–robot interaction with physical contact, recognizing the human intention to control the movement of a non-holonomic mobile robot. The human intention is modeled by a mechanical impedance, sensing the human-desired force intensity and direction to guide the robot through unstructured environments. The robot dynamics are included to improve the interaction performance. Stability of the proposed control system is proved using Lyapunov theory. Real experiments of human–robot interaction show the performance of the proposed controllers.
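A generic admittance (virtual mass-damper) law is one common way to turn a sensed human force into motion commands for a mobile base; the sketch below is such a stand-in, with made-up parameters, and is not the impedance model or stability-proved controller of the paper.

```python
class AdmittanceIntentEstimator:
    """Generic admittance (mass-damper) law turning a sensed human force/torque into
    linear/angular velocity commands for a differential-drive robot.
    Illustrative stand-in only; all virtual parameters are invented."""

    def __init__(self, mass=10.0, damping=25.0, inertia=2.0, ang_damping=8.0, dt=0.01):
        self.m, self.b = mass, damping          # virtual parameters for forward motion
        self.j, self.bw = inertia, ang_damping  # virtual parameters for rotation
        self.dt = dt
        self.v = 0.0  # linear velocity command
        self.w = 0.0  # angular velocity command

    def update(self, force, torque):
        # F = m*dv/dt + b*v  ->  dv = (F - b*v)/m * dt  (and similarly for rotation)
        self.v += (force - self.b * self.v) / self.m * self.dt
        self.w += (torque - self.bw * self.w) / self.j * self.dt
        return self.v, self.w

est = AdmittanceIntentEstimator()
for _ in range(100):                 # push forward and slightly left for one second
    v, w = est.update(force=20.0, torque=2.0)
print(round(v, 3), round(w, 3))
```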

16.
We present a novel method for a robot to interactively learn, while executing, a joint human–robot task. We consider collaborative tasks realized by a team of a human operator and a robot helper that adapts to the human's task execution preferences. Different human operators can have different abilities, experiences, and personal preferences so that a particular allocation of activities in the team is preferred over another. Our main goal is to have the robot learn the task and the preferences of the user to provide a more efficient and acceptable joint task execution. We cast concurrent multi-agent collaboration as a semi-Markov decision process and show how to model the team behavior and learn the expected robot behavior. We further propose an interactive learning framework and we evaluate it both in simulation and on a real robotic setup to show the system can effectively learn and adapt to human expectations.
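To make the semi-Markov decision process framing concrete, here is a minimal, hypothetical tabular Q-learning sketch with duration-aware discounting; the states, actions, rewards, and learning rates are invented and do not come from the paper.

```python
import collections
import numpy as np

class SMDPQLearner:
    """Tabular Q-learning with semi-Markov (duration-aware) discounting,
    sketching how a robot helper could learn which activity to take on in a
    given task state. States, actions, and rewards here are hypothetical."""

    def __init__(self, actions, alpha=0.1, gamma=0.95):
        self.q = collections.defaultdict(float)
        self.actions = actions
        self.alpha, self.gamma = alpha, gamma

    def choose(self, state, eps=0.1):
        if np.random.rand() < eps:
            return np.random.choice(self.actions)
        return max(self.actions, key=lambda a: self.q[(state, a)])

    def update(self, state, action, reward, duration, next_state):
        # SMDP target: discount by gamma**duration because actions take variable time.
        best_next = max(self.q[(next_state, a)] for a in self.actions)
        target = reward + (self.gamma ** duration) * best_next
        self.q[(state, action)] += self.alpha * (target - self.q[(state, action)])

learner = SMDPQLearner(actions=["hold_part", "fetch_tool", "wait"])
learner.update("part_ready", "hold_part", reward=1.0, duration=3, next_state="assembled")
print(learner.choose("part_ready", eps=0.0))
```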

17.
The design and selection of 3D modeled hand gestures for human–computer interaction should follow principles of natural language combined with the need to optimize gesture contrast and recognition. The selection should also consider the discomfort and fatigue associated with distinct hand postures and motions, especially for common commands. Sign language interpreters have extensive and unique experience forming hand gestures and many suffer from hand pain while gesturing. Professional sign language interpreters (N=24) rated discomfort for hand gestures associated with 47 characters and words and 33 hand postures. Clear associations of discomfort with hand postures were identified. In a nominal logistic regression model, high discomfort was associated with gestures requiring a flexed wrist, discordant adjacent fingers, or extended fingers. These and other findings should be considered in the design of hand gestures to optimize the relationship between human cognitive and physical processes and computer gesture recognition systems for human–computer input.
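As an illustration of how a logistic regression of discomfort on posture factors could be set up (with simulated data, not the interpreters' ratings), a sketch using scikit-learn might look like this:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical binary posture features per gesture:
# [flexed_wrist, discordant_fingers, extended_fingers]
rng = np.random.default_rng(0)
X = rng.integers(0, 2, size=(200, 3))

# Simulated labels: discomfort more likely when any of the three factors is present.
logits = -1.5 + 1.2 * X[:, 0] + 1.0 * X[:, 1] + 0.8 * X[:, 2]
y = (rng.random(200) < 1 / (1 + np.exp(-logits))).astype(int)

model = LogisticRegression().fit(X, y)
print(dict(zip(["flexed_wrist", "discordant_fingers", "extended_fingers"],
               model.coef_[0].round(2))))
```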

18.
19.
Advanced Robotics, 2013, 27(6): 651–670
In this paper, we experimentally investigated the open-ended interaction generated by mutual adaptation between humans and a robot. Its essential characteristic, incremental learning, is examined using a dynamical systems approach. Our research concentrated on the navigation system of a specially developed humanoid robot called Robovie and seven human subjects whose eyes were covered, making them dependent on the robot for directions. We used a standard feed-forward neural network (FFNN) without recurrent connections and a recurrent neural network (RNN) for robot control. Although the performance obtained with both the RNN and the FFNN improved in the early stages of learning, as the subjects changed their operation through their own learning, performance gradually became unstable and failed. Next, we used a 'consolidation-learning algorithm' as a model of the hippocampus in the brain. In this method, the RNN was trained on both new data and its own rehearsal outputs, so as not to damage the contents of current memory. The proposed method enabled the robot to keep improving its performance even when learning continued for a long time (open-ended). The dynamical systems analysis of the RNNs supports these differences and also shows that the collaboration scheme developed dynamically through successive phase transitions.
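The consolidation idea, training on new data mixed with the network's own rehearsal outputs, can be sketched with a toy linear model standing in for the RNN; everything below (model, data, learning rate, step count) is illustrative only.

```python
import numpy as np

def train_step(w, X, y, lr=0.01):
    """One gradient step of a linear model y ~= X @ w (stand-in for the RNN)."""
    grad = 2 * X.T @ (X @ w - y) / len(X)
    return w - lr * grad

def consolidation_training(w, X_new, y_new, X_rehearsal, steps=200, lr=0.01):
    """Pseudo-rehearsal in the spirit of the consolidation-learning idea:
    targets for old inputs are generated once from the model's current outputs,
    so training on the new task also keeps reproducing the old behaviour."""
    y_rehearsal = X_rehearsal @ w            # snapshot of the current 'memory'
    X = np.vstack([X_new, X_rehearsal])
    y = np.concatenate([y_new, y_rehearsal])
    for _ in range(steps):
        w = train_step(w, X, y, lr)
    return w

rng = np.random.default_rng(1)
w = rng.normal(size=3)                        # pretend this model already encodes an old skill
X_old = rng.normal(size=(20, 3))              # inputs for which old behaviour should be kept
X_new = rng.normal(size=(5, 3))
y_new = X_new @ np.array([1.0, -2.0, 0.5])    # made-up new task
w_after = consolidation_training(w, X_new, y_new, X_old)
print(np.abs(X_old @ w_after - X_old @ w).max())  # how much the old behaviour drifted
```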

20.
Working with artificial agents is a challenging endeavor, often imposing high levels of workload on human operators who work within these socio-technical systems. We seek to understand these workload demands through examining the literature in major content areas of human–robot interaction. As research on HRI continues to explore a host of issues with operator workload, there is a need to synthesize the extant literature to determine its current state and to guide future research. Within HRI socio-technical systems, we reviewed the empirical literature on operator information processing and action execution. Using multiple resource theory (MRT; Wickens, 2002) as a guiding framework, we organized this review by the operator perceptual and responding demands which are routinely manipulated in HRI studies. We also reviewed the utility of different interventions for reducing the strain on the perceptual system (e.g., multimodal displays) and responses (e.g., automation). Our synthesis of the literature demonstrates that much is known about how to decrease operator workload, but there are specific gaps in knowledge due to study operations and methodology. This work furthers our understanding of workload in complex environments such as those found when working with robots. Principles and propositions are provided for those interested in decreasing operator workload in applied settings and also for future research.

