Similar Documents
20 similar documents found (search time: 15 ms)
1.
Technological progress increasingly envisions the use of robots interacting with people in everyday life. Human–robot collaboration (HRC) explores the interaction between a human and a robot, at both the cognitive and physical level, during the completion of a common objective. In HRC work, a cognitive model is typically built that collects inputs from the environment and from the user, and elaborates and translates these into information usable by the robot itself. Machine learning is a recent approach to building this cognitive model and behavioural block, with high potential in HRC. Consequently, this paper proposes a thorough literature review of the use of machine learning techniques in the context of human–robot collaboration. 45 key papers were selected and analysed, and a clustering of works based on the type of collaborative task, evaluation metrics, and cognitive variables modelled is proposed. A deep analysis of different families of machine learning algorithms and their properties, along with the sensing modalities used, is then carried out. Among the observations, the importance of machine learning algorithms that incorporate time dependencies is outlined. The salient features of these works are then cross-analysed to show trends in HRC and to give guidelines for future work, comparing them with aspects of HRC that did not appear in the review.

2.
The control of a robot system using camera information is a challenging task under unpredictable conditions such as feature-point mismatch and changing scene illumination. This paper presents a solution for the visual control of a nonholonomic mobile robot in demanding real-world circumstances based on machine learning techniques. A novel intelligent approach for mobile robots using neural networks (NNs), a learning from demonstration (LfD) framework, and the epipolar geometry between two views is proposed and evaluated in a series of experiments. A direct mapping from the image space to the actuator command is conducted in two phases. In an offline phase, the NN–LfD approach is employed to relate the feature position in the image plane to the angular velocity for lateral motion correction. An online phase refers to a switching vision-based scheme between the epipole-based linear velocity controller and the NN–LfD-based angular velocity controller, whose selection depends on the feature distance from the pre-defined interest area in the image. In total, 18 architectures and 6 learning algorithms are tested in order to find the optimal solution for robot control. The best training outcomes for each learning algorithm are then employed in real time to discover the optimal NN configuration for robot orientation correction. Experiments conducted on a nonholonomic mobile robot in a structured indoor environment confirm excellent performance with respect to system robustness and positioning accuracy at the desired location.
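The online switching scheme described above can be sketched as follows; a minimal illustration in which the function names, the 1-D feature coordinate, and the toy controllers are assumptions, not the paper's actual implementation:

```python
def select_control(feature_pos, interest_center, interest_radius,
                   epipole_linear_ctrl, nn_angular_ctrl):
    """Return (linear, angular) velocity commands. While the tracked
    feature lies outside the pre-defined interest area, drive forward
    with the epipole-based controller; once it enters, hand orientation
    correction to the learned NN controller."""
    if abs(feature_pos - interest_center) > interest_radius:
        return epipole_linear_ctrl(feature_pos), 0.0   # translate toward goal
    return 0.0, nn_angular_ctrl(feature_pos)           # correct orientation
```

In a real deployment both controllers would consume image features and epipole estimates rather than a single scalar, but the hand-off logic stays this simple.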

3.
The control of soft continuum robots is challenging owing to their mechanical elasticity and complex dynamics. An additional challenge emerges when we want to apply Learning from Demonstration (LfD) and need to collect the necessary demonstrations despite the inherent control difficulty. In this paper, we provide a multi-level architecture, from low-level control to high-level motion planning, for the Bionic Handling Assistant (BHA) robot. We deploy learning across all levels to enable the application of LfD to a real-world manipulation task. To record the demonstrations, an actively compliant controller is used. A variant of dynamical systems able to encode both position and orientation then maps the recorded 6D end-effector pose data into a virtual attractor space. A recent LfD method encodes the pose attractors within the same model for point-to-point motion planning. In the proposed architecture, hybrid models that combine an analytical approach with machine learning techniques are used to overcome the inherently slow dynamics and model imprecision of the BHA. The performance and generalization capability of the proposed multi-level approach are evaluated in simulation and on the real BHA robot in an apple-picking scenario that requires high accuracy in controlling the pose of the robot's end-effector.
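The idea of mapping recorded poses into a virtual attractor space can be illustrated with a first-order linear attractor; this is a generic sketch (positions only, invented gain), not the BHA architecture itself, which also encodes orientation:

```python
import numpy as np

def attractor_step(x, goal, alpha=2.0, dt=0.01):
    """One Euler step of the first-order linear attractor
    x' = alpha * (goal - x): the state is continually pulled
    toward the attractor point, here standing in for a pose target."""
    return x + dt * alpha * (goal - x)

# Converge a 3-D position toward a goal attractor.
x, g = np.zeros(3), np.array([1.0, -0.5, 0.2])
for _ in range(1000):
    x = attractor_step(x, g)
```

After enough steps the state reaches the attractor regardless of the start point, which is what makes attractor spaces convenient targets for encoding demonstrated motions.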

4.
A manufacturing system able to perform a high variety of tasks requires different types of resources. Fully automated systems using robots offer high speed, accuracy, tirelessness, and force, but they are expensive. On the other hand, human workers are intelligent, creative, flexible, and able to work with different tools in different situations. A combination of these resources forms a human-machine/robot (hybrid) system, where humans and robots perform a variety of tasks (manual, automated, and hybrid) in a shared workspace. In contrast to existing surveys, this study is dedicated to operations management problems (focusing on applications and features) for human and machine/robot collaborative systems in manufacturing. The research covers two types of interaction between human and automated components in manufacturing and assembly systems: dual resource constrained (DRC) and human-robot collaboration (HRC) optimization problems. Moreover, different characteristics of the workforce and machines/robots, such as heterogeneity, homogeneity, ergonomics, and flexibility, are introduced. Finally, this paper identifies the optimization challenges and problems for hybrid systems. The existing literature on HRC focuses mainly on the robotic point of view rather than on operations management and optimization aspects. Future research directions therefore include the design of models and methods to optimize HRC systems in terms of ergonomics, safety, and throughput. In addition, studying flexibility and reconfigurability in hybrid systems is one of the main avenues for future research.

5.
As one of the critical elements of smart manufacturing, human-robot collaboration (HRC), which refers to goal-oriented joint activities of humans and collaborative robots in a shared workspace, has gained increasing attention in recent years. HRC is envisioned to break the traditional barrier that separates human workers from robots and to greatly improve operational flexibility and productivity. To realize HRC, a robot needs to recognize and predict human actions in order to provide assistance in a safe and collaborative manner. This paper presents a hybrid approach to context-aware human action recognition and prediction based on the integration of a convolutional neural network (CNN) and variable-length Markov modeling (VMM). Specifically, a bi-stream CNN structure parses human and object information embedded in video images as the spatial context for action recognition and collaboration context identification. The dependencies embedded in the action sequences are subsequently analyzed by a VMM, which adaptively determines the optimal number of current and past actions that need to be considered in order to maximize the probability of accurate future action prediction. The effectiveness of the developed method is evaluated experimentally on a testbed that simulates an assembly environment. High accuracy in both action recognition and prediction is demonstrated.
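The variable-length Markov modeling step can be sketched as follows: count next-action frequencies for contexts of increasing length, then predict by backing off from the longest matching context. This is a toy implementation with invented action labels, not the authors' code:

```python
from collections import defaultdict

def train_vmm(sequences, max_order=3):
    """For every context of up to max_order past actions, count which
    action followed it across the training sequences."""
    counts = defaultdict(lambda: defaultdict(int))
    for seq in sequences:
        for i, action in enumerate(seq):
            for k in range(1, max_order + 1):
                if i - k < 0:
                    break
                counts[tuple(seq[i - k:i])][action] += 1
    return counts

def predict(counts, history, max_order=3):
    """Predict the next action using the longest context seen in
    training, backing off to shorter contexts when necessary."""
    for k in range(min(max_order, len(history)), 0, -1):
        ctx = tuple(history[-k:])
        if ctx in counts:
            return max(counts[ctx], key=counts[ctx].get)
    return None
```

The back-off loop is what makes the context length variable: the model uses as much history as the data supports, mirroring the adaptive order selection described above.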

6.
Journal of Intelligent Manufacturing - Robot learning from demonstration (LfD) emerges as a promising solution to transfer human motion to the robot. However, because of the open-loop between the...

7.
The approach of Learning from Demonstration (LfD) can support human operators, especially those without much programming experience, in controlling a collaborative robot (cobot) in an intuitive and convenient manner. Gaussian Mixture Models and Gaussian Mixture Regression (GMM and GMR) are useful tools for implementing such an LfD approach. However, a well-performing GMM/GMR requires a series of demonstrations free of trembling and jerky features, which are challenging to obtain in real environments. To address this issue, this paper presents a novel optimised approach to improve the Gaussian clusters, and thus GMM/GMR, so that LfD-enabled cobots can carry out a variety of complex manufacturing tasks effectively. This research has three distinguishing innovative characteristics: 1) a Gaussian noise strategy is designed to scatter demonstrations with trembling and jerky features to better support the optimisation of GMM/GMR; 2) a Simulated Annealing-Reinforcement Learning (SA-RL) based optimisation algorithm is developed to refine the number of Gaussian clusters, eliminating potential under-/over-fitting issues in GMM/GMR; 3) a B-spline based cut-in algorithm is integrated with GMR to improve the adaptability of reproduced solutions for dynamic manufacturing tasks. To verify the approach, case studies of pick-and-place tasks with different complexities were conducted. Experimental results and comparative analyses showed that the developed approach performs well in terms of computational efficiency, solution quality, and adaptability.
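The GMR step at the core of such an LfD pipeline can be sketched as follows; a minimal 1-D-input/1-D-output version with hand-set mixture parameters (a real pipeline would fit the GMM to the demonstrations via EM rather than setting the parameters by hand):

```python
import numpy as np

def gmr(t, means, covs, weights):
    """Gaussian Mixture Regression over joint (input t, output x) space:
    condition each 2-D Gaussian component on t, then blend the
    conditional means by the components' responsibilities at t."""
    resp, cond = [], []
    for mu, S, w in zip(means, covs, weights):
        var_t = S[0, 0]
        # responsibility: weight * N(t | mu_t, var_t)
        resp.append(w * np.exp(-0.5 * (t - mu[0]) ** 2 / var_t)
                    / np.sqrt(2 * np.pi * var_t))
        # conditional mean of x given t for this component
        cond.append(mu[1] + S[1, 0] / var_t * (t - mu[0]))
    resp = np.array(resp)
    resp /= resp.sum()
    return float(resp @ np.array(cond))
```

Queried along a time axis, this produces the smooth reproduced trajectory that the cut-in and optimisation steps above then refine.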

8.
Though construction robots have drawn attention in research and practice for decades, human-robot collaboration (HRC) remains important for conducting complex construction tasks. Given its complexity and uniqueness, it is still unclear how the HRC process impacts construction productivity, a question difficult to address with conventional methods such as field tests, mathematical modeling, and physical simulation. To this end, an agent-based (AB) multi-fidelity modeling approach is introduced to simulate and evaluate how HRC influences construction productivity. A high-fidelity model is first proposed for a scenario with one robot. Then, a low-fidelity model is established to extract key parameters that capture the inner relationships among scenarios. The multi-fidelity models work together to simulate complex scenarios. Based on the simulation model, the twofold influence of HRC on productivity, namely the supplement strategy on the worker side and the design for proactive interaction on the robot side, is fully investigated. Experimental results show that: 1) the proposed approach is feasible and flexible for simulating complex HRC processes and can cover multiple collaboration and interaction modes; 2) the influence of the supplement strategy is simple when there is only one robot, where a lower Check Interval (CI) and a higher Supplement Limit (SL) improve productivity, but it becomes much more complicated with more robots due to internal competition among robots for the workers' limited time; 3) HRC has a scale effect on productivity per robot, meaning productivity improves when there are more robots and workers, even if the human-robot ratio remains the same; 4) introducing proactive interaction between robots and workers can improve productivity significantly, by up to 22% in our experiments, depending further on the supplement strategy and the human-robot ratio.
Overall, this research contributes an integrated approach to simulating and evaluating HRC's impacts on productivity, as well as valuable insights into how to optimize HRC for better performance and occupational health. The proposed approach is also useful for the evaluation and development of new robots.
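The CI/SL trade-off in the supplement strategy can be illustrated with a deliberately simplified single-robot simulation; the dynamics below are invented for illustration and only borrow the paper's parameter names:

```python
def simulate(check_interval, supply_limit, steps=1000):
    """Toy single-robot supplement strategy: the robot consumes one part
    per time step and idles when its buffer is empty; the worker checks
    every `check_interval` steps and refills the buffer to
    `supply_limit`. Returns the number of parts produced."""
    buffer, produced = supply_limit, 0
    for t in range(1, steps + 1):
        if t % check_interval == 0:
            buffer = supply_limit      # worker refills on a check
        if buffer > 0:
            buffer -= 1                # robot completes one cycle
            produced += 1
    return produced
```

Even this toy model reproduces the single-robot finding quoted above: a lower CI and a higher SL keep the robot from starving, so productivity rises.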

9.
Because the number and complexity of machines are increasing in Industry 4.0, the maintenance process is becoming more time-consuming and labor-intensive, involving many refined maintenance operations. Fortunately, human-robot collaboration (HRC) can integrate human intelligence into the collaborative robot (cobot), combining the nimble, knowledge-based maintenance operations of personnel with the reliable, repeatable maintenance manipulation of cobots. However, existing HRC maintenance lacks precise understanding of maintenance intention, efficient HRC decision-making for executing robotized maintenance tasks (e.g., repetitive manual tasks), and a convenient interaction interface for executing cognitive tasks (e.g., maintenance preparation and guidance jobs). Hence, a mixed perception-based human-robot collaborative maintenance approach consisting of a three-level structure is proposed in this paper to help reduce the severity of these problems. In the first stage, a mixed perception module is proposed to help the cobot recognize human safety status and maintenance requests according to human actions and gestures, respectively. In the second stage, an improved online deep reinforcement learning (DRL)-enabled decision-making module with an asynchronous structure and anti-disturbance capability is proposed, which realizes the execution of robotized maintenance tasks. In the third stage, an augmented reality (AR)-assisted, user-friendly interaction interface is designed to help personnel interact with the cobot and execute auxiliary maintenance tasks without spatial and human-factor limitations. Auxiliary maintenance operations are also supported by AR-assisted visual guidance.
Finally, comparative numerical experiments are conducted in a typical machining workshop, and the results show competitive performance of the proposed HRC maintenance approach compared with other state-of-the-art methods.

10.
11.
Ambient systems are populated by many heterogeneous devices that provide adequate services to their users. Adapting an ambient system to the specific needs of its users is a challenging task. Because human–system interaction has to be as natural as possible, we propose an approach based on Learning from Demonstration (LfD). LfD is an interesting approach for generalizing what is observed during a demonstration to similar situations. However, using LfD in ambient systems requires an adaptive learning technique. We present ALEX, a multi-agent system able to dynamically learn and reuse contexts from demonstrations performed by a tutor. The results of experiments performed on both a real and a virtual robot show interesting properties of our technology for ambient applications.

12.
13.
ABSTRACT

Currently, a large number of industrial robots have been deployed to replace or assist humans in various repetitive and dangerous manufacturing tasks. The robotics field is evolving rapidly, however, so that humans are not only sharing the same workspace with robots but also using robots as capable assistants. With this new type of emerging robotic system, the industrial collaborative robot or cobot, human and robot co-workers can work side by side as collaborators to accomplish tasks in industrial environments. New human–robot interaction systems have therefore been developed to utilize the capabilities of both humans and robots. Accordingly, this article presents a literature review of major recent work on human–robot interaction with industrial collaborative robots, conducted during the last decade (between 2008 and 2017). Additionally, the article proposes a tentative classification of the content of these works into several categories and sub-categories. Finally, the paper addresses some challenges of industrial collaborative robotics and explores future research issues.

14.
Human-robot collaborative (HRC) assembly combines the advantages of the robot's operational consistency with the human's cognitive ability and adaptivity, providing an efficient and flexible way to handle complex assembly tasks. During HRC assembly, the robot needs to understand the operator's intention accurately in order to assist with the collaborative assembly tasks. At present, operator intention recognition that accounts for context information, such as assembly objects in a complex environment, remains challenging. In this paper, we propose a human-object integrated approach for context-aware assembly intention recognition in HRC, which integrates the recognition of assembly actions and assembly parts to improve the accuracy of operator intention recognition. Specifically, considering the real-time requirements of HRC assembly, a spatial-temporal graph convolutional network (ST-GCN) model based on skeleton features is used to recognize assembly actions while reducing unnecessary redundant information. Considering the disorder and occlusion of assembly parts, an improved YOLOX model is proposed to improve the network's ability to focus on assembly parts that are difficult to recognize. Afterwards, taking decelerator assembly tasks as an example, a rule-based reasoning method that combines the recognition of assembly actions and assembly parts is designed to recognize the current assembly intention. Finally, the feasibility and effectiveness of the proposed approach for recognizing human intentions are verified. The integration of assembly action recognition and assembly part recognition facilitates accurate operator intention recognition in complex and flexible HRC assembly environments.
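The final rule-based fusion of action and part recognition can be sketched as a lookup over (action, part) pairs; the rules and intention labels below are hypothetical examples, not the paper's rule base:

```python
def infer_intention(action, part, rules):
    """Fuse the recognized assembly action (e.g. from ST-GCN) with the
    recognized assembly part (e.g. from the detector) into an intention
    label via a rule table; unseen combinations map to 'unknown'."""
    return rules.get((action, part), "unknown")

# Hypothetical rule table for a gearbox-style assembly task.
RULES = {
    ("grasp", "gear"):  "install_gear",
    ("grasp", "shaft"): "insert_shaft",
    ("screw", "cover"): "fasten_cover",
}
```

Keeping the fusion step declarative like this makes the reasoning auditable: each recognized (action, part) pair maps to exactly one assembly intention.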

15.
Human–robot collaboration (HRC) is characterized by a spatiotemporal overlap between the workspaces of the human and the robot and has become a viable option in manufacturing and other industries. However, for companies considering employing HRC it remains unclear how best to configure such a setup, because empirical evidence on human factors requirements remains inconclusive. As robots execute movements at high levels of automation, they adapt their speed and movement path to situational demands. This study therefore experimentally investigated the effects of movement speed and path predictability of an industrial collaborating robot on the human operator. Participants completed tasks together with a robot in an industrial workplace simulated in virtual reality. A lower level of predictability was associated with a loss in task performance, while faster movements resulted in higher‐rated values for task load and anxiety, indicating demands on the operator exceeding the optimum. Implications for productivity and safety and possible advancements in HRC workplaces are discussed.

16.
This paper presents a technical approach to robot learning of motor skills which combines active intrinsically motivated learning with imitation learning. Our algorithmic architecture, called SGIM-D, allows efficient learning of high-dimensional continuous sensorimotor inverse models in robots, and in particular learns distributions of parameterised motor policies that solve a corresponding distribution of parameterised goals/tasks. This is made possible by the technical integration of imitation learning techniques within an algorithm for learning inverse models that relies on active goal babbling. After reviewing social learning and intrinsic motivation approaches to action learning, we describe the general framework of our algorithm, before detailing its architecture. In an experiment where a robot arm has to learn to use a flexible fishing line, we illustrate that SGIM-D efficiently combines the advantages of social learning and intrinsic motivation and benefits from human demonstration properties to learn how to produce varied outcomes in the environment, while developing more precise control policies in large spaces.

17.
Feature selection is an active research area in machine learning. The main idea of feature selection is to choose a subset of the available features by eliminating features with little or no predictive information, as well as redundant features that are strongly correlated. There are many approaches to feature selection, but most of them can only work with crisp data, and until now there have been few approaches that can directly work with both crisp and low-quality (imprecise and uncertain) data. We therefore propose a new feature-selection method that can handle both crisp and low-quality data. The proposed approach is based on a Fuzzy Random Forest and integrates filter and wrapper methods into a sequential search procedure that improves the classification accuracy of the selected features. The approach consists of the following main steps: (1) scaling and discretization of the feature set, and feature pre-selection using the discretization process (filter); (2) ranking of the pre-selected features using the fuzzy decision trees of a Fuzzy Random Forest ensemble; and (3) wrapper feature selection using a Fuzzy Random Forest ensemble based on cross-validation. The efficiency and effectiveness of this approach are demonstrated through several experiments using both high-dimensional and low-quality datasets. The approach performs well, not only in classification accuracy but also with respect to the number of features selected, and behaves well both with high-dimensional datasets (microarray datasets) and with low-quality datasets.
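The filter-then-wrapper pattern can be sketched as below, with a crisp-data nearest-centroid classifier standing in for the fuzzy random forest (which additionally handles low-quality data); all names and the toy ranking criterion are assumptions for illustration:

```python
import numpy as np

def filter_rank(X, y):
    """Filter step: rank features by absolute correlation with the label."""
    scores = [abs(np.corrcoef(X[:, j], y)[0, 1]) for j in range(X.shape[1])]
    return np.argsort(scores)[::-1]

def cv_accuracy(X, y):
    """Leave-one-out accuracy of a nearest-centroid classifier."""
    correct = 0
    for i in range(len(y)):
        mask = np.arange(len(y)) != i
        Xtr, ytr = X[mask], y[mask]
        cents = {c: Xtr[ytr == c].mean(axis=0) for c in np.unique(ytr)}
        pred = min(cents, key=lambda c: np.linalg.norm(X[i] - cents[c]))
        correct += pred == y[i]
    return correct / len(y)

def wrapper_select(X, y, max_feats=2):
    """Wrapper step: greedily add features in filter order, keeping a
    candidate only if it improves cross-validated accuracy."""
    order, chosen, best = filter_rank(X, y), [], 0.0
    for j in order[:max_feats * 2]:
        acc = cv_accuracy(X[:, chosen + [j]], y)
        if acc > best:
            chosen, best = chosen + [j], acc
        if len(chosen) >= max_feats:
            break
    return chosen, best
```

The filter prunes the search space cheaply; the wrapper then pays the cost of cross-validation only on the surviving candidates, which is the same division of labour the sequential procedure above exploits.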

18.
In this paper we propose a novel approach for intuitive and natural physical human–robot interaction in cooperative tasks. Through initial learning by demonstration, robot behavior naturally evolves into a cooperative task, where the human co-worker is allowed to modify both the spatial course of motion and the speed of execution at any stage. The main feature of the proposed adaptation scheme is that the robot adjusts its stiffness in the path operational space, defined with a Frenet–Serret frame. Furthermore, the required dynamic capabilities of the robot are obtained by decoupling the robot dynamics in the operational space attached to the desired trajectory. Speed-scaled dynamic motion primitives are applied for the underlying task representation. This combination allows the human co-worker in a cooperative task to be less precise in parts of the task that require high precision, as the precision aspect is learned and provided by the robot. The user can also freely change the speed and/or the trajectory by simply applying force to the robot. The proposed scheme was experimentally validated on three illustrative tasks. The first task demonstrates a novel two-stage learning by demonstration, where the spatial part of the trajectory is demonstrated independently from the velocity part. The second task shows how parts of the trajectory can be rapidly and significantly changed within one execution. The final experiment shows two Kuka LWR-4 robots in a bi-manual setting cooperating with a human while carrying an object.
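The speed-scaling property of dynamic motion primitives can be illustrated with a minimal DMP that omits the learned forcing term, leaving only the critically damped spring toward the goal; the gains and integration settings are illustrative defaults, not the paper's values:

```python
def dmp_rollout(g, x0, tau=1.0, alpha=25.0, beta=6.25, dt=0.001, T=1.0):
    """Integrate tau*v' = alpha*(beta*(g - x) - v), tau*x' = v with
    Euler steps. The time constant tau scales execution speed: tau > 1
    slows the motion, tau < 1 speeds it up, while the spatial path
    (here, the converged endpoint) is unchanged."""
    x, v = x0, 0.0
    for _ in range(int(T * tau / dt)):
        v += dt * alpha * (beta * (g - x) - v) / tau
        x += dt * v / tau
    return x
```

A full DMP adds a learned forcing term driven by a canonical system so that arbitrary demonstrated shapes, not just straight-to-goal motions, can be reproduced at any speed.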

19.
This paper proposes a novel approach for physical human-robot interaction (pHRI) in which a robot provides guidance forces to a user based on the user's performance. The framework tunes the forces according to the behavior of each user in coping with different tasks, where lower performance results in higher intervention from the robot. This personalized physical human-robot interaction (p2HRI) method incorporates adaptive modeling of the interaction between the human and the robot, as well as learning from demonstration (LfD) techniques, to adapt to the user's performance. The approach is based on model predictive control, where the system optimizes the rendered forces by predicting the performance of the user. Moreover, continuous learning of user behavior is added so that the models and personalized considerations are updated as user performance changes over time. Applying this framework to a field such as haptic guidance for skill improvement allows a more personalized learning experience, where the interaction between the robot as an intelligent tutor and the student as the user is better adjusted to the skill level of the individual and their gradual improvement. The results suggest that this method improves the precision of the interaction model and that adding the considered personalized factors yields a more adaptive strategy for rendering guidance forces.
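The lower-performance/higher-intervention principle can be sketched with a simple gain-update rule; this replaces the paper's model-predictive formulation with a plain proportional update, and all parameter names are assumptions:

```python
def update_guidance_gain(gain, performance, target=0.8, rate=0.5,
                         g_min=0.0, g_max=1.0):
    """Raise the guidance-force gain when measured user performance
    falls below the target, relax it as performance improves, and
    clamp the gain to its admissible range."""
    gain += rate * (target - performance)
    return min(max(gain, g_min), g_max)
```

In the actual framework the force is optimized over a prediction horizon rather than updated proportionally, but the sign of the adaptation is the same: worse performance yields stronger robot intervention.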

20.
Manufacturing companies are in constant need of improved agility. An adequate combination of speed, responsiveness, and business agility to cope with fluctuating raw-material costs is essential for today's increasingly demanding markets. Agility in robots is key in operations requiring on-demand control of a robot's tool position and orientation, reducing or eliminating extra programming effort. Vision-based perception using full-state or partial-state observations, together with learning techniques, is useful for creating truly adaptive industrial robots. We propose a Deep Reinforcement Learning (DRL) approach to solve path-following tasks, using a simplified virtual environment with domain randomisation to provide the agent with enough exploration and observation variability during training to generate policies that transfer to an industrial robot. We validated our approach using a KUKA KR16HW robot equipped with a Fronius GMAW welding machine. The path was manually drawn on two workpieces so that the robot could perceive, learn, and follow it during welding experiments. We also found that the small processing time for motion prediction (3.5 ms) did not slow down the process, resulting in smooth robot operation. The novel approach can be implemented on different industrial robots to carry out various tasks requiring material deposition.
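The domain-randomisation idea can be sketched as per-episode sampling of environment parameters before training the policy; the parameter names and ranges below are invented for illustration, not those of the paper's simulator:

```python
import random

def randomize_env(rng):
    """Sample visual and geometric parameters for one training episode,
    so the policy never overfits to a single simulated appearance and
    transfers better to the real workpiece."""
    return {
        "path_curvature": rng.uniform(-0.2, 0.2),   # shape of the drawn path
        "lighting":       rng.uniform(0.5, 1.5),    # scene brightness factor
        "texture_id":     rng.randrange(10),        # surface texture variant
        "camera_noise":   rng.gauss(0.0, 0.01),     # observation noise level
    }
```

During training, a fresh parameter set is drawn at every episode reset; the resulting observation variability is what lets the policy generalise beyond the simplified virtual environment.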


Copyright © 北京勤云科技发展有限公司 (Beijing Qinyun Technology Development Co., Ltd.)  京ICP备09084417号