Similar Articles
Found 20 similar articles.
1.
As one of the critical elements for smart manufacturing, human-robot collaboration (HRC), which refers to goal-oriented joint activities of humans and collaborative robots in a shared workspace, has gained increasing attention in recent years. HRC is envisioned to break the traditional barrier that separates human workers from robots and greatly improve operational flexibility and productivity. To realize HRC, a robot needs to recognize and predict human actions in order to provide assistance in a safe and collaborative manner. This paper presents a hybrid approach to context-aware human action recognition and prediction, based on the integration of a convolutional neural network (CNN) and variable-length Markov modeling (VMM). Specifically, a bi-stream CNN structure parses human and object information embedded in video images as the spatial context for action recognition and collaboration context identification. The dependencies embedded in the action sequences are subsequently analyzed by a VMM, which adaptively determines the optimal number of current and past actions that need to be considered in order to maximize the probability of accurate future action prediction. The effectiveness of the developed method is evaluated experimentally on a testbed which simulates an assembly environment. High accuracy in both action recognition and prediction is demonstrated.
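The variable-length Markov idea described above can be sketched with plain frequency counts: when predicting the next action, back off from the longest previously observed context of recent actions to shorter ones. A minimal sketch, with action labels and sequences that are purely illustrative (not from the paper):

```python
from collections import defaultdict

def train_vmm(sequences, max_order=3):
    """Count next-action frequencies for every context of length 0..max_order."""
    counts = defaultdict(lambda: defaultdict(int))
    for seq in sequences:
        for i, nxt in enumerate(seq):
            for k in range(max_order + 1):
                if i - k < 0:
                    break
                ctx = tuple(seq[i - k:i])   # the k actions preceding position i
                counts[ctx][nxt] += 1
    return counts

def predict_next(counts, history, max_order=3):
    """Back off from the longest matching context to shorter ones."""
    for k in range(min(max_order, len(history)), -1, -1):
        ctx = tuple(history[len(history) - k:])
        if ctx in counts:
            return max(counts[ctx], key=counts[ctx].get)
    return None
```

The back-off loop is what makes the order "variable": a long context is used only when it has actually been observed, otherwise shorter ones carry the prediction.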

2.
The present study employs deep learning methods to recognize repetitive assembly actions and estimate their operating times. It is intended to monitor the assembly process and prevent quality problems caused by missing key operational steps and irregular worker operations. Based on the repeatability and tool dependence of assembly actions, assembly-action recognition is treated as tool detection. The YOLOv3 algorithm is first applied to locate and classify the assembly tools and thereby recognize the worker's assembly action; the accuracy of this action recognition is 92.8%. Then the deep-learning-based pose estimation algorithm CPM is used to recognize human joints. Finally, the joint coordinates are extracted to determine the operating times of repetitive assembly actions, with an accuracy of 82.1%.
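Counting repetitions from joint coordinates, as in the last step above, can be reduced to counting rising threshold crossings of a 1-D joint trajectory (e.g. a wrist's vertical coordinate). A minimal sketch; the signal and threshold below are illustrative, not taken from the paper:

```python
def count_repetitions(signal, threshold):
    """Count rising crossings of a joint-coordinate signal above a threshold.

    Each time the signal goes from below to above the threshold counts
    as one repetition of the assembly action.
    """
    count = 0
    above = signal[0] > threshold
    for v in signal[1:]:
        if v > threshold and not above:
            count += 1
        above = v > threshold
    return count
```

In practice the raw coordinate stream would be smoothed first so that sensor jitter around the threshold does not inflate the count.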

3.
Human–Robot Collaboration is a critical component of Industry 4.0, contributing to a transition towards more flexible production systems that can be quickly adjusted to changing production requirements. This paper aims to increase the natural collaboration level of a robotic engine assembly station by proposing a cognitive system powered by computer vision and deep learning to interpret implicit communication cues of the operator. The proposed system, based on a 34-layer residual convolutional neural network combined with a long short-term memory recurrent neural network (ResNet-34 + LSTM), obtains assembly context through action recognition of the tasks performed by the operator. The assembly context is then integrated into a collaborative assembly plan capable of autonomously commanding the robot tasks. The proposed model performed well, achieving an accuracy of 96.65% and a temporal mean intersection over union (mIoU) of 94.11% for action recognition of the considered assembly. Moreover, a task-oriented evaluation showed that the proposed cognitive system was able to leverage the recognized human actions to command the adequate robot actions with near-perfect accuracy. As such, the proposed system succeeds in increasing the natural collaboration level of the considered assembly station.
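The temporal mIoU metric reported above compares predicted and ground-truth action segments on the time axis. A minimal sketch of the metric (segment values are illustrative; the paper's exact evaluation protocol is not reproduced here):

```python
def temporal_iou(seg_a, seg_b):
    """IoU between two (start, end) temporal segments."""
    inter = max(0.0, min(seg_a[1], seg_b[1]) - max(seg_a[0], seg_b[0]))
    union = (seg_a[1] - seg_a[0]) + (seg_b[1] - seg_b[0]) - inter
    return inter / union if union > 0 else 0.0

def mean_temporal_iou(pred_segs, gt_segs):
    """Average IoU over matched prediction/ground-truth segment pairs."""
    return sum(temporal_iou(p, g) for p, g in zip(pred_segs, gt_segs)) / len(gt_segs)
```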

4.
Human-robot collaborative (HRC) assembly has become popular in recent years. It takes full advantage of the strength, repeatability and accuracy of robots and the high-level cognition, flexibility and adaptability of humans to achieve an ergonomic working environment with better overall productivity. However, HRC assembly is still in its infancy, and ensuring its safety and efficiency while reducing assembly failures caused by human error remains challenging. To address these challenges, this paper proposes a novel human-cyber-physical assembly system (HCPaS) framework, which combines the powerful perception and control capacity of digital twin with the virtual-reality interaction capacity of augmented reality (AR) to achieve a safe and efficient HRC environment. Based on the framework, a deep learning-enabled fusion method for HCPaS is proposed from the perspectives of robot-level and part-level fusion. Robot-level fusion perceives the pose of robots by combining PointNet with the iterative closest point (ICP) algorithm, so that the status of robots and their surroundings can be registered into the AR environment to improve the human's understanding of the complex assembly environment, ensuring safe HRC assembly. Part-level fusion recognizes the type and pose of the parts being assembled with a parallel network built on an extended Pixel-wise Voting Network (PVNet), through which assembly sequence/process information for each part can be registered into the AR environment to provide smart guidance for manual work and avoid human error. Experimental results demonstrate the effectiveness and efficiency of the approach.
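The ICP step used in robot-level fusion can be sketched as alternating nearest-neighbour matching with a closed-form SVD (Kabsch) alignment; the learned initial pose from PointNet is omitted here. A minimal 2-D sketch under those assumptions:

```python
import numpy as np

def best_fit_transform(src, dst):
    """Least-squares rigid transform (R, t) mapping src points onto dst via SVD."""
    cs, cd = src.mean(0), dst.mean(0)
    H = (src - cs).T @ (dst - cd)          # cross-covariance of centred sets
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:               # correct an improper (reflection) solution
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = cd - R @ cs
    return R, t

def icp(src, dst, iters=20):
    """Iterate nearest-neighbour matching followed by best-fit alignment."""
    cur = src.copy()
    for _ in range(iters):
        d = ((cur[:, None] - dst[None]) ** 2).sum(-1)   # pairwise squared distances
        matched = dst[d.argmin(1)]                      # closest dst point per src point
        R, t = best_fit_transform(cur, matched)
        cur = cur @ R.T + t
    return cur
```

Real point-cloud registration would add outlier rejection and a convergence test; this sketch keeps only the two core alternating steps.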

5.
In the wake of COVID-19, the production demand for medical equipment is increasing rapidly. Such products have complex and flexible structures and are mainly assembled by hand or by fixed-program automation; the low efficiency and adaptability of the current assembly mode cannot meet the assembly requirements. This paper therefore proposes a new framework for human-robot collaborative (HRC) assembly based on digital twin (DT). The data management system of the proposed framework integrates all kinds of data from the digital twin spaces. To obtain the HRC strategy and action sequence in a dynamic environment, a double deep deterministic policy gradient (D-DDPG) is applied as the optimization model in the DT. During assembly, a performance model is adopted to evaluate the quality and resilience of the assembly. The proposed framework is finally validated on an alternator assembly case, which shows that DT-based HRC assembly significantly improves assembly efficiency and safety.

6.
As the manufacturing industry becomes more agile, collaborative robots capable of safely working with humans are becoming more prevalent, while adaptable and natural interaction remains a goal yet to be achieved. This work presents a cognitive architecture composed of perception and reasoning modules that allows a robot to adapt its actions while collaborating with humans in an assembly task. Human action recognition is performed using convolutional neural network models with inertial measurement unit and skeleton tracking data. The action predictions are used for task status reasoning, which predicts the time left for each action in a task, allowing the robot to plan future actions. The task status reasoning uses a recurrent neural network method developed for transferability to new actions and tasks. Updateable input parameters that let the system optimise for each user and task with each trial performed are also investigated. Finally, the complete system is demonstrated with the collaborative assembly of a small chair and a wooden box, along with a solo robot task of stacking objects performed when the robot would otherwise be idle. The human actions recognised are using a screwdriver, using an Allen key, hammering and hand screwing, with online accuracies between 83% and 92%. User trials demonstrate the robot deciding when to start collaborative actions in order to synchronise with the user, as well as deciding when it has time to complete an action of its solo task before a collaborative action is required.

7.
In human-robot collaborative manufacturing, industrial robots work alongside human workers, jointly performing the assigned tasks seamlessly. A human-robot collaborative manufacturing system is more customised and flexible than conventional manufacturing systems. In assembly, a practical human-robot collaborative system should be able to predict a human worker's intention and assist the human during assembly operations. In response to this requirement, this research proposes a new human-robot collaborative system design. The primary focus of the paper is to model product assembly tasks as sequences of human motions. Existing human motion recognition techniques are applied to recognise the motions, and a hidden Markov model is applied to the motion sequences to generate a motion transition probability matrix. Based on this matrix, human motion prediction becomes possible. The predicted human motions are evaluated and applied in task-level human-robot collaborative assembly.

8.
Human–Robot Collaboration (HRC) describes tasks in which robots and humans work together to achieve a goal. Unlike traditional industrial robots, collaborative robots need to be adaptive, able to alter their approach to better suit the situation and the needs of the human partner. As traditional programming techniques can struggle with the complexity required, an emerging approach is to learn a skill by observing human demonstration and imitating the motions, commonly known as Learning from Demonstration (LfD). In this work, we present an LfD methodology that combines an ensemble machine learning algorithm, the Random Forest (RF), with stochastic regression, using haptic information captured from human demonstration. The capabilities of the proposed method are evaluated on two collaborative tasks: co-manipulation of an object (where the human provides the guidance but the robot bears the object's weight) and collaborative assembly of simple interlocking parts. The proposed method is shown to be capable of imitation learning, interpreting human actions and producing equivalent robot motion across a diverse range of initial and final conditions. After verifying that ensemble machine learning can be utilised for real robotics problems, we propose a further extension, a Weighted Random Forest (WRF), which attaches a weight to each tree based on its performance. The WRF approach is then shown to outperform RF in HRC tasks.
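The WRF idea of weighting each tree by its performance can be sketched independently of any particular tree implementation: score every tree on a validation set, then combine per-tree predictions with a weighted vote. A minimal sketch with illustrative labels (the paper's weighting scheme may differ in detail):

```python
def tree_weights(per_tree_val_preds, y_val):
    """Weight each tree by its validation accuracy (the WRF idea)."""
    return [sum(p == y for p, y in zip(preds, y_val)) / len(y_val)
            for preds in per_tree_val_preds]

def weighted_vote(per_tree_preds, weights):
    """Weighted majority vote over one sample's per-tree predictions."""
    scores = {}
    for pred, w in zip(per_tree_preds, weights):
        scores[pred] = scores.get(pred, 0.0) + w
    return max(scores, key=scores.get)
```

With uniform weights this reduces to the plain RF majority vote; accuracy-based weights let strong trees outvote weak ones.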

9.
As the number and complexity of machines increase in Industry 4.0, the maintenance process becomes more time-consuming and labor-intensive, involving many refined maintenance operations. Fortunately, human-robot collaboration (HRC) can integrate human intelligence into the collaborative robot (cobot), combining the nimble, knowledge-driven operations of personnel with the reliable, repeatable manipulation of cobots. However, existing HRC maintenance lacks a precise understanding of maintenance intention, efficient HRC decision-making for executing robotized maintenance tasks (e.g., repetitive manual tasks), and a convenient interaction interface for executing cognitive tasks (e.g., maintenance preparation and guidance jobs). Hence, a mixed perception-based human-robot collaborative maintenance approach with a three-level structure is proposed in this paper to mitigate these problems. In the first stage, a mixed perception module helps the cobot recognize human safety and maintenance requests from human actions and gestures, respectively. In the second stage, an improved online deep reinforcement learning (DRL)-enabled decision-making module with an asynchronous structure and anti-disturbance capability is proposed, which realizes the execution of robotized maintenance tasks. In the third stage, an augmented reality (AR)-assisted, user-friendly interaction interface is designed to help personnel interact with the cobot and execute auxiliary maintenance tasks without spatial or human-factor limitations; auxiliary maintenance operations are also supported by AR-assisted visual guidance. Finally, comparative experiments in a typical machining workshop show that the proposed HRC maintenance approach performs competitively against other state-of-the-art methods.

10.
This study presents a graphic modeling and analysis tool for constructing an operator's mental model in fault diagnosis tasks. In most automatic and complicated process control systems, human fault diagnosis tasks have become increasingly complex and specialized. The system designer should consider the cognitive process of the human operator to avert failures of action implementation owing to a lack of compatibility between the human and the aiding-system interface. Here, an experiment is performed to investigate the nature of human fault diagnosis, and a graphic modeling and analysis tool is then proposed to model the continuous process of human fault diagnosis. The approach exploits both line charts and Petri nets to represent the operator's thoughts and actions. Moreover, the results are integrated into an adaptive standard diagnosis model that can assess the operator's mental workload and accurately depict the interactions between the human operator and the aiding system.
Relevance to industry: Automatic intelligent diagnosis systems cannot provide satisfactory operating performance; human diagnosticians are more effective than computer ones. The results offer further insight into operator behavior in graphic form and into how to design a better aiding system.

11.
An automatic understanding system MAD-READER based on the techniques of image processing, pattern recognition, and artificial intelligence has been developed for mechanical engineering drawings. The principles of the system are presented, which include the methods and techniques of recognition and understanding for topological assembly drawings (TAD). A rule-based generator GEN-PLAN is devised to generate assembly plans directly from TAD assembly drawings. A variety of TAD assembly drawings has been used for testing the generator. So far, GEN-PLAN has been used to recognize TAD assembly drawings consisting of 31 part symbols and to generate their assembly plans. The present generator has shown favorable results.

12.
Given the diversity of target attribute elements on the actual battlefield, traditional target intention recognition methods cannot comprehensively model the similarity between attributes. To better characterize the complexity of the actual battlefield and improve the accuracy of target intention recognition, a high-dimensional data similarity model is proposed that fuses an improved spatial similarity with attribute similarity, comprehensively computing the degree to which each attribute state of a target supports a situational intention. The resulting high-dimensional similarity is then fed into D-S evidence theory for sequential target recognition. Simulation experiments show that the method is effective, improves the accuracy of target intention recognition, and provides a new approach to recognizing tactical target intention.
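The sequential D-S combination step above rests on Dempster's rule, which fuses two belief mass functions and renormalizes away conflicting mass. A minimal sketch over focal elements represented as frozensets (the intention labels below are illustrative, not from the paper):

```python
def dempster_combine(m1, m2):
    """Dempster's rule of combination for two mass functions.

    m1, m2: dicts mapping frozenset focal elements to masses summing to 1.
    Conflicting (empty-intersection) mass is discarded and the rest renormalised.
    """
    combined, conflict = {}, 0.0
    for a, wa in m1.items():
        for b, wb in m2.items():
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + wa * wb
            else:
                conflict += wa * wb
    k = 1.0 - conflict                    # normalisation constant
    return {s: w / k for s, w in combined.items()}
```

Sequential recognition applies this rule repeatedly, folding each new evidence source into the running combined mass function.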

13.
Human action recognition, the understanding of basic human actions from video streams, has a long history in computer vision and pattern recognition because of its many applications. We propose a novel human action recognition methodology that extracts human skeletal features and separates them into several body parts, such as face, torso, and limbs, to efficiently visualize and analyze the motion of each part. Our proposed system consists of two steps: (i) automatic skeletal feature extraction and splitting, by measuring the similarity between neighboring pixels in the space of diffusion tensor fields, and (ii) human action recognition using a multiple kernel based Support Vector Machine. Experimental results on a test database show that the proposed method efficiently and effectively recognizes actions using few parameters.
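The multiple-kernel SVM step combines per-body-part base kernels into a single Gram matrix, typically as a convex combination. A minimal sketch of that combination (the weights and base kernels are illustrative; the paper's kernel choices are not specified here):

```python
import numpy as np

def combined_kernel(grams, betas):
    """Convex combination of base Gram matrices, as in multiple kernel learning."""
    betas = np.asarray(betas, dtype=float)
    betas = betas / betas.sum()            # normalise weights to sum to 1
    return sum(b * K for b, K in zip(betas, grams))
```

The combined matrix stays positive semidefinite because each base kernel is, and the weights are non-negative; an SVM can consume it directly as a precomputed kernel.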

14.
Objective: To improve the accuracy of action recognition in video, this paper proposes a video action recognition algorithm based on action segmentation and manifold metric learning. Method: First, actions in the video are segmented using a method based on analyzing the extension of the subject's limbs, making the object of recognition concrete. Then, normalized global temporal and spatial features, optical-flow features, and intra-frame local curl and divergence features are extracted from each action clip, and a 7×7 covariance-matrix descriptor is constructed to fuse these features. Finally, manifold metric learning is used to find, in a supervised manner, a better distance metric and improve recognition and classification. Results: Segmentation statistics on the public Weizmann video set show that the proposed segmentation method works well as preprocessing for action recognition. A before/after comparison on Weizmann shows that manifold metric learning improves recognition by 2.8%. Average recognition rates on the Weizmann and KTH datasets are 95.6% and 92.3%, respectively, outperforming existing methods in comparison. Conclusion: Repeated experiments show that the action segmentation used in preprocessing is effective and that the constructed covariance matrix fuses multiple features well; adding optical-flow, curl, and divergence information gives a more detailed description of the motion direction of each body part, effectively improving the descriptor, and combining it with manifold metric learning markedly improves recognition accuracy.
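The covariance-matrix descriptor above fuses multiple per-frame features into one symmetric positive-definite (SPD) matrix, and manifold-aware metrics then compare descriptors on the SPD manifold. A minimal sketch using the log-Euclidean distance as a representative manifold metric (feature dimensions here are illustrative, not the paper's 7):

```python
import numpy as np

def covariance_descriptor(features):
    """features: (n_frames, d) stacked per-frame feature rows -> d x d covariance."""
    return np.cov(features, rowvar=False)

def log_euclidean_distance(c1, c2, eps=1e-6):
    """Manifold-aware distance between SPD covariance descriptors."""
    def logm(c):
        # Matrix logarithm via eigendecomposition; eps keeps eigenvalues positive.
        w, v = np.linalg.eigh(c + eps * np.eye(c.shape[0]))
        return v @ np.diag(np.log(w)) @ v.T
    return np.linalg.norm(logm(c1) - logm(c2))
```

Mapping descriptors through the matrix logarithm flattens the SPD manifold so that ordinary Euclidean distances (and metric learning on top of them) become meaningful.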

15.
To recognize complex interactive actions, a feature learning method based on depth information is proposed, together with a two-layer classification strategy to resolve similar actions. The method analyzes depth-image action sequences in the frequency domain and extracts frequency-domain features, uses a VAE to compress them into a spatial feature representation, and builds an HMM to model temporal variation for first-layer action recognition. To distinguish similar actions, 3D joint features are introduced for second-layer recognition. Experimental results on the SBU-Kinect action dataset show that the two kinds of features effectively represent pose semantics, and that the strategy is simple, effective, and achieves high recognition accuracy.

16.
17.
18.
This paper describes an approach to human action recognition based on a probabilistic optimization model of body parts using hidden Markov models (HMMs). Our method is able to distinguish between similar actions by considering only the body parts making a major contribution to the actions: for example, legs for walking, jogging and running; arms for boxing, waving and clapping. We apply HMMs to model the stochastic movement of the body parts for action recognition. The HMM construction uses an ensemble of body-part detectors, followed by grouping of part detections, to perform human identification. Three example-based body-part detectors are trained to detect three components of the human body: the head, legs and arms. These detectors cope with viewpoint changes and self-occlusions through the use of ten sub-classifiers that detect body parts over a specific range of viewpoints. Each sub-classifier is a support vector machine trained on features selected for their discriminative power for each particular part/viewpoint combination. Grouping of these detections is performed using a simple geometric constraint model that yields a viewpoint-invariant human detector. We test our approach on three publicly available action datasets: the KTH, Weizmann and HumanEva datasets. Our results illustrate that, with a simple and compact representation, we can achieve robust recognition of human actions comparable to the most complex state-of-the-art methods.

19.
Deep learning has achieved good results in human action recognition, but the appearance and motion information of people in video still needs to be exploited more fully. To use both spatial and temporal information in video for action recognition, a spatio-temporal two-stream video action recognition model is proposed. The model first uses two convolutional neural networks to extract spatial and temporal features, respectively, from action clips, then fuses the two networks to extract mid-level spatio-temporal features, and finally feeds these mid-level features into a 3D convolutional neural network to recognize the action in the video. Action recognition experiments on the UCF101 and HMDB51 datasets show that the proposed two-stream 3D CNN model effectively recognizes human actions in video.
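Two-stream models are often summarized by the simpler late-fusion baseline: average or weight the per-class scores of the spatial and temporal streams. Note this is a common baseline, not the mid-level feature fusion into a 3D CNN that the paper describes; a minimal sketch with illustrative scores and weight:

```python
import numpy as np

def fuse_two_stream(spatial_scores, temporal_scores, w_spatial=0.4):
    """Weighted late fusion of per-class scores from the two streams.

    Returns the index of the winning action class.
    """
    fused = (w_spatial * np.asarray(spatial_scores)
             + (1.0 - w_spatial) * np.asarray(temporal_scores))
    return int(np.argmax(fused))
```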

20.
An Improved Human Action Recognition Algorithm Based on Hybrid Features
The choice of motion features directly affects the recognition performance of human action recognition methods. A single feature is affected differently by human appearance, the environment, camera settings and other factors, so its applicable scope and recognition performance are limited. Building on a study of the representation and recognition of human actions, and weighing the strengths and weaknesses of different features, a hybrid feature combining global silhouette features and local optical-flow features is proposed and applied to human action recognition. Experimental results show that the algorithm achieves ideal recognition results, reaching a 100% correct recognition rate on the actions in the Weizmann dataset.
