Similar Documents
1.
The approach of Learning from Demonstrations (LfD) can support human operators, especially those without much programming experience, in controlling a collaborative robot (cobot) in an intuitive and convenient manner. The Gaussian Mixture Model and Gaussian Mixture Regression (GMM and GMR) are useful tools for implementing such an LfD approach. However, a well-performing GMM/GMR requires a series of demonstrations free of trembling and jerky features, which is challenging to achieve in actual environments. To address this issue, this paper presents a novel optimised approach that improves the Gaussian clusters, and thereby GMM/GMR, so that LfD-enabled cobots can carry out a variety of complex manufacturing tasks effectively. This research has three distinguishing innovative characteristics: 1) a Gaussian noise strategy is designed to scatter demonstrations with trembling and jerky features to better support the optimisation of GMM/GMR; 2) a Simulated Annealing-Reinforcement Learning (SA-RL) based optimisation algorithm is developed to refine the number of Gaussian clusters, eliminating potential under-/over-fitting issues in GMM/GMR; 3) a B-spline based cut-in algorithm is integrated with GMR to improve the adaptability of reproduced solutions for dynamic manufacturing tasks. To verify the approach, case studies of pick-and-place tasks with different complexities were conducted. Experimental results and comparative analyses showed that the developed approach performs well in terms of computational efficiency, solution quality and adaptability.
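As a rough illustration of the GMM/GMR core of such an LfD pipeline (not the paper's optimised variant), the sketch below fits a joint GMM over time-stamped demonstration points and regresses a mean trajectory; `n_components` is the quantity the paper's SA-RL step would tune, and the demonstration data here is synthetic.

```python
# Minimal GMM/GMR sketch for trajectory learning from demonstrations.
# Assumes 1-D time input and 1-D position output; the paper's Gaussian
# noise scattering, SA-RL cluster tuning and B-spline cut-in are omitted.
import numpy as np
from scipy.stats import norm
from sklearn.mixture import GaussianMixture

# Synthetic "demonstrations": noisy samples of a reference motion.
rng = np.random.default_rng(0)
t = np.tile(np.linspace(0.0, 1.0, 100), 5)            # 5 demonstrations
x = np.sin(2 * np.pi * t) + 0.05 * rng.normal(size=t.size)
data = np.column_stack([t, x])

gmm = GaussianMixture(n_components=6, random_state=0).fit(data)

def gmr(t_query):
    """Gaussian Mixture Regression: E[x | t] under the fitted joint GMM."""
    means, covs, w = gmm.means_, gmm.covariances_, gmm.weights_
    # Responsibility of each component for the query time.
    h = np.array([w[k] * norm.pdf(t_query, means[k, 0], np.sqrt(covs[k, 0, 0]))
                  for k in range(gmm.n_components)])
    h /= h.sum()
    # Conditional mean of x given t for each component, then blend.
    cond = [means[k, 1] + covs[k, 1, 0] / covs[k, 0, 0] * (t_query - means[k, 0])
            for k in range(gmm.n_components)]
    return float(np.dot(h, cond))

reproduced = [gmr(tq) for tq in np.linspace(0.0, 1.0, 50)]
```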

2.
Collaborative robots (cobots), an important component of the Industry 5.0 era, have been rapidly entering a variety of industrial application scenarios. However, the employees working alongside them are often reluctant to accept cobots in the workplace, so the traditional technology acceptance model (TAM) is unsuitable for studying the acceptance of AI-equipped cobots and the human-robot interaction process. In addition, anthropomorphism alone cannot explain, at a mechanistic level, why employee acceptance decreases as cobot anthropomorphism increases. Therefore, based on the human-robot interaction phenomena in this emerging industrial field, and combining the Uncanny Valley effect with intergroup threat theory, 300 subjects were invited to participate in an empirical study using the experimental vignette methodology (EVM). The findings are as follows: 1) perceived competence mediates the relationship between cobot anthropomorphism and acceptance of cobots; 2) perceived competence and perceived threat serially mediate the relationship between cobot anthropomorphism and acceptance of cobots; 3) cobot use self-efficacy moderates the relationship between perceived competence and perceived threat. The results provide a mechanistic explanation for the low acceptance of cobots, offer measures and methods to improve acceptance, and provide solutions for the promotion and application of cobots in the industrial field.

3.
Collaborative robots (cobots) are robots designed to collaborate with humans in an open workspace. In contrast to industrial robots operating in enclosed environments, cobots need additional mechanisms to assure human safety during collaboration. This is especially true when a cobot is used in a manufacturing environment, since the workload or moving mass is usually large enough to hurt a human when contact occurs. In this article, we are interested in understanding the existing studies on cobots: in particular, the safety requirements, and the methods and challenges of safety assurance. The state of the art of cobot safety assurance is discussed in terms of key functional requirements (FRs), collaboration variants, standardization, and safety mechanisms. The identified technological bottlenecks are (1) acquiring, processing, and fusing diversified data for risk classification, (2) effectively updating the control to avoid any interference in real time, (3) developing new technologies to improve HMI performance, especially workloads and speeds, and (4) reducing the overall cost of safety assurance features. To promote cobots in manufacturing applications, future research is expected on (1) systematic theory and methods to design and build cobots that integrate ergonomic structures, sensing, real-time controls, and human-robot interfaces, (2) intuitive, task-driven, and skill-based programming that incorporates risk management and the evaluation of biomechanical load and stopping distance, and (3) advanced instrumentation and algorithms for effective sensing, processing, and fusing of diversified data, and machine learning for high-level complexity and uncertainty. The needs of safety assurance for integrated robotic systems are specifically discussed with two development examples.

4.
We propose an enhanced grid-density based approach for clustering high dimensional data. Our technique takes objects (or points) as atomic units, so that the size requirement on cells is waived without losing clustering accuracy. For efficiency, a new partitioning is developed to make the number of cells smoothly adjustable; the concept of ith-order neighbors is defined to avoid considering an exponential number of neighboring cells; and a novel density compensation is proposed to improve clustering accuracy and quality. We experimentally evaluate our approach and demonstrate that our algorithm significantly improves clustering accuracy and quality.
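A bare-bones illustration of the grid-density idea (uniform cells, a density threshold, and merging of adjacent dense cells), without the paper's adjustable partitioning, ith-order neighbors, or density compensation; cell size and threshold values are assumptions:

```python
# Minimal grid-density clustering sketch: bin points into uniform cells,
# keep cells above a density threshold, and merge orthogonally adjacent
# dense cells into clusters via BFS.
import numpy as np
from collections import defaultdict, deque

def grid_density_cluster(points, cell_size=0.5, min_pts=5):
    cells = defaultdict(list)                      # cell index -> point ids
    for i, p in enumerate(points):
        cells[tuple(np.floor(p / cell_size).astype(int))].append(i)
    dense = {c for c, ids in cells.items() if len(ids) >= min_pts}

    labels, cluster = {}, 0
    for seed in dense:                             # BFS over adjacent dense cells
        if seed in labels:
            continue
        queue = deque([seed]); labels[seed] = cluster
        while queue:
            c = queue.popleft()
            for dim in range(len(c)):              # orthogonal neighbors only
                for step in (-1, 1):
                    n = c[:dim] + (c[dim] + step,) + c[dim + 1:]
                    if n in dense and n not in labels:
                        labels[n] = cluster; queue.append(n)
        cluster += 1

    point_labels = np.full(len(points), -1)        # -1 marks noise points
    for c, ids in cells.items():
        if c in labels:
            point_labels[ids] = labels[c]
    return point_labels

pts = np.vstack([np.random.default_rng(1).normal(c, 0.2, (100, 2)) for c in (0, 3)])
print(np.unique(grid_density_cluster(pts)))
```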

5.
6.
Object recognition using laser range finder and machine learning techniques
In recent years, computer vision has been widely used in industrial environments, allowing robots to perform important tasks such as quality control, inspection, and recognition. Vision systems are typically used to determine the position and orientation of objects in the workstation, enabling them to be transported and assembled by a robotic cell (e.g. an industrial manipulator). These systems commonly resort to CCD (Charge-Coupled Device) cameras either fixed over a particular work area or attached directly to the robotic arm (eye-in-hand vision system). Although this is a valid approach, the performance of such vision systems is directly influenced by the lighting of the industrial environment. Taking all this into consideration, a new approach is proposed for eye-in-hand systems in which the camera is replaced by a 2D Laser Range Finder (LRF). The LRF is attached to a robotic manipulator, which executes a pre-defined path to produce grayscale images of the workstation. With this technique the interference of environment lighting is minimized, resulting in a more reliable and robust computer vision system. After the grayscale image is created, this work focuses on the recognition and classification of different objects using inherent features (based on Hu's invariant moments) with the most well-known machine learning models: k-Nearest Neighbors (kNN), Neural Networks (NNs), and Support Vector Machines (SVMs). In order to achieve good performance for each classification model, a wrapper method is used to select a good subset of features, and K-fold cross-validation is used as the model assessment technique to adjust the parameters of the classifiers. The performance of the models is also compared, achieving generalized accuracies of 83.5% for kNN, 95.5% for the NN, and 98.9% for the SVM. These high performances are related to the feature selection algorithm, based on the simulated annealing heuristic, and to the model assessment (k-fold cross-validation). Together they make it possible to identify the most important features in the recognition process and to adjust the best parameters for the machine learning models, increasing the classification rate for the work objects present in the robot's environment.
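A hedged sketch of the recognition stage only: Hu invariant moments as features, an SVM classifier, and k-fold cross-validation for assessment. Synthetic binary shape images stand in for the LRF-derived grayscale images, and the kernel/parameter choices are assumptions rather than the paper's tuned values.

```python
# Hu moments -> SVM -> k-fold CV, on synthetic circle/rectangle shapes.
import cv2
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def hu_features(binary_img):
    """7 Hu moments, log-scaled for numerical stability (common practice)."""
    hu = cv2.HuMoments(cv2.moments(binary_img)).flatten()
    return -np.sign(hu) * np.log10(np.abs(hu) + 1e-30)

def make_shape(kind, rng):
    img = np.zeros((64, 64), np.uint8)
    if kind == 0:
        cv2.circle(img, (32 + int(rng.integers(-5, 5)), 32), 15, 255, -1)
    else:
        cv2.rectangle(img, (15, 20), (50, 45 + int(rng.integers(-5, 5))), 255, -1)
    return img

rng = np.random.default_rng(0)
X = np.array([hu_features(make_shape(k % 2, rng)) for k in range(200)])
y = np.array([k % 2 for k in range(200)])

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10.0))
print(cross_val_score(clf, X, y, cv=5).mean())     # k-fold CV accuracy
```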

7.
Each single-source multicast session (SSMS) transmits packets from a source node s_i to a group of destination nodes t_i, i = 1, 2, …, n. An SSMS's path can be established with a routing algorithm, which constructs a multicast path between the source and the destinations; the routing algorithm must be performed once for each SSMS. When the number of SSMSs grows to N ≥ 2, the routing algorithm must be performed separately N times, once per source node, so computation time and bandwidth consumption grow. To remove this problem, in this paper we present a new approach that merges different SSMSs into a new multicast session, which requires only one execution of a routing algorithm. The new approach, which merges different sessions together, is based on optimal resource allocation and Constraint-Based Routing (CBR). We show that, compared to other available routing algorithms, it improves computation time and bandwidth consumption and increases data rate and network efficiency. The new approach uses CBR to merge more than one single-source multicast session (SSMS) problem into one multi-source multicast session (MSMS) problem. By solving one MSMS problem instead of several SSMS problems, we can obtain an optimal solution that is more efficient than the optimal solutions of the individual SSMS problems.
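A toy illustration of the saving that merging can expose: edges shared between per-session multicast paths are paid for once in the merged solution. This is not the paper's CBR optimisation; the topology, sessions, and shortest-path tree approximation below are all assumptions for demonstration.

```python
# Count edge usage for separately routed sessions vs. one merged session.
import networkx as nx

G = nx.grid_2d_graph(4, 4)                               # sample topology
sessions = [((0, 0), [(3, 3), (3, 0)]),                  # (source, destinations)
            ((0, 1), [(3, 3), (0, 3)])]

def session_edges(g, src, dests):
    """Union of shortest src->dest paths: a simple multicast approximation."""
    edges = set()
    for d in dests:
        path = nx.shortest_path(g, src, d)
        edges |= {frozenset(e) for e in zip(path, path[1:])}
    return edges

separate = sum(len(session_edges(G, s, ds)) for s, ds in sessions)
merged = len(set.union(*(session_edges(G, s, ds) for s, ds in sessions)))
print(f"edge usage: separate={separate}, merged={merged}")
```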

8.
We study the scheduling situation where n tasks with identical processing times have to be scheduled on m parallel processors. Each task is subject to a release date and simultaneously requires a fixed number of processors. We show that, for each fixed value of m, the problem of minimizing total completion time can be solved in polynomial time. The complexity status of the corresponding problem Pm | r_i, p_i = p, size_i | ∑C_i was previously unknown.
Scope and purpose: There has been increasing interest in multiprocessor scheduling, i.e., in scheduling models where tasks require several processors (machines) simultaneously. Many scheduling problems fit this model, and a large amount of theoretical research on multiprocessor scheduling has been carried out. In this paper we study the situation where tasks, subject to release dates, have identical processing times, and we introduce a dynamic programming algorithm that computes the minimum total completion time. Although this scheduling problem had been open in the literature for several years, our algorithm is simple and easy to understand.
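To make the problem setting concrete, here is a greedy feasibility sketch of Pm | r_i, p_i = p, size_i | ∑C_i: equal-length tasks with release dates and processor demands, scored by total completion time. This is emphatically not the paper's polynomial-time dynamic program, only an illustration of the instance and objective; it assumes every task demand fits on the platform (size ≤ m).

```python
# Greedy baseline: at each event time, start any released task that fits.
import heapq

def greedy_total_completion(tasks, m, p):
    """tasks: list of (release_date, size). Returns the sum of completion times."""
    pending = sorted(tasks)              # earliest release first
    running = []                         # min-heap of (finish_time, size)
    t, free, total = 0, m, 0
    while pending or running:
        while running and running[0][0] <= t:         # collect finished tasks
            finish, size = heapq.heappop(running)
            total += finish
            free += size
        started = False
        for j, (r, size) in enumerate(pending):
            if r <= t and size <= free:               # released and it fits
                free -= size
                heapq.heappush(running, (t + p, size))
                pending.pop(j)
                started = True
                break
        if not started:                               # jump to the next event
            nxt = [running[0][0]] if running else []
            nxt += [r for r, _ in pending if r > t]
            t = min(nxt) if nxt else t
    return total

print(greedy_total_completion([(0, 2), (0, 1), (1, 3)], m=4, p=2))  # -> 8
```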

9.
Human Activity Recognition (HAR) from video data collections is a core application in vision tasks, with a variety of uses including object detection, video-based behavior monitoring, video classification and indexing, patient monitoring, robotics, and behavior analysis. Although many techniques are available for HAR in video analysis tasks, most do not focus on behavioral analysis. Hence, a new HAR system that analyses the behavioral activity of a person based on a deep learning approach is proposed in this work. Its most essential aim is to recognize complex activities, which is useful in many tasks based on object detection, the modelling of individual frame characteristics, and the communication among them. Moreover, this work focuses on detecting human actions across various video resolutions, invariant human poses, and the proximity of multiple objects. First, we identify the key and essential frames of each activity using histogram differences. Secondly, the Discrete Wavelet Transform (DWT) is used to extract coefficients from the sequence of key frames in which the activity is localized in space. Finally, an Adaptive Weighted Flow Net (AWFN) algorithm is proposed for effective video activity recognition. The proposed algorithm has been evaluated against the existing Visual Geometry Group (VGG-16) convolutional neural network for performance comparison. This work focuses on competent deep learning-based feature extraction to discriminate between activities and improve classification accuracy. The proposed model has been evaluated against VGG-16 using a combination of the regular UCF-101 activity dataset and the very challenging low-quality videos of HMDB51. These investigations show that the proposed AWFN approach gives a higher detection accuracy of 96%, approximately 0.3% to 7.88% higher than state-of-the-art methods.
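A sketch of the two pre-processing steps described above: key-frame selection by histogram difference, then 2-D DWT coefficients per key frame. The frames are synthetic, and the bin count, threshold, and wavelet choice are assumptions, not the paper's settings.

```python
# Key frames via histogram difference, then per-frame DWT features.
import numpy as np
import pywt

def key_frames(frames, bins=32, thresh=0.25):
    """Keep frames whose gray-level histogram differs enough from the last kept one."""
    kept, last_hist = [], None
    for idx, f in enumerate(frames):
        hist, _ = np.histogram(f, bins=bins, range=(0, 255), density=True)
        if last_hist is None or np.abs(hist - last_hist).sum() > thresh:
            kept.append(idx)
            last_hist = hist
    return kept

def dwt_features(frame, wavelet="haar"):
    """Approximation + detail coefficients of a single-level 2-D DWT."""
    cA, (cH, cV, cD) = pywt.dwt2(frame.astype(float), wavelet)
    return np.concatenate([c.ravel() for c in (cA, cH, cV, cD)])

rng = np.random.default_rng(0)
video = rng.integers(0, 256, size=(30, 64, 64))        # 30 synthetic frames
selected = key_frames(video)
features = np.array([dwt_features(video[i]) for i in selected])
print(len(selected), features.shape)
```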

10.
This paper presents a new approach for the voxelization of volumetric scene graphs. The algorithm generates slices of each primitive to be voxelized using an FPGA-based pixel processor. The Blist representation is used for the volume scene tree, which reduces the storage requirement for each voxel to log(H+1) bits. The most important advantage of this voxelization algorithm is that any volume scene tree expression can be evaluated without extra computation or a stack. Moreover, the algorithm is not object specific, i.e. the same algorithm can be used for the voxelization of different types of objects (convex and concave objects, polygons, lines, and surfaces).
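A minimal software stand-in for the slice-based idea (one z-slice of the primitive rasterized at a time into a voxel grid); the FPGA pixel processor and the Blist tree encoding are not reproduced here, and the sphere primitive is an assumption.

```python
# Slice-by-slice voxelization of a sphere into a boolean n^3 grid.
import numpy as np

def voxelize_sphere(n, center, radius):
    grid = np.zeros((n, n, n), dtype=bool)
    ys, xs = np.mgrid[0:n, 0:n]
    cx, cy, cz = center
    for z in range(n):                      # one "slice" per iteration
        r2 = radius**2 - (z - cz)**2        # squared radius of this slice
        if r2 >= 0:
            grid[z] = (xs - cx)**2 + (ys - cy)**2 <= r2
    return grid

vox = voxelize_sphere(32, center=(16, 16, 16), radius=10)
print(vox.sum(), "voxels set")
```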

11.
12.
The problem of optimal non-hierarchical clustering is addressed. A new algorithm combining differential evolution and k-means is proposed and tested on eight well-known real-world data sets. Two criteria (clustering validity indexes), namely TRW and VCR, were used in the optimization of the classification. The classification of objects to be optimized is encoded by the cluster centers in the differential evolution (DE) algorithm. This induces the problem of rearranging the centers within the population to ensure an efficient search via the application of evolutionary operators, and a new efficient heuristic for this rearrangement is also proposed. The plain DE variants, with and without the rearrangement, were compared with the corresponding hybrid k-means variants. The experimental results showed that the hybrid variants with the k-means algorithm are substantially more efficient than the non-hybrid ones. Compared to a standard k-means algorithm with restarts, the new hybrid algorithm was found to be more reliable and more efficient, especially on difficult tasks. The results for the TRW and VCR criteria were compared: both criteria yielded the same optimal partitions, and no significant differences were found in the efficiency of the algorithms using them.
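A hedged sketch of the hybrid idea: differential evolution searches over flattened cluster-center vectors, and each candidate is refined by a few k-means steps before evaluating a within-cluster-distance criterion (a TRW-like objective). The data, iteration counts, and criterion below are stand-ins, and the paper's center-rearrangement heuristic is omitted.

```python
# DE over cluster centers, with k-means local refinement per candidate.
import numpy as np
from scipy.optimize import differential_evolution
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs

X, _ = make_blobs(n_samples=300, centers=3, random_state=0)
K, D = 3, X.shape[1]

def objective(flat_centers):
    centers = flat_centers.reshape(K, D)
    km = KMeans(n_clusters=K, init=centers, n_init=1, max_iter=5).fit(X)
    return km.inertia_            # total within-cluster sum of squares

bounds = [(X[:, d].min(), X[:, d].max()) for _ in range(K) for d in range(D)]
result = differential_evolution(objective, bounds, maxiter=20, seed=0, polish=False)
print(result.fun, result.x.reshape(K, D))
```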

13.
The field of data mining has become accustomed to specifying constraints on patterns of interest. A large number of systems and techniques have been developed for solving such constraint-based mining problems, especially for mining itemsets. The approach taken in the field of data mining contrasts with the constraint programming principles developed within the artificial intelligence community. While most data mining research focuses on algorithmic issues and aims at developing highly optimized and scalable implementations tailored towards specific tasks, constraint programming employs a more declarative approach. The emphasis lies on developing high-level modeling languages and general solvers that specify what the problem is, rather than outlining how a solution should be computed, yet are powerful enough to be used across a wide variety of applications and application domains.
This paper contributes a declarative constraint programming approach to data mining. More specifically, we show that it is possible to employ off-the-shelf constraint programming techniques for modeling and solving a wide variety of constraint-based itemset mining tasks, such as frequent, closed, discriminative, and cost-based itemset mining. In particular, we develop a basic constraint programming model for specifying frequent itemsets and show that this model can easily be extended to realize the other settings. This contrasts with typical procedural data mining systems, where the underlying procedures need to be modified in order to accommodate new types of constraint, or novel combinations thereof. Even though state-of-the-art data mining systems outperform the constraint programming approach on some standard tasks, we also show that there exist problems where the constraint programming approach leads to significant performance improvements over state-of-the-art data mining methods, as well as to new insights into the underlying data mining problems. Many such insights can be obtained by relating the underlying search algorithms of data mining and constraint programming systems to one another. We discuss a number of interesting new research questions and challenges raised by the declarative constraint programming approach to data mining.
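A sketch of the declarative formulation: frequent itemset mining written as a constraint model. The encoding below (Boolean item variables, reified transaction-coverage variables, a frequency constraint) follows the general item/transaction reification idea; it is written for Google OR-Tools CP-SAT as an assumed stand-in for the generic CP solver used in the paper, on a tiny made-up database.

```python
# Frequent itemset mining as a CP-SAT model, enumerating all solutions.
from ortools.sat.python import cp_model

transactions = [{0, 1, 2}, {0, 2}, {1, 2}, {0, 1, 2, 3}]
n_items, min_support = 4, 2

model = cp_model.CpModel()
item = [model.NewBoolVar(f"item_{i}") for i in range(n_items)]
cover = [model.NewBoolVar(f"cover_{t}") for t in range(len(transactions))]

for t, trans in enumerate(transactions):
    missing = [item[i] for i in range(n_items) if i not in trans]
    if missing:
        # cover[t] <=> the chosen itemset is a subset of transaction t
        model.Add(sum(missing) == 0).OnlyEnforceIf(cover[t])
        model.Add(sum(missing) >= 1).OnlyEnforceIf(cover[t].Not())
    else:
        model.Add(cover[t] == 1)          # transaction contains every item

model.Add(sum(cover) >= min_support)      # frequency constraint
model.Add(sum(item) >= 1)                 # non-empty itemsets only

class Printer(cp_model.CpSolverSolutionCallback):
    def on_solution_callback(self):
        print({i for i in range(n_items) if self.Value(item[i])})

solver = cp_model.CpSolver()
solver.parameters.enumerate_all_solutions = True
solver.Solve(model, Printer())
```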

14.
15.
The features of collaborative robots (cobots), such as light weight, easy programming, and flexibility, meet the production automation requirements of SMEs. However, SME production usually takes place in semi-structured or cluttered environments, which raises major challenges in implementing cobot systems there, for instance increasing the visual perception of cobots, handling diverse tasks, and deploying cobot systems quickly. We therefore propose an automation framework that addresses these challenges with cobots to facilitate SME production. First, a learning-based vision system is developed and implemented, with You Only Look Once (YOLOv5) for object detection and a Convolutional Neural Network cascaded with a Support Vector Machine (CNN-SVM) for product quality control. Then, a multi-functional gripper system is designed and fabricated that can perform multiple operations and tasks without tool changing and can tolerate a certain level of change in the environment. After that, a digital twin of the robotic system is developed, which saves the system developer time in troubleshooting and debugging and gives customers a customized model with all the required elements and functions before system deployment. Finally, onsite testing of the integrated system was conducted in collaboration with our SME industrial partner; the results show that the cobot system performs the automated production process well and accurately. It is feasible to extend the application of such a cobot system to other SME production settings.
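For orientation, a minimal sketch of the detection component: loading a pretrained YOLOv5 model through torch.hub, as the ultralytics/yolov5 repository documents. The image filename is a hypothetical placeholder; the paper's custom-trained weights, gripper control, and CNN-SVM quality stage are not reproduced.

```python
# Pretrained YOLOv5 inference via torch.hub (downloads weights on first use).
import torch

model = torch.hub.load("ultralytics/yolov5", "yolov5s", pretrained=True)

results = model("workstation.jpg")          # hypothetical image: path, URL, or array
detections = results.pandas().xyxy[0]       # one DataFrame per input image
print(detections[["name", "confidence", "xmin", "ymin", "xmax", "ymax"]])
```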

16.
Consider the problem of scheduling a task set τ of implicit-deadline sporadic tasks to meet all deadlines on a t-type heterogeneous multiprocessor platform where tasks may access multiple shared resources. The multiprocessor platform has m_k processors of type-k, where k ∈ {1, 2, …, t}. The execution time of a task depends on the type of processor on which it executes. The set of shared resources is denoted by R. For each task τ_i, there is a resource set R_i ⊆ R such that for each job of τ_i, during one phase of its execution, the job requests to hold the resource set R_i exclusively, with the interpretation that (i) the job makes a single request to hold all the resources in the resource set R_i and (ii) at all times, when a job of τ_i holds R_i, no other job holds any resource in R_i. Each job of task τ_i may request the resource set R_i at most once during its execution. A job is allowed to migrate when it requests a resource set and when it releases the resource set, but is not allowed to migrate at other times. Our goal is to design a scheduling algorithm for this problem and prove its performance. We propose an algorithm, LP-EE-vpr, which offers the following guarantee: if an implicit-deadline sporadic task set is schedulable on a t-type heterogeneous multiprocessor platform by an optimal scheduling algorithm that allows a job to migrate only when it requests or releases a resource set, then our algorithm also meets the deadlines, with the same restriction on job migration, if given processors $4 \times (1 + \operatorname{MAXP} \times \lceil \frac{|P| \times \operatorname{MAXP}}{\min\{m_1, m_2, \ldots, m_t\}} \rceil)$ times as fast. (Here $\operatorname{MAXP}$ and $|P|$ are computed from the resource sets that tasks request.) For the special case where each task requests at most one resource, the bound of LP-EE-vpr collapses to $4 \times (1 + \lceil \frac{|R|}{\min\{m_1, m_2, \ldots, m_t\}} \rceil)$. To the best of our knowledge, LP-EE-vpr is the first algorithm with a proven performance guarantee for real-time scheduling of sporadic tasks with resource sharing on t-type heterogeneous multiprocessors.
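A small numeric sketch of the two speed bounds quoted above; the MAXP, |P|, and |R| values plugged in are placeholders, whereas in the paper they are derived from the tasks' resource sets.

```python
# Evaluating the LP-EE-vpr processor-speed bounds for sample parameters.
from math import ceil

def speedup_bound(maxp, p_count, machine_counts):
    """General bound: 4 * (1 + MAXP * ceil(|P| * MAXP / min_k m_k))."""
    return 4 * (1 + maxp * ceil(p_count * maxp / min(machine_counts)))

def speedup_bound_single_resource(r_count, machine_counts):
    """Special case where each task requests at most one resource."""
    return 4 * (1 + ceil(r_count / min(machine_counts)))

print(speedup_bound(maxp=2, p_count=3, machine_counts=[4, 2, 8]))      # -> 28
print(speedup_bound_single_resource(r_count=3, machine_counts=[4, 2])) # -> 12
```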

17.
A gradual shot transition detection algorithm based on finite automata
Gradual shot transition detection algorithms involve two aspects: judging gradual-transition boundary frames and combining boundary frames. The former decides whether a given frame satisfies the conditions of a gradual-transition boundary frame; the latter decides whether a video segment containing boundary frames constitutes a gradual transition. Previous algorithms have concentrated on the judgment of boundary frames while neglecting their combination. This paper defines the concept of gradual-transition detection tolerance and proposes a finite-automaton-based method for detecting gradual shot transitions, exploiting the memory provided by the automaton's multiple states to improve the adaptability and robustness of the algorithm. In the shot boundary detection (SBD) task of TRECVID 2004, this gradual transition detection system ranked first in gradual transition detection performance.  相似文献
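A hedged sketch of a finite-automaton combiner in this spirit: a per-frame predicate flags candidate boundary frames, and the automaton tolerates short runs of non-boundary frames (the "tolerance") before deciding whether the accumulated segment was a gradual transition. The states, thresholds, and tolerance below are assumptions, not the paper's exact machine.

```python
# Two-state automaton combining boundary-frame flags into transition segments.
def detect_gradual(is_boundary_frame, min_len=5, tolerance=2):
    """is_boundary_frame: list of bools, one per frame. Returns (start, end) segments."""
    segments, state = [], "IDLE"
    start = misses = 0
    # Trailing False padding forces any open segment to be finalized.
    for i, b in enumerate(is_boundary_frame + [False] * (tolerance + 1)):
        if state == "IDLE":
            if b:
                state, start, misses = "IN_TRANSITION", i, 0
        elif state == "IN_TRANSITION":
            if b:
                misses = 0
            else:
                misses += 1
                if misses > tolerance:               # run ended; accept or reject
                    end = i - misses
                    if end - start + 1 >= min_len:
                        segments.append((start, end))
                    state = "IDLE"
    return segments

flags = [False] * 3 + [True] * 4 + [False] + [True] * 3 + [False] * 5
print(detect_gradual(flags))   # one segment despite the single-frame gap
```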

18.
The first purpose of this paper is to describe a new mathematical approach to the computation of an irredundant primary decomposition of a given polynomial ideal I. The presentation has three parts: a decomposition of the associated radical ideal of I into an intersection of prime ideals P_i; the determination of ideals I_i whose radical is prime (equal to P_i); and, finally, the extraction of the possible embedded components included in the I_i. The second purpose is to give an implementation of this algorithm via a new software component, called The Central Control, in which we implemented distributed algorithms performing the basic operations of algebraic geometry.
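For orientation only: a primary decomposition of a small polynomial ideal, computed here with SageMath's interface to Singular (the code must run inside a Sage environment) rather than with the paper's Central Control component; the example ideal is an assumption.

```python
# Primary decomposition of I = (x^2*y, x*y^2) over QQ[x, y] in SageMath.
from sage.all import QQ, PolynomialRing

R = PolynomialRing(QQ, ["x", "y"])
x, y = R.gens()
I = R.ideal([x**2 * y, x * y**2])

for Q in I.primary_decomposition():          # primary components Q_i
    print(Q, "radical:", Q.radical())        # associated primes sqrt(Q_i)
```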

19.
We consider the use of lock-free techniques for implementing shared objects in real-time Pfair-scheduled multiprocessor systems. Lock-free objects are more economical than locking techniques when implementing relatively simple objects such as buffers, stacks, queues, and lists. However, the use of such objects on real-time multiprocessors is generally considered impractical due to the need for analytical real-time guarantees. In this paper, we explain how the quantum-based nature of Pfair scheduling enables the effective use of such objects on real-time multiprocessors and present analysis specific to Pfair-scheduled systems. In addition, we show that analytical improvements can be obtained by using such objects in conjunction with group-based scheduling techniques. In this approach, a group of tasks is scheduled as a single entity (called a supertask in the Pfair literature). Such grouping prevents tasks from executing simultaneously, and hence from executing in parallel. Consequently, grouping tasks can improve the worst-case scenario with respect to object contention. Grouping also enables the use of less costly uniprocessor algorithms when all tasks sharing an object reside within the same group. We illustrate these optimizations with a case study that focuses on shared queues. Finally, we present and experimentally evaluate a simple heuristic for grouping tasks in order to reduce object contention. Though the analysis presented herein focuses specifically on Pfair-scheduled systems, the observations and techniques should be applicable to other quantum-scheduled systems as well.
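An illustrative sketch of the retry-loop pattern that such lock-free objects rely on, using a Treiber-style stack. CPython exposes no user-level compare-and-swap, so the `_cas` helper below simulates one with a lock purely to demonstrate the structure; on a real RTOS this would be a hardware atomic, and it is the number of retry-loop failures per quantum that the Pfair analysis bounds.

```python
# Treiber-style stack with a simulated CAS, showing the lock-free retry loop.
import threading

class Node:
    __slots__ = ("value", "next")
    def __init__(self, value, nxt=None):
        self.value, self.next = value, nxt

class LockFreeStack:
    def __init__(self):
        self.head = None
        self._cas_lock = threading.Lock()   # simulates CAS atomicity only

    def _cas(self, expected, new):
        with self._cas_lock:                # stand-in for a hardware CAS
            if self.head is expected:
                self.head = new
                return True
            return False

    def push(self, value):
        while True:                         # retry loop: interference-bounded
            old = self.head                 # under quantum-based scheduling
            if self._cas(old, Node(value, old)):
                return

    def pop(self):
        while True:
            old = self.head
            if old is None:
                return None
            if self._cas(old, old.next):
                return old.value

s = LockFreeStack()
s.push(1); s.push(2)
print(s.pop(), s.pop())   # 2 1
```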

20.
This paper introduces Hk-medoids, a modified version of the standard k-medoids algorithm. The modification extends the algorithm to the problem of clustering complex heterogeneous objects that are described by a diversity of data types, e.g. text, images, structured data, and time series. We first proposed an intermediary fusion approach, SMF, to calculate fused similarities between objects, taking into account the similarities between the component elements of the objects using appropriate similarity measures. The fusion approach entails uncertainty for incomplete objects and for objects whose distances diverge across the different components. Our implementation of Hk-medoids proposed here works with the fused distances and deals with the uncertainty in the fusion process. We experimentally evaluate the potential of our proposed algorithm using five datasets with different combinations of the data types that define the objects. Our results show the feasibility of our algorithm, and also show a performance enhancement compared to applying the original SMF approach in combination with a standard k-medoids algorithm that does not take uncertainty into account. In addition, from a theoretical point of view, our proposed algorithm has a lower computational complexity than the popular PAM implementation.
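A sketch of the overall shape of such a pipeline: per-component distance matrices are combined into one fused matrix (a plain weighted average stands in for SMF), and a basic k-medoids loop then runs on the fused distances. Hk-medoids' uncertainty handling is not reproduced, and the random matrices are placeholders for real per-type distances.

```python
# Fuse per-component distance matrices, then run basic k-medoids on the result.
import numpy as np

def k_medoids(D, k, iters=20, seed=0):
    """Basic alternating k-medoids on a precomputed distance matrix D."""
    rng = np.random.default_rng(seed)
    medoids = rng.choice(len(D), size=k, replace=False)
    for _ in range(iters):
        labels = np.argmin(D[:, medoids], axis=1)      # assign to nearest medoid
        new = medoids.copy()
        for c in range(k):                             # re-pick each medoid
            members = np.flatnonzero(labels == c)
            if members.size:
                within = D[np.ix_(members, members)].sum(axis=1)
                new[c] = members[np.argmin(within)]
        if np.array_equal(np.sort(new), np.sort(medoids)):
            break
        medoids = new
    return labels, medoids

rng = np.random.default_rng(1)
n = 60
D_text = rng.random((n, n)); D_text = (D_text + D_text.T) / 2    # stand-ins for
D_image = rng.random((n, n)); D_image = (D_image + D_image.T) / 2  # per-type distances
np.fill_diagonal(D_text, 0); np.fill_diagonal(D_image, 0)

D_fused = 0.5 * D_text + 0.5 * D_image      # simple stand-in for SMF fusion
labels, medoids = k_medoids(D_fused, k=3)
print(medoids, np.bincount(labels))
```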
