Similar Documents
20 similar documents found (search time: 46 ms)
1.
Industrial robots, often called "the jewel in the crown of manufacturing," are the most representative equipment in intelligent manufacturing. Faults and precision degradation are prominent problems in industrial robots, seriously affecting production safety and enterprises' economic returns. Fault diagnosis and health prediction methods for industrial robots have therefore become a research hotspot. This paper first briefly introduces the system composition of industrial robots and analyzes the typical failure modes of their core components; it then surveys fault diagnosis methods for industrial robots from both knowledge-driven and data-driven perspectives; next, it reviews performance-degradation monitoring and remaining-useful-life estimation methods at the component and system levels; finally, it discusses future development trends, arguing that intelligent diagnosis of industrial robots in a big-data environment will be the main direction of technical development.

2.
This paper analyzes the characteristics and research status of fault-diagnosis automation in the process industries and discusses the current difficulties and key technologies. It briefly introduces the general content of intelligent engineering, applies its theory and techniques, and discusses the design principles, system architecture, and implementation of an integrated intelligent fault-diagnosis system. Finally, taking the fault diagnosis of a caustic-soda production line at a chemical plant as an example, it describes our work on integrated intelligent fault-diagnosis systems.

3.
Research on a Multi-Agent System for Steel Industry Processes   Cited by: 2 (self-citations: 0, other citations: 2)
Drawing on the characteristics of steel-industry processes and an analysis of the state of intelligent control in metallurgical process industries, this paper proposes using multi-agent technology to build a multi-agent system for the steel industry and thereby realize distributed intelligent control. The necessity and feasibility of building such a system are analyzed, and its architecture and implementation method are presented.

4.
Because complex processes differ fundamentally from ordinary industrial processes, traditional fault-diagnosis methods that rely on an accurate mathematical model of the plant rarely give satisfactory results. Intelligent techniques, by contrast, do not require an accurate plant model and can make full use of human experts' knowledge, so using intelligent fault-diagnosis models to develop diagnosis techniques suited to complex processes is both necessary and feasible. By comparing the intelligent fault-diagnosis models most studied at present, this paper proposes an integrated intelligent diagnosis model based on fuzzy logic, neural networks, and expert systems to solve the fault-diagnosis problem for complex processes effectively. It analyzes the model's composition, structure, and network learning algorithm, providing a new avenue for research on fault diagnosis of complex processes.

5.
This paper studies intelligent fault-diagnosis techniques based on characteristic values: it details how characteristic values are extracted and how characteristic parameters are constructed and defined, and it describes the model structure and development strategy of an intelligent fault-diagnosis system.

6.
With the continuing development of Internet technology in recent years, big-data technology has been applied across industries and is now directly tied to current Internet and Internet-of-Things construction, making it one of the key technologies in the design of intelligent IoT information-acquisition systems. Research on intelligent IoT information-acquisition methods in a big-data environment has therefore become a priority, and this paper studies such methods in detail.

7.
The rapid growth of surveillance video data has left existing systems unable to meet society's supervision needs, motivating this study of big-data applications in intelligent supervision-center systems. The study focuses on applying big-data technology to the intelligent video-analysis module of such systems, designing target-behavior-analysis software based on big data that comprises a surveillance-video data-processing layer and an application layer. The data-processing layer uses a data-conditioning technique, the Retinex algorithm, to denoise and enhance the video images; the surveillance…

8.
A Survey and Outlook of Intelligent Fault-Diagnosis Techniques   Cited by: 6 (self-citations: 0, other citations: 6)
Intelligent fault diagnosis is knowledge-based fault-pattern recognition in the field of artificial intelligence, and its development provides solid technical support for keeping industrial equipment in continuous normal operation. This paper first explains the research significance and basic concepts of fault diagnosis; it then summarizes several widely used intelligent fault-diagnosis techniques in terms of principles, applications, strengths and weaknesses, and improvements; finally, it analyzes these techniques, offers an outlook, and gives advice on choosing among them.

9.
Industry and Mine Automation, 2013(11): 106-109
Existing hoist monitoring systems are limited to local, on-site monitoring, so some faults are hard to detect and handle in time. To address this, a design is proposed for a remote monitoring and intelligent fault-diagnosis system for mine-hoist groups based on the Internet of Things. The system uses distributed real-time middleware to transmit operating data, enabling interactive communication among distributed processes in an asynchronous network environment and thus networked monitoring of mine-hoist groups. It also establishes a networked, interactive intelligent fault-diagnosis architecture for mine hoists, realizing interactive diagnostic reasoning, optimizing diagnosis methods, and improving diagnostic efficiency.

10.
Combining big-data technology with power-generation processes is the mainstream direction for thermal power plants at present. This paper first designs a condition-monitoring and fault-diagnosis system for generating equipment based on big-data mining, with an architecture divided into four layers: field-device layer, basic control layer, big-data intelligent control layer, and intelligent management layer. It then explores the system's closed-loop chain of "data acquisition → model development → condition monitoring → fault diagnosis → iterative feedback", and presents service workflows from both a knowledge-base and a big-data-analytics perspective, supporting the high-quality operation of power-generation enterprises.

11.
Michael Kearns, H. Sebastian Seung. Machine Learning, 1995, 18(2-3): 255-276
We introduce a new formal model in which a learning algorithm must combine a collection of potentially poor but statistically independent hypothesis functions in order to approximate an unknown target function arbitrarily well. Our motivation includes the question of how to make optimal use of multiple independent runs of a mediocre learning algorithm, as well as settings in which the many hypotheses are obtained by a distributed population of identical learning agents.
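The combining idea in this abstract can be illustrated with a minimal sketch. Everything below is invented for illustration, not taken from the paper: the threshold target, the 0.6 per-hypothesis accuracy, and all names; it only shows that independent hypotheses, each slightly better than chance, become a strong predictor under majority vote.

```python
import random

def majority_vote(hypotheses):
    """Combine independent binary hypotheses by unweighted majority vote."""
    def combined(x):
        votes = sum(h(x) for h in hypotheses)
        return 1 if votes > len(hypotheses) / 2 else 0
    return combined

# Invented setup: each "mediocre" hypothesis agrees with a simple threshold
# target with probability 0.6, independently of the others.
random.seed(0)

def target(x):
    return 1 if x >= 0 else 0

def noisy_hypothesis(p=0.6):
    def h(x):
        return target(x) if random.random() < p else 1 - target(x)
    return h

hypotheses = [noisy_hypothesis() for _ in range(101)]
voter = majority_vote(hypotheses)

xs = [random.uniform(-1, 1) for _ in range(500)]
acc_single = sum(hypotheses[0](x) == target(x) for x in xs) / len(xs)
acc_vote = sum(voter(x) == target(x) for x in xs) / len(xs)
print(acc_single, acc_vote)
```

With 101 independent 0.6-accurate voters, the vote's accuracy climbs well above any single hypothesis, in the spirit of the model's "arbitrarily good approximation" goal.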

12.
Peter Auer, Philip M. Long, Wolfgang Maass, Gerhard J. Woeginger. Machine Learning, 1995, 18(2-3): 187-230
The majority of results in computational learning theory are concerned with concept learning, i.e. with the special case of function learning for classes of functions with range {0, 1}. Much less is known about the theory of learning functions with a larger range such as ℕ or ℝ. In particular, relatively few results exist about the general structure of common models for function learning, and there are only very few nontrivial function classes for which positive learning results have been exhibited in any of these models. We introduce in this paper the notion of a binary branching adversary tree for function learning, which allows us to give a somewhat surprising equivalent characterization of the optimal learning cost for learning a class of real-valued functions (in terms of a max-min definition which does not involve any learning model). Another general structural result of this paper relates the cost for learning a union of function classes to the learning costs for the individual function classes. Furthermore, we exhibit an efficient learning algorithm for learning convex piecewise linear functions from ℝ^d into ℝ. Previously, the class of linear functions from ℝ^d into ℝ was the only class of functions with multidimensional domain that was known to be learnable within the rigorous framework of a formal model for online learning. Finally we give a sufficient condition for an arbitrary class of functions from ℝ into ℝ that allows us to learn the class of all functions that can be written as the pointwise maximum of k functions from that class. This allows us to exhibit a number of further nontrivial classes of functions from ℝ into ℝ for which there exist efficient learning algorithms.

13.
Transfer in variable-reward hierarchical reinforcement learning   Cited by: 2 (self-citations: 1, other citations: 1)
Transfer learning seeks to leverage previously learned tasks to achieve faster learning in a new task. In this paper, we consider transfer learning in the context of related but distinct Reinforcement Learning (RL) problems. In particular, our RL problems are derived from Semi-Markov Decision Processes (SMDPs) that share the same transition dynamics but have different reward functions that are linear in a set of reward features. We formally define the transfer learning problem in the context of RL as learning an efficient algorithm to solve any SMDP drawn from a fixed distribution after experiencing a finite number of them. Furthermore, we introduce an online algorithm to solve this problem, Variable-Reward Reinforcement Learning (VRRL), that compactly stores the optimal value functions for several SMDPs, and uses them to optimally initialize the value function for a new SMDP. We generalize our method to a hierarchical RL setting where the different SMDPs share the same task hierarchy. Our experimental results in a simplified real-time strategy domain show that significant transfer learning occurs in both flat and hierarchical settings. Transfer is especially effective in the hierarchical setting where the overall value functions are decomposed into subtask value functions which are more widely amenable to transfer across different SMDPs.
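The value-transfer idea can be sketched in miniature. This is not the authors' SMDP formulation: the toy below is an ordinary MDP with invented transitions and features, rewards linear in two features with task weights w, and a library of two solved source tasks. It only illustrates the core trick: for a fixed policy π, the value function factors as V(π, w) = Φ(π) @ w, so stored per-feature components Φ can warm-start value iteration for a new weight vector.

```python
import numpy as np

# Toy stand-in (invented): 5 states, 2 actions, 2 reward features; all tasks
# share transitions P and features F and differ only in reward weights w.
n_states, n_actions, n_feats, gamma = 5, 2, 2, 0.9
rng = np.random.default_rng(1)
P = rng.dirichlet(np.ones(n_states), size=(n_states, n_actions))
F = rng.random((n_states, n_feats))

def value_iteration(w, V0=None, tol=1e-10):
    """Solve the task with reward r(s) = w . F[s]; count sweeps to converge."""
    V = np.zeros(n_states) if V0 is None else V0.copy()
    r = F @ w
    sweeps = 0
    while True:
        V_new = (r[:, None] + gamma * np.einsum('sat,t->sa', P, V)).max(axis=1)
        sweeps += 1
        if np.abs(V_new - V).max() < tol:
            return V_new, sweeps
        V = V_new

def greedy_policy(w, V):
    r = F @ w
    return np.argmax(r[:, None] + gamma * np.einsum('sat,t->sa', P, V), axis=1)

def feature_values(policy):
    """Per-feature discounted accumulations under a fixed policy:
    Phi = (I - gamma * P_pi)^{-1} F, so V(pi, w) = Phi @ w for any w."""
    P_pi = P[np.arange(n_states), policy]
    return np.linalg.solve(np.eye(n_states) - gamma * P_pi, F)

# Library: per-feature value components of two solved source tasks.
library = []
for w in [np.array([1.0, 0.0]), np.array([0.0, 1.0])]:
    V, _ = value_iteration(w)
    library.append(feature_values(greedy_policy(w, V)))

# New task: initialize from the elementwise max of the stored policies'
# values under w_new (a lower bound on the new task's optimal values).
w_new = np.array([0.8, 0.2])
V_init = np.max([Phi @ w_new for Phi in library], axis=0)
V_cold, cold = value_iteration(w_new)
V_warm, warm = value_iteration(w_new, V0=V_init)
print(cold, warm)
```

Both runs converge to the same values, but the warm start needs far fewer sweeps, which is the kind of speedup the paper reports from initializing a new SMDP's value function from stored ones.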

14.
This article studies self-directed learning, a variant of the on-line (or incremental) learning model in which the learner selects the presentation order for the instances. Alternatively, one can view this model as a variation of learning with membership queries in which the learner is only charged for membership queries for which it could not predict the outcome. We give tight bounds on the complexity of self-directed learning for the concept classes of monomials, monotone DNF formulas, and axis-parallel rectangles in {0, 1, …, n − 1}^d. These results demonstrate that the number of mistakes under self-directed learning can be surprisingly small. We then show that learning complexity in the model of self-directed learning is less than that of all other commonly studied on-line and query learning models. Next we explore the relationship between the complexity of self-directed learning and the Vapnik-Chervonenkis (VC) dimension. We show that, in general, the VC-dimension and the self-directed learning complexity are incomparable. However, for some special cases, we show that the VC-dimension gives a lower bound for the self-directed learning complexity. Finally, we explore a relationship between Mitchell's version space algorithm and the existence of self-directed learning algorithms that make few mistakes.

15.
In this paper we initiate an investigation of generalizations of the Probably Approximately Correct (PAC) learning model that attempt to significantly weaken the target function assumptions. The ultimate goal in this direction is informally termed agnostic learning, in which we make virtually no assumptions on the target function. The name derives from the fact that as designers of learning algorithms, we give up the belief that Nature (as represented by the target function) has a simple or succinct explanation. We give a number of positive and negative results that provide an initial outline of the possibilities for agnostic learning. Our results include hardness results for the most obvious generalization of the PAC model to an agnostic setting, an efficient and general agnostic learning method based on dynamic programming, relationships between loss functions for agnostic learning, and an algorithm for a learning problem that involves hidden variables.
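A concrete instance of the agnostic setting can be sketched as follows. The toy data, the threshold class, and the 10% noise rate are all invented, not from the paper: the point is only that the learner makes no assumption that the target lies in its class (here, no threshold fits the noisy labels perfectly) and simply minimizes empirical error, competing with the best hypothesis in the class.

```python
import random

def erm(thresholds, sample):
    """Agnostic learner for 1-D thresholds: empirical risk minimization.
    No assumption that any threshold explains the data perfectly."""
    def emp_err(t):
        return sum((x >= t) != bool(y) for x, y in sample)
    return min(thresholds, key=emp_err)

# Invented noisy data: the best explanation is the threshold at 0.5, but
# 10% of labels are flipped, so the target is outside the hypothesis class.
random.seed(2)

def label(x):
    y = int(x >= 0.5)
    return y if random.random() < 0.9 else 1 - y

sample = [(x, label(x)) for x in (random.random() for _ in range(2000))]
thresholds = [i / 20 for i in range(21)]
best_t = erm(thresholds, sample)
print(best_t)
```

ERM recovers a threshold near 0.5 even though its empirical error stays around the 10% noise floor, which is exactly the guarantee the agnostic model asks for.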

16.
Liu Xiao, Mao Ning. Journal of Data Acquisition and Processing, 2015, 30(6): 1310-1317
A learning automaton (LA) is an adaptive decision-maker that, through continual interaction with a random environment, learns to select the optimal action from an allowed action set. In most traditional LA models the action set is finite, so continuous-parameter learning problems require discretizing the action space, and learning accuracy depends on the discretization granularity. This paper proposes a new continuous-action-set learning automaton (CALA) whose action set is a variable interval and whose output actions are drawn from it uniformly. The learning algorithm uses binary feedback signals from the environment to adaptively update the interval's endpoints. A simulation on a multimodal learning problem demonstrates the new algorithm's advantage over three existing CALA algorithms.

17.
Applying Supervision of Varying Degrees to Automatic Text Classification   Cited by: 1 (self-citations: 0, other citations: 1)
Automatic text classification involves information retrieval, pattern recognition, and machine learning. Organized by degree of supervision, this paper surveys several methods spanning fully supervised, unsupervised, and semi-supervised learning strategies: NBC (Naive Bayes Classifier), FCM (Fuzzy C-Means), SOM (Self-Organizing Map), ssFCM (semi-supervised Fuzzy C-Means), and gSOM (guided Self-Organizing Map), and applies them to text classification. gSOM is a semi-supervised variant that we developed from SOM. Using the Reuters-21578 corpus, we study how the degree of supervision affects classification performance and offer recommendations for practical text-classification work.

18.
We study a model of probably exactly correct (PExact) learning that can be viewed either as the Exact model (learning from equivalence queries only) relaxed so that counterexamples to equivalence queries are distributionally drawn rather than adversarially chosen or as the probably approximately correct (PAC) model strengthened to require a perfect hypothesis. We also introduce a model of probably almost exactly correct (PAExact) learning that requires a hypothesis with negligible error and thus lies between the PExact and PAC models. Unlike the Exact and PExact models, PAExact learning is applicable to classes of functions defined over infinite instance spaces. We obtain a number of separation results between these models. Of particular note are some positive results for efficient parallel learning in the PAExact model, which stand in stark contrast to earlier negative results for efficient parallel Exact learning.

19.
Massive Open Online Courses (MOOCs) require individual learners to self-regulate their own learning, determining when, how and with what content and activities they engage. However, MOOCs attract a diverse range of learners, from a variety of learning and professional contexts. This study examines how a learner's current role and context influences their ability to self-regulate their learning in a MOOC: Introduction to Data Science offered by Coursera. The study compared the self-reported self-regulated learning behaviour between learners from different contexts and with different roles. Significant differences were identified between learners who were working as data professionals or studying towards a higher education degree and other learners in the MOOC. The study provides an insight into how an individual's context and role may impact their learning behaviour in MOOCs.

20.
Ashwin Ram. Machine Learning, 1993, 10(3): 201-248
This article describes how a reasoner can improve its understanding of an incompletely understood domain through the application of what it already knows to novel problems in that domain. Case-based reasoning is the process of using past experiences stored in the reasoner's memory to understand novel situations or solve novel problems. However, this process assumes that past experiences are well understood and provide good lessons to be used for future situations. This assumption is usually false when one is learning about a novel domain, since situations encountered previously in this domain might not have been understood completely. Furthermore, the reasoner may not even have a case that adequately deals with the new situation, or may not be able to access the case using existing indices. We present a theory of incremental learning based on the revision of previously existing case knowledge in response to experiences in such situations. The theory has been implemented in a case-based story understanding program that can (a) learn a new case in situations where no case already exists, (b) learn how to index the case in memory, and (c) incrementally refine its understanding of the case by using it to reason about new situations, thus evolving a better understanding of its domain through experience. This research complements work in case-based reasoning by providing mechanisms by which a case library can be automatically built for use by a case-based reasoning program.


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号