Similar Documents
20 similar documents retrieved (search time: 46 ms)
1.
An effective neuro-fuzzy paradigm for machinery condition health monitoring
An innovative neuro-fuzzy network appropriate for fault detection and classification in a machinery condition health monitoring environment is proposed. The network, called an incremental learning fuzzy neural (ILFN) network, uses localized neurons to represent the distributions of the input space and is trained using a one-pass, online, incremental learning algorithm that is fast and can operate in real time. The ILFN network employs a hybrid supervised and unsupervised learning scheme to generate its prototypes. The network is a self-organized structure with the ability to adaptively learn new classes of failure modes and update its parameters continuously while monitoring a system. To demonstrate the feasibility and effectiveness of the proposed network, numerical simulations were performed on well-known benchmark data sets such as Fisher's Iris data and the Deterding vowel data set. Comparison studies with other well-known classifiers found the ILFN network competitive with, or even superior to, many existing classifiers. To assess its efficiency in machinery condition health monitoring, the ILFN network was applied to the Westland vibration data set collected from a U.S. Navy CH-46E helicopter test stand. Using a simple fast Fourier transform (FFT) technique for feature extraction, the ILFN network showed promising results: with various torque levels used for training, 100% correct classification was achieved for the same torque levels in the test data.
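The abstract names only a simple FFT technique for feature extraction. As a hedged sketch (the band layout and averaging are assumptions, not details from the paper), a vibration segment can be reduced to a short magnitude-spectrum feature vector like this:

```python
import numpy as np

def fft_band_features(signal, n_bands=16):
    """Toy FFT feature extractor: average magnitude spectrum over fixed bands.

    The band layout is an illustrative assumption; the paper only states that
    a simple FFT technique was used for feature extraction.
    """
    spectrum = np.abs(np.fft.rfft(signal))     # one-sided magnitude spectrum
    bands = np.array_split(spectrum, n_bands)  # contiguous frequency bands
    return np.array([band.mean() for band in bands])

# Example: a 1-second vibration segment sampled at 1 kHz
segment = np.random.randn(1000)
features = fft_band_features(segment)
print(features.shape)  # (16,)
```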

2.
Loop closure detection is important for a visual simultaneous localization and mapping (VSLAM) system to reduce accumulated error and to relocalize. To shorten the online running time of loop closure detection while still meeting precision and recall requirements, a fast loop closure detection algorithm based on a broad autoencoder (fast loop closure detection-broad autoencoder, FLCD-BA) is proposed. The detector improves the broad learning network so that it learns features from the input data in an unsupervised manner and then applies them to the loop closure detection task. Unlike conventional deep learning methods, the network solves its weight matrix with a ridge-regression pseudo-inverse and is rapidly reconstructed through incremental learning, avoiding retraining of the whole network. The proposed algorithm was evaluated on three public data sets; it requires no GPU, and its training time is much shorter than that of the bag-of-words model and deep learning methods. Experimental results show that the algorithm achieves high precision and recall in loop closure detection, with an average running time of only 21 ms per frame, providing a new loop closure detection algorithm for visual SLAM systems.
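The ridge-regression pseudo-inverse solution of the weight matrix mentioned above can be written in closed form. The numpy sketch below shows that step in isolation; the node layout and the regularizer value are illustrative assumptions.

```python
import numpy as np

def ridge_pseudo_inverse_weights(A, Y, lam=1e-3):
    """Solve output weights W of a broad network by ridge regression.

    A: (n_samples, n_hidden) concatenated feature/enhancement node outputs.
    Y: (n_samples, n_outputs) targets (e.g., reconstruction targets).
    The closed form W = (A^T A + lam*I)^{-1} A^T Y replaces iterative
    backpropagation; lam is the ridge regularizer.
    """
    n_hidden = A.shape[1]
    return np.linalg.solve(A.T @ A + lam * np.eye(n_hidden), A.T @ Y)

# Toy usage: map 256-dim node activations to 64-dim targets
A = np.random.randn(500, 256)
Y = np.random.randn(500, 64)
W = ridge_pseudo_inverse_weights(A, Y)
print(W.shape)  # (256, 64)
```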

3.
In this paper, an Ellipsoid ARTMAP (EAM) network model based on an incremental learning algorithm is proposed to realize online learning and tool condition monitoring. The main characteristic of the EAM model is that hyper-ellipsoids are used for the geometric representation of categories, which depicts the sample distribution robustly and accurately. Meanwhile, an adaptive-resonance-based strategy updates each hyper-ellipsoid node locally and monotonically. The model therefore has strong incremental learning ability, which guarantees that the constructed classifier can learn new knowledge without forgetting the original information. Based on the incremental EAM model, a tool condition monitoring system is realized. In this system, features are first extracted from the force and vibration signals to capture the dynamics of the tool wear process. Then, the fast correlation based filter (FCBF) method is introduced to select a minimally redundant feature set adaptively, decreasing feature redundancy and improving classifier robustness. Based on the selected features, an EAM-based incremental classifier is constructed to recognize the tool wear states. To show the effectiveness of the proposed method, multi-tooth milling experiments on Ti-6Al-4V alloy were carried out. Moreover, to estimate the generalization error of the classifiers accurately, five-fold cross validation is used. Compared with the commonly used Fuzzy ARTMAP (FAM) classifier, the average recognition rate of the initial EAM classifier reaches 98.67%, higher than that of FAM. The incremental learning ability of EAM is also analyzed and compared with FAM using new data from different cutting passes and a new tool wear category. The results show that the updated EAM classifier achieves higher classification accuracy on the original knowledge while effectively learning the new knowledge online.

4.
Constructive neural network models map samples onto the unit hypersphere and recognize them with a covering method, giving fast computation, high recognition rates, and a clear geometric interpretation. However, the commonly used cross-cover approach is difficult to modify or reinforce once the initial construction is complete, which limits the network's ability to relearn. This paper proposes a double cross-cover method for such constructive neural networks that retains the advantages of the original cross-cover method while providing good relearning capability. Experiments show that the method can be applied effectively to incremental learning in constructive neural networks.

5.
An incremental online semi-supervised active learning algorithm based on a self-organizing incremental neural network (SOINN) is proposed. This paper describes an improvement of the two-layer SOINN into a single-layer SOINN that represents the topological structure of the input data and separates the generated nodes into different groups and subclusters. We then actively label some teacher nodes and use them to label all unlabeled nodes. The proposed method can learn from both labeled and unlabeled samples. It queries the labels of important samples rather than selecting labeled samples randomly. It requires no prior knowledge, such as the number of nodes or the number of classes, and automatically learns the number of nodes and teacher vectors required for the current task. Moreover, it realizes online incremental learning. Experiments with artificial data and real-world data show that the proposed method performs effectively and efficiently.

6.
刘建军  胡卫东  郁文贤 《计算机仿真》2009,26(7):192-194,227
To give RBF networks incremental learning capability and to improve the robustness of that incremental learning, an incremental learning algorithm for RBF networks is presented. The algorithm first clusters the initial data set to obtain the initial RBF network structure, and then dynamically adjusts the structure using the hidden-node adjustment strategy of the GAP-RBF algorithm to realize incremental learning. Initializing the RBF network in this way reduces the influence of the training order of the initial samples on network performance and makes the incremental learning more robust. Simulation experiments on the IRIS data set and a measured radar data set show that the algorithm has good incremental learning capability.
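A minimal sketch of the clustering-based initialization described above, assuming k-means centers, a single shared width, and ridge-solved output weights; the GAP-RBF node growing and pruning used in the incremental phase is not reproduced.

```python
import numpy as np
from sklearn.cluster import KMeans

def init_rbf_from_clustering(X, y_onehot, n_centers=10, lam=1e-3):
    """Initialize an RBF network from a clustering of the initial data set.

    Centers come from k-means; one shared width is set from the average
    inter-center distance (an assumption); output weights are solved in
    closed form by ridge regression.
    """
    km = KMeans(n_clusters=n_centers, n_init=10).fit(X)
    centers = km.cluster_centers_
    dists = np.linalg.norm(centers[:, None] - centers[None, :], axis=-1)
    sigma = dists[dists > 0].mean()
    # hidden activations: Gaussian RBF responses of every sample to every center
    H = np.exp(-np.linalg.norm(X[:, None] - centers[None, :], axis=-1) ** 2
               / (2 * sigma ** 2))
    W = np.linalg.solve(H.T @ H + lam * np.eye(n_centers), H.T @ y_onehot)
    return centers, sigma, W
```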

7.
Big data changes rapidly: both its content and its distribution are in constant flux. Current feedforward neural network models are static learners that do not support incremental updates, making it difficult to learn the features of dynamically changing big data in real time. To address this, a big-data feature learning model that supports incremental updates is proposed. An optimization objective is designed for fast incremental parameter updates, and the squared error is minimized so that the network preserves its original knowledge during the update. For data whose features change frequently, the network structure is updated by adding hidden-layer neurons, so the updated network can learn the features of dynamically changing big data in real time. After the parameters and structure are updated, the network is further optimized by an SVD decomposition of the weight matrix that removes redundant connections and strengthens the model's generalization ability. Experimental results show that the proposed model can learn the features of dynamic big data in real time by continually updating the network's parameters and structure while preserving as much of its original knowledge as possible.
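The final pruning step, removing redundant connections through an SVD of the weight matrix, can be illustrated by a low-rank truncation. The energy-based rank selection below is an assumption for illustration only.

```python
import numpy as np

def svd_prune_weights(W, energy=0.95):
    """Replace a weight matrix with a low-rank approximation.

    Singular values are kept until their cumulative squared sum reaches the
    requested fraction of the total energy; the remaining directions are
    treated as redundant connections and dropped.
    """
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    cum = np.cumsum(s ** 2) / np.sum(s ** 2)
    r = int(np.searchsorted(cum, energy)) + 1
    return (U[:, :r] * s[:r]) @ Vt[:r, :]

W = np.random.randn(128, 64)
W_pruned = svd_prune_weights(W)
print(np.linalg.matrix_rank(W_pruned))  # effective rank after pruning
```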

8.
An online learning algorithm for belief network structure
刘启元  张聪  沈一栋  汪成亮 《软件学报》2002,13(12):2297-2304
A new online learning algorithm for belief network structure is proposed. Its core idea is to incrementally modify the structure and parameters of the belief network with new samples so as to gradually approximate the true model. The algorithm has two steps. First, the current belief network model is incrementally modified using a parameter-update rule and three structure-update rules (edge addition, edge deletion, and edge reversal), combined with the newly collected samples. Then, a result-selection criterion picks a suitable belief network from the set of candidate networks produced by the incremental modification as the result of this iteration; the chosen network strikes a reasonable trade-off between consistency with the current samples and distance from the previous model. Experimental results show that the algorithm performs online learning of belief network structure effectively. Because online learning requires no historical samples, …
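A hedged sketch of the three structure-update operators (edge addition, deletion, and reversal) is shown below; the parameter-update rule, acyclicity checking, and the result-selection criterion that trades off data consistency against distance from the previous model are not reproduced.

```python
import itertools

def candidate_structures(edges, nodes):
    """Enumerate neighbour structures via the three operators the paper names:
    add an edge, delete an edge, or reverse an edge.

    `edges` is a set of (parent, child) pairs; cycle checking is omitted here
    and would be required in a real belief-network learner.
    """
    edges = set(edges)
    candidates = []
    for u, v in itertools.permutations(nodes, 2):
        if (u, v) not in edges and (v, u) not in edges:
            candidates.append(edges | {(u, v)})            # add edge
    for e in edges:
        candidates.append(edges - {e})                      # delete edge
        candidates.append((edges - {e}) | {(e[1], e[0])})   # reverse edge
    return candidates

neighbours = candidate_structures({("A", "B")}, ["A", "B", "C"])
print(len(neighbours))  # 6 candidate structures
```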

9.
钟锐  吴怀宇  何云 《计算机科学》2018,45(6):308-313
Traditional face recognition models are trained offline, and the high dimensionality of face features limits real-time performance. This paper builds a fast face recognition algorithm from both the feature and the classifier sides. First, the SDM (Supervised Descent Method) algorithm locates facial landmarks, and a local MB-CSLBP (Multi-Block Center-Symmetric Local Binary Pattern) feature is extracted in the neighborhood of each landmark; concatenating the neighborhood features of all landmarks yields the proposed local fusion feature LFP-MB-CSLBP (Local Fusion feature of MB-CSLBP). These features are fed into a hierarchical incremental tree (HI-tree) to train the recognition model online. Because the HI-tree uses hierarchical clustering to realize incremental learning, the model can be trained online with high accuracy and low latency. Finally, the recognition rate and runtime of the algorithm are tested on three face databases and on face videos captured by a camera. Experimental results show that the proposed algorithm achieves a higher face recognition rate and better real-time performance than other current algorithms.

10.
This paper describes a novel feature selection algorithm for unsupervised clustering that combines the clustering ensembles method with the population based incremental learning algorithm. The main idea of the proposed unsupervised feature selection algorithm is to search for a subset of all features such that the clustering algorithm trained on this feature subset achieves the clustering solution most similar to the one obtained by an ensemble learning algorithm. In particular, a clustering solution is first obtained by a clustering ensembles method, and then the population based incremental learning algorithm is adopted to find the feature subset that best fits the obtained clustering solution. One advantage of the proposed unsupervised feature selection algorithm is that it is dimensionality-unbiased; in addition, it leverages the consensus across multiple clustering solutions. Experimental results on several real data sets demonstrate that the proposed unsupervised feature selection algorithm is often able to obtain a better feature subset than other existing unsupervised feature selection algorithms.
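As a hedged sketch of the search component, the snippet below runs a population-based incremental learning (PBIL) loop over binary feature masks. The fitness function (agreement of a k-means clustering with a fixed consensus clustering, measured by the adjusted Rand index) and the learning rate are illustrative assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import adjusted_rand_score

def pbil_feature_selection(X, consensus_labels, n_iters=50, pop=20, lr=0.1, k=3):
    """PBIL search for the feature subset whose k-means clustering best matches
    a given consensus clustering (fitness = adjusted Rand index)."""
    n_feat = X.shape[1]
    prob = np.full(n_feat, 0.5)              # probability of selecting each feature
    best_mask, best_fit = None, -np.inf
    for _ in range(n_iters):
        masks = np.random.rand(pop, n_feat) < prob
        masks[~masks.any(axis=1), 0] = True  # guarantee at least one feature
        fits = []
        for m in masks:
            labels = KMeans(n_clusters=k, n_init=5).fit_predict(X[:, m])
            fits.append(adjusted_rand_score(consensus_labels, labels))
        i_best = int(np.argmax(fits))
        if fits[i_best] > best_fit:
            best_fit, best_mask = fits[i_best], masks[i_best].copy()
        # move the probability vector toward the best mask of this generation
        prob = (1 - lr) * prob + lr * masks[i_best]
    return best_mask, best_fit
```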

11.
Deep-learning-based face recognition models are difficult to deploy on embedded devices and have poor real-time performance. After a study of existing model compression and acceleration algorithms, a neural network compression algorithm based on knowledge distillation and adversarial learning is proposed. The framework has three parts: a pre-trained large teacher network, a lightweight student network, and a discriminator that assists adversarial learning. The conventional knowledge distillation loss is improved by adding an indicator function so that the student network only learns the class probabilities of samples the teacher network classifies correctly. Because intermediate feature maps carry rich high-dimensional features, a discriminator from the adversarial learning strategy is introduced to distinguish the feature maps of the student from those of the teacher. To further improve the student network's generalization so that it can be applied to different machine vision tasks, the teacher and student networks learn from each other and are updated alternately in the second half of training, letting the student explore its own optimal solution space. The method is validated on the CASIA WEBFACE and CelebA data sets; experimental results show that the small student network obtained by distillation loses only about 1.5% recognition accuracy compared with the fully supervised teacher network. Compared with feature-map knowledge distillation and adversarial-training-based model compression algorithms, the proposed method achieves higher face recognition accuracy.
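The improved distillation loss is described only at a high level. One plausible reading, a temperature-softened KL term masked by an indicator of whether the teacher predicts the true label, is sketched below; the temperature value and the normalization are assumptions.

```python
import numpy as np

def masked_kd_loss(student_logits, teacher_logits, labels, T=4.0):
    """Indicator-masked knowledge distillation loss (illustrative reading).

    Only samples the teacher classifies correctly contribute to the
    temperature-softened KL divergence between teacher and student outputs.
    """
    def softmax(z):
        z = z - z.max(axis=1, keepdims=True)
        e = np.exp(z)
        return e / e.sum(axis=1, keepdims=True)

    p_t = softmax(teacher_logits / T)
    p_s = softmax(student_logits / T)
    # indicator: 1 only where the teacher predicts the true label
    mask = (teacher_logits.argmax(axis=1) == labels).astype(float)
    # per-sample KL(p_t || p_s)
    kl = (p_t * (np.log(p_t + 1e-12) - np.log(p_s + 1e-12))).sum(axis=1)
    return (mask * kl).sum() / max(mask.sum(), 1.0) * T * T
```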

12.
The incremental learning algorithm in the hybrid learning model HLM
The hybrid learning model HLM combines the concept acquisition algorithm HMCAP with the neural network algorithm FTART and can learn multiple concepts and continuous attributes. Its incremental learning algorithm is built on a binary hybrid decision tree structure and the FTART network: when new instances are added to the system, a single pass of incremental learning adjusts the existing structure instead of regenerating the decision tree and the neural network, improving learning accuracy with high speed and efficiency. This paper mainly describes the incremental learning algorithm in this model.

13.
Traditional incremental learning algorithms cannot handle newly collected samples that contain new features, so this paper designs a training algorithm that accommodates growth in the feature dimension of the samples. Building on the least squares support vector machine, a feature-incremental learning algorithm is proposed. The algorithm makes full use of the structural parameters of the previously trained classifier and applies least squares support vector machine learning only to the newly added features. Experimental results show that the algorithm maintains classification accuracy while effectively improving training speed and reducing storage requirements.
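For context, a least squares SVM can be trained by solving a single linear system, as in the minimal sketch below (regression-on-labels form). The paper's feature-incremental update, which reuses the existing classifier parameters and learns only on the newly added features, is not reproduced here.

```python
import numpy as np

def lssvm_train(X, y, gamma=10.0, sigma=1.0):
    """Least squares SVM (regression-on-labels form) solved as one linear system.

    K is an RBF kernel matrix; [b, alpha] solve
        [[0, 1^T], [1, K + I/gamma]] [b; alpha] = [0; y].
    """
    n = X.shape[0]
    sq = np.sum((X[:, None] - X[None, :]) ** 2, axis=-1)
    K = np.exp(-sq / (2 * sigma ** 2))
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = 1.0
    A[1:, 0] = 1.0
    A[1:, 1:] = K + np.eye(n) / gamma
    rhs = np.concatenate(([0.0], y.astype(float)))
    sol = np.linalg.solve(A, rhs)
    return sol[0], sol[1:]  # bias b, coefficients alpha
```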

14.
A fast incremental learning algorithm for support vector machines
Support vector machine learning is often hampered by excessively large data sets. Incremental learning splits the data into a historical sample set and a newly added sample set, exploits the geometric distribution of the historical set, defines a forgetting factor for each sample, and extracts those boundary vectors in the historical set that may become support vectors for the initial training. During incremental learning, the knowledge in the training samples is accumulated and samples are selectively discarded. Experimental results show that the algorithm improves training speed while preserving accuracy and generalization ability, and is suitable for large-scale classification and online learning problems.
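A hedged sketch of the boundary-vector idea for a binary problem: old samples near the current decision boundary are kept as candidate support vectors and retrained together with the new batch. The margin threshold is an assumption, and the forgetting factor described above is not implemented.

```python
import numpy as np
from sklearn.svm import SVC

def incremental_svm(X_old, y_old, X_new, y_new, margin=1.5):
    """Retrain an SVM on boundary vectors of the old set plus the new batch.

    Assumes a binary classification problem; samples whose absolute decision
    value falls below `margin` are treated as boundary vectors.
    """
    clf = SVC(kernel="rbf").fit(X_old, y_old)
    scores = np.abs(clf.decision_function(X_old))
    keep = scores < margin                      # candidate support vectors
    X_keep = np.vstack([X_old[keep], X_new])
    y_keep = np.concatenate([y_old[keep], y_new])
    return SVC(kernel="rbf").fit(X_keep, y_keep)
```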

15.
A self-constructing neural fuzzy inference network (SONFIN) with online learning ability is proposed in this paper. The SONFIN is inherently a modified Takagi-Sugeno-Kang (TSK)-type fuzzy rule-based model possessing neural network learning ability. There are no rules initially in the SONFIN; they are created and adapted as online learning proceeds via simultaneous structure and parameter identification. In the structure identification of the precondition part, the input space is partitioned flexibly according to an aligned clustering-based algorithm. As for the structure identification of the consequent part, only a singleton value selected by a clustering method is assigned to each rule initially; afterwards, additional significant terms selected via a projection-based correlation measure for each rule are added to the consequent part incrementally as learning proceeds. The combined precondition and consequent structure identification scheme sets up an economical and dynamically growing network, a main feature of the SONFIN. In the parameter identification, the consequent parameters are tuned optimally by either least mean squares or recursive least squares algorithms, and the precondition parameters are tuned by a backpropagation algorithm. To enhance the knowledge representation ability of the SONFIN, a linear transformation for each input variable can be incorporated into the network so that far fewer rules are needed or higher accuracy can be achieved.

16.
To address the shortcomings of the classical support vector machine in incremental learning, an incremental learning algorithm for the proximal support vector machine based on the cloud model is proposed. The method exploits the fast learning ability of the proximal support vector machine to generate an initial separating hyperplane, reduces the full training set together with a k-nearest-neighbor rule, and builds a cloud-model classifier on the resulting smaller reduced set to make classification decisions directly. The model is simple, requires no iterative solving, has low time complexity, is fairly robust to noise, and reflects the distribution of newly added samples well. Simulation experiments show that the algorithm maintains good classification accuracy and generalization ability while running fast.

17.
Feature selection is one of the key problems in pattern recognition and data mining: it removes redundant and irrelevant features from a data set to improve learning performance. Based on the maximum-relevance minimum-redundancy criterion, a new semi-supervised feature selection method based on relevance and redundancy analysis (S2R2) is proposed; S2R2 is independent of any particular classification algorithm. The method first analyzes and extends an unsupervised relevance measure and combines it with information gain to design a semi-supervised measure of feature relevance and redundancy that can effectively identify and remove irrelevant and redundant features. It then builds the feature subset greedily with an incremental search, avoiding a search over an exponentially large solution space and improving efficiency. A fast filter version of S2R2, FS2R2, is also proposed to better handle large-scale feature selection problems. Experimental results on several standard data sets demonstrate the effectiveness and superiority of the proposed methods.
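A generic supervised max-relevance min-redundancy greedy loop is sketched below for comparison; it uses plain mutual information on discrete features and does not reproduce the paper's semi-supervised S2R2 measure.

```python
import numpy as np
from sklearn.metrics import mutual_info_score

def mrmr_greedy(X_discrete, y, n_select=10):
    """Greedy max-relevance min-redundancy selection on discrete features.

    Relevance and redundancy are both measured with plain mutual information;
    assumes n_select does not exceed the number of features.
    """
    n_feat = X_discrete.shape[1]
    relevance = np.array([mutual_info_score(X_discrete[:, j], y)
                          for j in range(n_feat)])
    selected = [int(np.argmax(relevance))]
    while len(selected) < n_select:
        best_j, best_score = None, -np.inf
        for j in range(n_feat):
            if j in selected:
                continue
            redundancy = np.mean([mutual_info_score(X_discrete[:, j], X_discrete[:, s])
                                  for s in selected])
            score = relevance[j] - redundancy      # relevance minus redundancy
            if score > best_score:
                best_j, best_score = j, score
        selected.append(best_j)
    return selected
```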

18.
This paper presents a framework for incremental neural learning (INL) that allows a base neural learning system to incrementally learn new knowledge from only new data without forgetting the existing knowledge. Upon subsequent encounters of new data examples, INL utilizes prior knowledge to direct its incremental learning. A number of critical issues are addressed including when to make the system learn new knowledge, how to learn new knowledge without forgetting existing knowledge, how to perform inference using both the existing and the newly learnt knowledge, and how to detect and deal with aged learnt systems. To validate the proposed INL framework, we use backpropagation (BP) as a base learner and a multi-layer neural network as a base intelligent system. INL has several advantages over existing incremental algorithms: it can be applied to a broad range of neural network systems beyond the BP trained neural networks; it retains the existing neural network structures and weights even during incremental learning; the neural network committees generated by INL do not interact with one another and each sees the same inputs and error signals at the same time; this limited communication makes the INL architecture attractive for parallel implementation. We have applied INL to two vehicle fault diagnostics problems: end-of-line test in auto assembly plants and onboard vehicle misfire detection. These experimental results demonstrate that the INL framework has the capability to successfully perform incremental learning from unbalanced and noisy data. In order to show the general capabilities of INL, we also applied INL to three general machine learning benchmark data sets. The INL systems showed good generalization capabilities in comparison with other well known machine learning algorithms.

19.
Liang Shunpan, Pan Weiwei, You Dianlong, Liu Ze, Yin Ling 《Applied Intelligence》2022,52(12):13398-13414

Multi-label learning has attracted much attention. However, data streams, the continuous data generated by sensors, network access, and similar sources, bring challenges such as real-time processing, limited memory, and single-pass access. Several learning algorithms have been proposed for offline multi-label classification, but few works develop dynamic multi-label incremental learning models based on cascading schemes. Deep forests perform representation learning layer by layer without relying on backpropagation. Using this cascading scheme, this paper proposes a multi-label data stream deep forest (VDSDF) learning algorithm based on cascaded Very Fast Decision Tree (VFDT) forests, which can receive examples successively, perform incremental learning, and adapt to concept drift. Experimental results show that the proposed VDSDF algorithm, as an incremental classification algorithm, is more competitive than batch classification algorithms on multiple indicators. Moreover, in dynamic stream scenarios, VDSDF adapts to concept drift better than the comparison algorithms.


20.
Existing network representation learning algorithms fall mainly into two groups: those based on shallow neural networks and those based on neural matrix factorization, and the shallow-neural-network approaches have been shown to be equivalent to factorizing the feature matrix of the network structure. Moreover, most existing methods learn features only from the network structure, i.e., single-view representation learning, even though a network inherently contains multiple views. This paper therefore proposes a network representation learning algorithm based on multi-view integration (MVENR). The algorithm discards the neural network training process and instead brings the ideas of matrix information fusion and factorization into network representation learning. It effectively fuses the network's structure view, edge-weight view, and node-attribute view, remedying the common neglect of edge weights in existing network representation learning and alleviating the feature sparsity that arises when training on a single view. Experimental results show that MVENR outperforms several commonly used joint learning algorithms and structure-based network representation learning algorithms, making it a simple and efficient network representation learning algorithm.
