Similar Articles
20 similar articles found.
1.
2.
Incremental learning methods with retrieving of interfered patterns (cited 7 times; 0 self-citations, 7 by others)
There are many cases in which a neural-network-based system must memorize new patterns incrementally. However, if the network learns the new patterns only by referring to them, it is likely to forget old memorized patterns, since the parameters in the network usually correlate not only with the old memories but also with the new patterns. One sure way to avoid this loss of memories is to learn the new patterns together with all memorized patterns, but this requires large computational power. To solve this problem, we propose incremental learning methods with retrieval of interfered patterns (ILRI). In these methods, the system employs a modified version of the resource allocating network (RAN), a variant of the generalized radial basis function (GRBF) network. In ILRI, the RAN learns new patterns while relearning a small number of retrieved past patterns that are interfered with by the incremental learning. We construct ILRI in two steps. In the first step, we build a system that searches for the interfered patterns among past input patterns stored in a database. In the second step, we improve the first system so that it no longer needs the database; in this case, the system regenerates the input patterns approximately, in a random manner. Simulation results show that the two systems have almost the same ability and that their generalization ability is higher than that of other similar systems based on neural networks and k-nearest neighbors.
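The retrieval-and-relearn idea can be pictured with a toy radial-basis-function learner: when a new pattern arrives, past patterns whose receptive fields overlap it are retrieved and relearned together with it. The class below is a hypothetical illustration in plain NumPy, not the RAN-based ILRI implementation; the interference radius, learning rate, and one-centre-per-pattern simplification are all assumptions.

```python
import numpy as np

class InterferenceAwareRBF:
    """Toy RBF regressor that relearns past patterns interfered with by a new one.
    Illustrative only; not the RAN-based ILRI of the paper."""

    def __init__(self, width=1.0, lr=0.05, epochs=50, interference_radius=2.0):
        self.width = width
        self.lr = lr
        self.epochs = epochs
        self.radius = interference_radius
        self.centers = []          # one RBF centre per stored pattern (simplification)
        self.weights = []
        self.memory = []           # (x, y) pairs kept for retrieval

    def _phi(self, x):
        if not self.centers:
            return np.zeros(0)
        d = np.linalg.norm(np.asarray(self.centers) - x, axis=1)
        return np.exp(-(d ** 2) / (2 * self.width ** 2))

    def predict(self, x):
        phi = self._phi(np.asarray(x, float))
        return float(phi @ np.asarray(self.weights)) if len(self.weights) else 0.0

    def learn_incrementally(self, x, y):
        x = np.asarray(x, float)
        # 1. retrieve past patterns whose inputs overlap the new pattern's receptive field
        interfered = [(px, py) for px, py in self.memory
                      if np.linalg.norm(px - x) < self.radius]
        # 2. allocate a new unit for the new pattern (crude stand-in for RAN allocation)
        self.centers.append(x.copy())
        self.weights = list(self.weights) + [0.0]
        self.memory.append((x.copy(), float(y)))
        # 3. relearn the new pattern together with only the interfered ones
        batch = interfered + [(x, float(y))]
        w = np.asarray(self.weights, float)
        for _ in range(self.epochs):
            for bx, by in batch:
                phi = self._phi(bx)
                err = by - phi @ w
                w += self.lr * err * phi       # gradient step on squared error
        self.weights = w.tolist()
```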

3.
Incremental backpropagation learning networks (cited 2 times; 0 self-citations, 2 by others)
How to learn new knowledge without forgetting old knowledge is a key issue in designing an incremental-learning neural network. In this paper, we present a new incremental learning method for pattern recognition, called the "incremental backpropagation learning network", which employs bounded weight modification and structural adaptation learning rules and applies initial knowledge to constrain the learning process. The viability of this approach is demonstrated for classification problems including the iris and the promoter domains.
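A minimal way to picture "bounded weight modification" is to clip every weight so it cannot drift more than a fixed amount from its value before the current increment, which protects what the network already encodes while it absorbs new examples. The sketch below assumes a plain one-hidden-layer sigmoid network and a hypothetical bound parameter; it is not the paper's IBLN rules and omits its structural-adaptation and initial-knowledge components.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

class BoundedIncrementalMLP:
    """Toy one-hidden-layer net whose weights may drift only a bounded amount
    from their pre-increment values while learning new examples.
    A sketch of the 'bounded weight modification' idea, not the paper's method."""

    def __init__(self, n_in, n_hidden, n_out, bound=0.3, lr=0.1, rng=None):
        rng = np.random.default_rng(rng)
        self.W1 = rng.normal(0, 0.5, (n_hidden, n_in))
        self.W2 = rng.normal(0, 0.5, (n_out, n_hidden))
        self.bound = bound
        self.lr = lr

    def forward(self, x):
        h = sigmoid(self.W1 @ x)
        return sigmoid(self.W2 @ h), h

    def learn_increment(self, X_new, Y_new, epochs=100):
        # Anchor: weights as they were before this increment.
        W1_ref, W2_ref = self.W1.copy(), self.W2.copy()
        for _ in range(epochs):
            for x, y in zip(X_new, Y_new):
                o, h = self.forward(x)
                # standard backprop deltas for sigmoid units and squared error
                d_out = (o - y) * o * (1 - o)
                d_hid = (self.W2.T @ d_out) * h * (1 - h)
                self.W2 -= self.lr * np.outer(d_out, h)
                self.W1 -= self.lr * np.outer(d_hid, x)
                # bounded modification: never drift too far from the old knowledge
                self.W2 = np.clip(self.W2, W2_ref - self.bound, W2_ref + self.bound)
                self.W1 = np.clip(self.W1, W1_ref - self.bound, W1_ref + self.bound)
```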

4.
We consider the problems of computing aggregation queries in temporal databases and of maintaining materialized temporal aggregate views efficiently. The latter problem is particularly challenging since a single data update can cause aggregate results to change over the entire time line. We introduce a new index structure called the SB-tree, which incorporates features from both segment-trees and B-trees. SB-trees support fast lookup of aggregate results based on time and can be maintained efficiently when the data change. We extend the basic SB-tree index to handle cumulative (also called moving-window) aggregates, considering separately the cases in which the window size is or is not fixed in advance. For materialized aggregate views in a temporal database or warehouse, we propose building and maintaining SB-tree indices instead of the views themselves. Received: 20 March 2001; Accepted: 21 March 2001; Published online: 17 September 2003. This work was supported by the National Science Foundation under grant IIS-9811947 and by NASA Ames under grant NCC2-5278. Edited by R. Snodgrass.
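To convey why an index beats recomputing the materialized view, here is a much-simplified stand-in: a segment tree over fixed time buckets that answers range-sum aggregate queries and absorbs point updates in O(log n). It is only an illustration of the tree-maintained-aggregate idea; the actual SB-tree supports arbitrary interval insertions, B-tree-style fan-out, and window aggregates, none of which are shown here.

```python
class TemporalSumTree:
    """Segment tree over n fixed time buckets: point update and range-sum query
    in O(log n). A toy stand-in for temporal-aggregate indexing, not the SB-tree."""

    def __init__(self, n_buckets):
        self.n = n_buckets
        self.tree = [0.0] * (2 * n_buckets)

    def add(self, t, value):
        """Add `value` to the aggregate of time bucket t."""
        i = t + self.n
        while i:
            self.tree[i] += value
            i //= 2

    def range_sum(self, lo, hi):
        """Aggregate (sum) over buckets lo..hi inclusive."""
        res, lo, hi = 0.0, lo + self.n, hi + self.n + 1
        while lo < hi:
            if lo & 1:
                res += self.tree[lo]; lo += 1
            if hi & 1:
                hi -= 1; res += self.tree[hi]
            lo //= 2; hi //= 2
        return res

# e.g. idx = TemporalSumTree(1024); idx.add(5, 10.0); idx.range_sum(0, 9) -> 10.0
```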

5.
To address the structure-design problem of the extreme learning machine (ELM), a structure-growing algorithm for feedforward neural networks is proposed based on the hidden-layer activation function and its derivative. First, taking the sigmoid function as an example, a derived property of a class of basis functions is given: the derivative can be expressed in terms of the original function. Second, using this property, an ELM structure-design method is proposed that automatically generates a two-hidden-layer feedforward network, in which the nodes of the first hidden layer are generated randomly one by one; the outputs of the second hidden layer are determined by the activation function and derivative of each newly added first-hidden-layer node, and the output-layer weights are obtained analytically by least squares. Finally, theoretical proofs of the convergence and stability of the proposed algorithm are given. Simulation results on nonlinear system identification and the two-spiral classification problem demonstrate the effectiveness of the algorithm.
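The key derived property for the sigmoid, sigma'(z) = sigma(z)(1 - sigma(z)), means a node's derivative feature costs nothing extra once its activation is known. The snippet below is a hypothetical single-layer ELM-style sketch that pairs each random node's activation with its derivative and solves the output weights by least squares; it is not the paper's two-hidden-layer growing algorithm or its convergence analysis.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def elm_with_derivative_units(X, T, n_random_nodes, rng=None):
    """Toy ELM-style network exploiting sigma'(z) = sigma(z) * (1 - sigma(z)):
    each random node contributes two features, its activation and its derivative,
    and the output weights are solved analytically by least squares."""
    rng = np.random.default_rng(rng)
    n, d = X.shape
    W = rng.uniform(-1, 1, (n_random_nodes, d))   # random input weights
    b = rng.uniform(-1, 1, n_random_nodes)        # random biases
    A = sigmoid(X @ W.T + b)                      # activations of the random nodes
    H = np.hstack([A, A * (1 - A)])               # derivative expressed via the activation
    beta = np.linalg.pinv(H) @ T                  # least-squares output weights
    return W, b, beta

def elm_predict(X, W, b, beta):
    A = sigmoid(X @ W.T + b)
    H = np.hstack([A, A * (1 - A)])
    return H @ beta
```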

6.
Incremental learning with sample queries (cited 8 times; 0 self-citations, 8 by others)
The classical theory of pattern recognition assumes that labeled examples appear according to unknown underlying class-conditional probability distributions, with the pattern classes picked passively at random according to their a priori probabilities. This paper presents experimental results for an incremental nearest-neighbor learning algorithm that actively selects samples from different pattern classes according to a querying rule rather than the a priori probabilities. The amount of improvement of this query-based approach over the passive batch approach depends on the complexity of the Bayes rule.
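One plausible querying rule of this kind is to request the label of the candidate that lies closest to the current 1-nearest-neighbor decision boundary, i.e., the candidate whose nearest stored neighbors from two different classes are most nearly equidistant. The sketch below implements that illustrative rule; the paper's exact querying criterion may differ.

```python
import numpy as np

def query_next_sample(stored_X, stored_y, pool_X):
    """Pick the pool point closest to the current 1-NN decision boundary: the one
    whose nearest stored neighbours of two different classes are most nearly
    equidistant. An illustrative querying rule, not the paper's exact one."""
    best_idx, best_margin = None, np.inf
    for i, x in enumerate(pool_X):
        classes = np.unique(stored_y)
        if len(classes) < 2:
            return 0                        # cannot measure ambiguity yet
        d = np.linalg.norm(stored_X - x, axis=1)
        nearest_per_class = sorted(d[stored_y == c].min() for c in classes)
        margin = nearest_per_class[1] - nearest_per_class[0]   # small = ambiguous
        if margin < best_margin:
            best_margin, best_idx = margin, i
    return best_idx

def incremental_nn_active_learning(X_pool, y_pool, n_queries, seed_idx=(0, 1)):
    """Grow a 1-NN training set by repeatedly querying ambiguous samples."""
    stored = list(seed_idx)
    pool = [i for i in range(len(X_pool)) if i not in stored]
    for _ in range(n_queries):
        if not pool:
            break
        j = query_next_sample(X_pool[stored], y_pool[stored], X_pool[pool])
        stored.append(pool.pop(j))
    return stored   # indices whose labels were requested
```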

7.
Self-adaptation is an inherent part of any natural and intelligent system. Specifically, it is about the ability of a system to reconcile its requirements or goal of existence with the environment it is interacting with, by adopting an optimal behavior. Self-adaptation becomes crucial when the environment changes dynamically over time. In this paper, we investigate self-adaptation of classification systems at three levels: (1) natural adaptation of the base learners to change in the environment, (2) contributive adaptation when combining the base learners in an ensemble, and (3) structural adaptation of the combination as a form of dynamic ensemble. The present study focuses on neural network classification systems to handle a special facet of self-adaptation, that is, incremental learning (IL). With IL, the system self-adjusts to accommodate new and possibly non-stationary data samples arriving over time. The paper discusses various IL algorithms and shows how the three adaptation levels are inherent in the system's architecture proposed and how this architecture is efficient in dealing with dynamic change in the presence of various types of data drift when applying these IL algorithms.

8.
9.
10.
Incremental online learning in high dimensions (cited 4 times; 0 self-citations, 4 by others)
Locally weighted projection regression (LWPR) is a new algorithm for incremental nonlinear function approximation in high-dimensional spaces with redundant and irrelevant input dimensions. At its core, it employs nonparametric regression with locally linear models. In order to stay computationally efficient and numerically robust, each local model performs the regression analysis with a small number of univariate regressions in selected directions in input space, in the spirit of partial least squares regression. We discuss when and how local learning techniques can successfully work in high-dimensional spaces and review the various techniques for local dimensionality reduction before finally deriving the LWPR algorithm. The properties of LWPR are that it (1) learns rapidly with second-order learning methods based on incremental training, (2) uses statistically sound stochastic leave-one-out cross validation for learning without the need to memorize training data, (3) adjusts its weighting kernels based only on local information in order to minimize the danger of negative interference of incremental learning, (4) has a computational complexity that is linear in the number of inputs, and (5) can deal with a large number of (possibly redundant) inputs, as shown in various empirical evaluations with up to 90-dimensional data sets. For a probabilistic interpretation, predictive variance and confidence intervals are derived. To our knowledge, LWPR is the first truly incremental spatially localized learning method that can successfully and efficiently operate in very high-dimensional spaces.
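The prediction side of such locally weighted schemes is compact enough to sketch: each receptive field scores the query with a Gaussian kernel and contributes its local linear prediction to a normalized weighted mean. The function below shows only that combination step under assumed distance metrics D_k; the substance of LWPR, the incremental PLS-based fitting and kernel adaptation, is not reproduced here.

```python
import numpy as np

def lwpr_style_predict(x, centers, D_list, betas, intercepts, w_cutoff=1e-3):
    """Combine local linear models at prediction time: receptive field k weights
    its linear prediction by w_k = exp(-0.5 (x-c_k)^T D_k (x-c_k)) and the output
    is the normalised weighted mean. Prediction step only, as a sketch."""
    x = np.asarray(x, float)
    num, den = 0.0, 0.0
    for c, D, beta, b0 in zip(centers, D_list, betas, intercepts):
        diff = x - c
        w = np.exp(-0.5 * diff @ D @ diff)      # receptive-field activation
        if w < w_cutoff:
            continue                            # inactive field, skip for efficiency
        y_local = beta @ diff + b0              # local linear model around its centre
        num += w * y_local
        den += w
    return num / den if den > 0 else 0.0
```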

11.
Sugiyama M, Ogawa H. Neural Computation, 2000, 12(12): 2909-2940
The problem of designing input signals for optimal generalization is called active learning. In this article, we give a two-stage sampling scheme for reducing both the bias and variance, and based on this scheme, we propose two active learning methods. One is the multipoint search method applicable to arbitrary models. The effectiveness of this method is shown through computer simulations. The other is the optimal sampling method in trigonometric polynomial models. This method precisely specifies the optimal sampling locations.

12.
The properties of support vectors and the incremental learning process are analysed, and a new incremental learning algorithm is proposed that discards samples useless for the final classification, reducing training time while preserving test accuracy. Numerical experiments and an application example show that the algorithm is feasible and effective.
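A common simplification of support-vector-based incremental learning, in the spirit of discarding samples that do not affect the final classifier, is to retrain on the previous support vectors plus each new batch. The sketch below uses scikit-learn's SVC for that loop; it approximates the idea in the abstract rather than reproducing the proposed algorithm.

```python
import numpy as np
from sklearn.svm import SVC

def incremental_svm(batches, **svm_kwargs):
    """Retrain on (previous support vectors + new batch) instead of all past data.
    A common simplification of SV-based incremental SVM schemes, for illustration."""
    X_keep = np.empty((0, batches[0][0].shape[1]))
    y_keep = np.empty((0,))
    model = None
    for X_new, y_new in batches:
        X_train = np.vstack([X_keep, X_new])
        y_train = np.concatenate([y_keep, y_new])
        model = SVC(**svm_kwargs).fit(X_train, y_train)
        # keep only the samples that actually define the decision boundary
        X_keep, y_keep = X_train[model.support_], y_train[model.support_]
    return model
```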

13.

This work presents the application of a multistrategy approach to some document processing tasks. The application is implemented in an enhanced version of the incremental learning system INTHELEX. This learning module has been embedded as a learning component in the system architecture of the EU project COLLATE, which deals with the annotation of cultural heritage documents. Indeed, the complex shape of the material handled in the project has suggested that the addition of multistrategy capabilities is needed to improve the effectiveness and efficiency of the learning process. Results proving the benefits of these strategies in specific classification tasks are reported in the experiments presented in this work.

14.
15.
Knowledge and Information Systems - In the last decades, temporal networks have played a key role in modelling, understanding, and analysing the properties of dynamic systems where individuals and...

16.
Incremental 3D reconstruction using Bayesian learning (cited 1 time; 1 self-citation, 0 by others)
We present a novel algorithm for 3D reconstruction in this paper, converting incremental 3D reconstruction into an optimization problem by combining two feature-enhancing geometric priors and one photometric consistency constraint under the Bayesian learning framework. Our method first reconstructs an initial 3D model by selecting uniformly distributed key images using a view sphere. Then, each time a new image is added, we search its correlated reconstructed patches and incrementally update the resulting model by optimizing the geometric and photometric energy terms. The experimental results illustrate that our method is effective for incremental 3D reconstruction and can be further applied to large-scale datasets or to real-time reconstruction.

17.
顾苏杭, 王士同. 控制与决策, 2020, 35(9): 2081-2093
A fuzzy K-plane clustering algorithm constrained by two forms of knowledge representation, incremental feature learning and data style information (ISF-KPC), is proposed. To obtain better generalization, before clustering the original input features are expanded incrementally using a Gaussian kernel function. Considering that samples drawn from the same cluster share the same style, the style information of the data is expressed in matrix form, and the style matrix of each cluster is determined iteratively. Extensive experimental results show that ISF-KPC with the dual knowledge-representation constraints achieves clustering performance competitive with the compared algorithms, and performs particularly well on data sets with typical styles.

18.
Incremental nonlinear dimensionality reduction by manifold learning (cited 6 times; 0 self-citations, 6 by others)
Understanding the structure of multidimensional patterns, especially in unsupervised cases, is of fundamental importance in data mining, pattern recognition, and machine learning. Several algorithms have been proposed to analyze the structure of high-dimensional data based on the notion of manifold learning. These algorithms have been used to extract the intrinsic characteristics of different types of high-dimensional data by performing nonlinear dimensionality reduction. Most of these algorithms operate in a "batch" mode and cannot be efficiently applied when data are collected sequentially. In this paper, we describe an incremental version of ISOMAP, one of the key manifold learning algorithms. Our experiments on synthetic data as well as real world images demonstrate that our modified algorithm can maintain an accurate low-dimensional representation of the data in an efficient manner.

19.
In this paper we introduce an incremental non-negative matrix factorization (INMF) scheme in order to overcome the difficulties that conventional NMF has in online processing of large data sets. The proposed scheme enables incremental updating of its factors by appropriately reflecting the influence of each observation on the factorization. This is achieved via a weighted cost function which also allows controlling the memorylessness of the factorization. Unlike conventional NMF, with its incremental nature and weighted cost function the INMF scheme successfully adapts to dynamic data content changes with a lower computational complexity. Test results reported for two video applications, namely background modeling in video surveillance and clustering, demonstrate that INMF is capable of representing data content online while reducing its dimension significantly.
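A rough sketch of the incremental, weighted-cost idea: when a new observation column arrives, append a coefficient column and refresh both factors with a few multiplicative updates of a column-weighted Frobenius objective, where a forgetting factor controls how much old data still matters. The update rules and forgetting scheme below are standard weighted NMF steps chosen for illustration, not the INMF equations from the paper.

```python
import numpy as np

def weighted_nmf_updates(V, W, H, col_weights, n_iter=30, eps=1e-9):
    """Multiplicative updates for min_{W,H>=0} sum_j a_j * ||v_j - W h_j||^2.
    Down-weighting old columns (small a_j) gives the factorization a tunable
    memory; a sketch in the spirit of a weighted cost, not the paper's rules."""
    A = np.diag(col_weights)
    for _ in range(n_iter):
        H *= (W.T @ V) / (W.T @ W @ H + eps)            # weights cancel column-wise
        W *= (V @ A @ H.T) / (W @ H @ A @ H.T + eps)
    return W, H

def inmf_add_column(V, W, H, v_new, forget=0.95, rng=None):
    """Append a new observation column, then refresh the factors with a few
    weighted multiplicative steps instead of refactorizing from scratch."""
    rng = np.random.default_rng(rng)
    V = np.hstack([V, v_new.reshape(-1, 1)])
    H = np.hstack([H, rng.random((H.shape[0], 1))])
    n = V.shape[1]
    col_weights = forget ** np.arange(n - 1, -1, -1)     # newest column weighs most
    return (V,) + weighted_nmf_updates(V, W, H, col_weights)
```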

20.
Existing one-class text classification algorithms require a large amount of repeated computation during incremental learning. A new one-class classification algorithm for text is proposed that, without degrading classification performance, effectively reduces the computation needed when new samples are added for learning, making it well suited to settings that require incremental learning. The method has been tested experimentally and achieves good results.
