Similar literature
20 similar records found (search time: 10 ms)
1.
Incremental learning with sample queries (total citations: 8; self-citations: 0; by others: 8)
The classical theory of pattern recognition assumes labeled examples appear according to unknown underlying class conditional probability distributions where the pattern classes are picked randomly in a passive manner according to their a priori probabilities. This paper presents experimental results for an incremental nearest-neighbor learning algorithm which actively selects samples from different pattern classes according to a querying rule as opposed to the a priori probabilities. The amount of improvement of this query-based approach over the passive batch approach depends on the complexity of the Bayes rule.
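The abstract does not spell out the querying rule itself. The sketch below (Python/NumPy) shows one common way to realize query-based incremental nearest-neighbor learning: labels are requested only for samples on which the current k-NN vote is nearly tied. The class, the margin-based rule, and the toy oracle are illustrative assumptions, not the paper's algorithm.

```python
# Margin-based query rule on top of an incremental k-NN classifier (illustrative sketch).
import numpy as np

class QueryKNN:
    def __init__(self, k=3, margin=0.1):
        self.k, self.margin = k, margin
        self.X, self.y = [], []          # labelled pool, grown incrementally

    def _vote(self, x):
        d = np.linalg.norm(np.array(self.X) - x, axis=1)
        idx = np.argsort(d)[:self.k]
        labels, counts = np.unique(np.array(self.y)[idx], return_counts=True)
        order = np.argsort(counts)[::-1]
        top = counts[order[0]] / self.k
        second = counts[order[1]] / self.k if len(order) > 1 else 0.0
        return labels[order[0]], top - second   # predicted class, vote margin

    def wants_label(self, x):
        """Query rule: ask for a label if the pool is tiny or the vote is nearly tied."""
        if len(self.y) < self.k:
            return True
        _, margin = self._vote(x)
        return margin <= self.margin

    def add(self, x, y):
        self.X.append(np.asarray(x, dtype=float))
        self.y.append(y)

    def predict(self, x):
        return self._vote(x)[0]

# Usage: stream unlabelled points, pay the labelling cost only when queried.
rng = np.random.default_rng(0)
learner = QueryKNN(k=3, margin=0.34)
for _ in range(200):
    x = rng.normal(size=2)
    true_label = int(x[0] + x[1] > 0)    # hidden oracle, for the demo only
    if learner.wants_label(x):
        learner.add(x, true_label)
```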

2.
Self-adaptation is an inherent part of any natural and intelligent system. Specifically, it is about the ability of a system to reconcile its requirements or goal of existence with the environment it is interacting with, by adopting an optimal behavior. Self-adaptation becomes crucial when the environment changes dynamically over time. In this paper, we investigate self-adaptation of classification systems at three levels: (1) natural adaptation of the base learners to change in the environment, (2) contributive adaptation when combining the base learners in an ensemble, and (3) structural adaptation of the combination as a form of dynamic ensemble. The present study focuses on neural network classification systems to handle a special facet of self-adaptation, that is, incremental learning (IL). With IL, the system self-adjusts to accommodate new and possibly non-stationary data samples arriving over time. The paper discusses various IL algorithms and shows how the three adaptation levels are inherent in the proposed system architecture and how this architecture is efficient in dealing with dynamic change in the presence of various types of data drift when applying these IL algorithms.

3.
Ensuring models' consistency is a key concern when using a model-based development approach. Therefore, model inconsistency detection has received significant attention over the last years. To be useful, inconsistency detection has to be sound, efficient, and scalable. Incremental detection is one way to achieve efficiency in the presence of large models. In most of the existing approaches, incrementalization is carried out at the expense of the memory consumption that becomes proportional to the model size and the number of consistency rules. In this paper, we propose a new incremental inconsistency detection approach that only consumes a small and model size-independent amount of memory. It will therefore scale better to projects using large models and many consistency rules. Copyright © 2012 John Wiley & Sons, Ltd.
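As a generic illustration only, not the paper's algorithm, the sketch below shows the basic idea of scope-based incremental checking with bounded memory: each rule declares the element kinds it depends on, only rules scoped to a changed element are re-evaluated, and no per-element caches are kept, so memory does not grow with the model. The toy model format and the two rules are assumptions.

```python
# Generic sketch: re-check only consistency rules whose scope includes the changed
# element's kind, and keep no per-element caches, so memory does not grow with model size.

def rule_unique_names(model, changed):
    """Names must be unique among elements of the same kind."""
    names = [e["name"] for e in model if e["kind"] == changed["kind"]]
    return len(names) == len(set(names))

def rule_typed_refs(model, changed):
    """Every reference of the changed element must point to an existing element."""
    ids = {e["id"] for e in model}
    return all(t in ids for t in changed.get("refs", []))

# Each rule declares which element kinds it cares about (its scope).
RULES = [
    ({"class", "attribute"}, rule_unique_names),
    ({"class"}, rule_typed_refs),
]

def check_incremental(model, changed_element):
    """Evaluate only the rules scoped to the changed element's kind."""
    violations = []
    for scope, rule in RULES:
        if changed_element["kind"] in scope and not rule(model, changed_element):
            violations.append(rule.__name__)
    return violations

model = [
    {"id": 1, "kind": "class", "name": "Order", "refs": []},
    {"id": 2, "kind": "class", "name": "Order", "refs": [1]},
]
print(check_incremental(model, model[1]))   # ['rule_unique_names']
```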

4.
Incremental backpropagation learning networks (total citations: 2; self-citations: 0; by others: 2)
How to learn new knowledge without forgetting old knowledge is a key issue in designing an incremental-learning neural network. In this paper, we present a new incremental learning method for pattern recognition, called the "incremental backpropagation learning network", which employs bounded weight modification and structural adaptation learning rules and applies initial knowledge to constrain the learning process. The viability of this approach is demonstrated for classification problems including the iris and the promoter domains.
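The exact bounded-modification and structural-adaptation rules are not given in the abstract. The sketch below illustrates the bounded-weight-modification idea in isolation: a plain one-hidden-layer network trained pattern by pattern, with every per-step weight change clipped to a fixed bound so no single update can move the existing weights far. The network size, the clipping rule, and the toy data are assumptions, not the IBPLN rules.

```python
# Sketch: backpropagation in which each per-step weight change is clipped to a bound,
# so a newly presented pattern cannot overwrite existing weights in one large jump.
import numpy as np

rng = np.random.default_rng(0)
W1, b1 = rng.normal(0, 0.5, (2, 8)), np.zeros(8)
W2, b2 = rng.normal(0, 0.5, (8, 1)), np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_step(x, y, lr=0.5, bound=0.05):
    global W1, b1, W2, b2
    h = sigmoid(x @ W1 + b1)                 # forward pass
    out = sigmoid(h @ W2 + b2)
    d_out = (out - y) * out * (1 - out)      # backward pass
    d_h = (d_out @ W2.T) * h * (1 - h)
    # Bounded modification: clip every weight change to [-bound, bound].
    W2 -= np.clip(lr * np.outer(h, d_out), -bound, bound)
    b2 -= np.clip(lr * d_out, -bound, bound)
    W1 -= np.clip(lr * np.outer(x, d_h), -bound, bound)
    b1 -= np.clip(lr * d_h, -bound, bound)
    return float(((out - y) ** 2).sum())

# Incremental phase: patterns arrive one at a time (XOR-like toy data).
data = [([0, 0], [0]), ([0, 1], [1]), ([1, 0], [1]), ([1, 1], [0])]
for epoch in range(2000):
    for x, y in data:
        train_step(np.array(x, float), np.array(y, float))
```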

5.
Incremental learning methods with retrieving of interfered patterns (total citations: 7; self-citations: 0; by others: 7)
There are many cases in which a neural-network-based system must memorize new patterns incrementally. However, if the network learns the new patterns only by referring to them, it is likely to forget previously memorized patterns, since the network's parameters correlate not only with the old memories but also with the new patterns. A sure way to avoid the loss of memories is to learn the new patterns together with all memorized patterns, but this requires large computational power. To solve this problem, we propose incremental learning methods with retrieval of interfered patterns (ILRI). In these methods, the system employs a modified version of a resource allocating network (RAN), a variation of the generalized radial basis function (GRBF) network. In ILRI, the RAN learns new patterns while relearning a small number of retrieved past patterns that are interfered with by the incremental learning. We construct ILRI in two steps. In the first step, we build a system that searches for the interfered patterns among past input patterns stored in a database. In the second step, we improve this system so that it no longer needs the database; instead, it regenerates the input patterns approximately, in a random manner. Simulation results show that the two systems have almost the same ability, and that their generalization ability is higher than that of other similar systems based on neural networks and k-nearest neighbors.
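A minimal rehearsal-style sketch of the retrieve-and-relearn idea follows, with scikit-learn's MLPRegressor standing in for the RAN/GRBF network: after the new patterns are learned, the stored past patterns whose predictions degraded the most are retrieved and relearned together with the new ones. The interference measure, rehearsal size, and network choice are illustrative assumptions, not the paper's ILRI.

```python
# Rehearsal of "interfered" past patterns retrieved from a small database (sketch only).
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
net = MLPRegressor(hidden_layer_sizes=(30,), learning_rate_init=0.01, random_state=0)
database_X, database_y = [], []          # past input/output patterns (step-1 variant)

def learn_incrementally(X_new, y_new, rehearse_k=5, passes=200):
    X_new = np.asarray(X_new, dtype=float)
    y_new = np.asarray(y_new, dtype=float)
    for _ in range(passes):              # first learn the new patterns alone
        net.partial_fit(X_new, y_new)
    if database_X:
        X_old, y_old = np.array(database_X), np.array(database_y)
        # Interference: how badly the stored memories are now predicted.
        interference = np.abs(net.predict(X_old) - y_old)
        keep = np.argsort(interference)[-rehearse_k:]      # retrieve worst-hit patterns
        X_mix = np.vstack([X_new, X_old[keep]])
        y_mix = np.concatenate([y_new, y_old[keep]])
        for _ in range(passes):          # relearn new + retrieved patterns together
            net.partial_fit(X_mix, y_mix)
    database_X.extend(X_new)
    database_y.extend(y_new)

X1 = rng.uniform(-1, 0, (20, 1))
learn_incrementally(X1, np.sin(3 * X1).ravel())
X2 = rng.uniform(0, 1, (20, 1))
learn_incrementally(X2, np.sin(3 * X2).ravel())
```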

6.
7.
To address the structure-design problem of the extreme learning machine (ELM), a structure-growing algorithm for feedforward neural networks is proposed based on the hidden-layer activation function and its derivative. First, taking the Sigmoid function as an example, a derivation property of a class of basis functions is given: the derivative can be expressed in terms of the original function. Second, using this property, an ELM structure-design method is proposed that automatically generates a two-hidden-layer feedforward network, in which the nodes of the first hidden layer are generated randomly one by one, the outputs of the second hidden layer are determined by the activation function of the newly added first-hidden-layer node and its derivative, and the output-layer weights are obtained by least-squares analysis. Finally, theoretical proofs of the convergence and stability of the proposed algorithm are given. Simulation results on nonlinear system identification and the two-spiral classification problem demonstrate the effectiveness of the proposed algorithm.
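The abstract names the ingredients (the Sigmoid derivative expressible via the function itself, randomly added first-hidden-layer nodes, a second hidden layer built from the new node's activation and derivative, least-squares output weights) but not the exact construction. The sketch below is one plausible reading in NumPy; the way the second layer concatenates s(z) and s(z)(1-s(z)), the node budget, and the toy data are assumptions.

```python
# One plausible reading of a structure-growing, two-hidden-layer ELM (illustrative only).
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def grow_elm(X, y, max_nodes=40, tol=1e-3, seed=0):
    rng = np.random.default_rng(seed)
    n, d = X.shape
    W, b = np.empty((d, 0)), np.empty(0)
    beta, err = None, np.inf
    while W.shape[1] < max_nodes and err > tol:
        # Add one random first-hidden-layer node.
        W = np.hstack([W, rng.normal(size=(d, 1))])
        b = np.append(b, rng.normal())
        S = sigmoid(X @ W + b)                     # first hidden layer
        H = np.hstack([S, S * (1.0 - S)])          # second layer: activations and derivatives
        beta, *_ = np.linalg.lstsq(H, y, rcond=None)   # output weights by least squares
        err = float(np.mean((H @ beta - y) ** 2))
    return W, b, beta

def elm_predict(X, W, b, beta):
    S = sigmoid(X @ W + b)
    return np.hstack([S, S * (1.0 - S)]) @ beta

# Toy nonlinear regression check.
rng = np.random.default_rng(1)
X = rng.uniform(-1, 1, (200, 1))
y = np.sin(4 * X).ravel()
W, b, beta = grow_elm(X, y)
print(round(float(np.mean((elm_predict(X, W, b, beta) - y) ** 2)), 4))
```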

8.
9.
10.
A neural model for temporal pattern generation is used and analyzed for training with multiple complex sequences in a sequential manner. The network exhibits some degree of interference when new sequences are acquired. It is proven that the model is capable of incrementally learning a finite number of complex sequences. The model is then evaluated with a large set of highly correlated sequences. While the number of intact sequences increases linearly with the number of previously acquired sequences, the amount of retraining due to interference appears to be independent of the size of existing memory. The model is extended to include a chunking network which detects repeated subsequences between and within sequences. The chunking mechanism substantially reduces the amount of retraining in sequential training. Thus, the network investigated here constitutes an effective sequential memory. Various aspects of such a memory are discussed.  相似文献   

11.
Sugiyama M  Ogawa H 《Neural computation》2000,12(12):2909-2940
The problem of designing input signals for optimal generalization is called active learning. In this article, we give a two-stage sampling scheme for reducing both the bias and variance, and based on this scheme, we propose two active learning methods. One is the multipoint search method applicable to arbitrary models. The effectiveness of this method is shown through computer simulations. The other is the optimal sampling method in trigonometric polynomial models. This method precisely specifies the optimal sampling locations.

12.
The properties of support vectors and the incremental learning process are analyzed, and a new incremental learning algorithm is proposed that discards samples useless for the final classification, reducing training time while maintaining test accuracy. Numerical experiments and an application example show that the algorithm is feasible and effective.
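The abstract does not detail the discarding rule. A common simplification consistent with it is to keep only the support vectors of the current model and retrain on "support vectors + new batch"; the sketch below uses scikit-learn's SVC, whose support_ attribute gives the support-vector indices. The data and the retention rule are illustrative, not necessarily the paper's.

```python
# Incremental SVM sketch: keep only support vectors between batches, then retrain.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)

def make_batch(n=200):
    X = rng.normal(size=(n, 2))
    y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)
    return X, y

clf = SVC(kernel="rbf", C=1.0)
X_keep, y_keep = make_batch()              # initial batch
clf.fit(X_keep, y_keep)

for _ in range(5):                         # batches arriving over time
    X_new, y_new = make_batch()
    sv = clf.support_                      # indices of the current support vectors
    X_keep = np.vstack([X_keep[sv], X_new])      # discard non-support samples
    y_keep = np.concatenate([y_keep[sv], y_new])
    clf.fit(X_keep, y_keep)

X_test, y_test = make_batch(1000)
print("kept samples:", len(y_keep), " test accuracy:", clf.score(X_test, y_test))
```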

13.

This work presents the application of a multistrategy approach to some document processing tasks. The application is implemented in an enhanced version of the incremental learning system INTHELEX. This learning module has been embedded as a learning component in the system architecture of the EU project COLLATE, which deals with the annotation of cultural heritage documents. Indeed, the complex shape of the material handled in the project has suggested that the addition of multistrategy capabilities is needed to improve effectiveness and efficiency of the learning process. Results proving the benefits of these strategies in specific classification tasks are reported in the experimentation presented in this work.

14.
Incremental online learning in high dimensions (total citations: 4; self-citations: 0; by others: 4)
Locally weighted projection regression (LWPR) is a new algorithm for incremental nonlinear function approximation in high-dimensional spaces with redundant and irrelevant input dimensions. At its core, it employs nonparametric regression with locally linear models. In order to stay computationally efficient and numerically robust, each local model performs the regression analysis with a small number of univariate regressions in selected directions in input space in the spirit of partial least squares regression. We discuss when and how local learning techniques can successfully work in high-dimensional spaces and review the various techniques for local dimensionality reduction before finally deriving the LWPR algorithm. The properties of LWPR are that it (1) learns rapidly with second-order learning methods based on incremental training, (2) uses statistically sound stochastic leave-one-out cross validation for learning without the need to memorize training data, (3) adjusts its weighting kernels based on only local information in order to minimize the danger of negative interference of incremental learning, (4) has a computational complexity that is linear in the number of inputs, and (5) can deal with a large number of (possibly redundant) inputs, as shown in various empirical evaluations with up to 90-dimensional data sets. For a probabilistic interpretation, predictive variance and confidence intervals are derived. To our knowledge, LWPR is the first truly incremental spatially localized learning method that can successfully and efficiently operate in very high-dimensional spaces.
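LWPR itself (incremental updates, partial-least-squares projections, receptive-field adaptation) is well beyond a short sketch. The block below shows only the underlying idea of locally weighted linear models blended by Gaussian receptive fields, fitted in batch, as a conceptual stand-in; centres, widths, and data are assumptions and none of LWPR's machinery is attempted.

```python
# Conceptual stand-in only: batch locally weighted linear regression with fixed
# Gaussian receptive fields (not LWPR).
import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(-2, 2, (300, 1))
y = np.sin(2 * X).ravel() + 0.05 * rng.normal(size=300)

centers = np.linspace(-2, 2, 12).reshape(-1, 1)   # receptive-field centres
width = 0.4

def weights(x, c):
    return np.exp(-np.sum((x - c) ** 2, axis=1) / (2 * width ** 2))

# Fit one weighted linear model per receptive field.
models = []
Xb = np.hstack([X, np.ones((len(X), 1))])          # add bias column
for c in centers:
    w = weights(X, c)
    WX = Xb * w[:, None]
    beta = np.linalg.solve(Xb.T @ WX + 1e-6 * np.eye(2), WX.T @ y)
    models.append(beta)

def predict(Xq):
    Xqb = np.hstack([Xq, np.ones((len(Xq), 1))])
    num = np.zeros(len(Xq))
    den = np.zeros(len(Xq))
    for c, beta in zip(centers, models):
        w = weights(Xq, c)
        num += w * (Xqb @ beta)                    # blend local predictions by activation
        den += w
    return num / np.maximum(den, 1e-12)

print(round(float(np.mean((predict(X) - y) ** 2)), 4))
```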

15.
Incremental recovery in main memory database systems (total citations: 5; self-citations: 0; by others: 5)
Recovery activities, like checkpointing and restart, in traditional database management systems are performed in a quiescent state where no transactions are active. This approach impairs the performance of online transaction processing systems, especially when a large volatile memory is used. An incremental scheme for performing recovery in main memory database systems (MMDBs), in parallel with transaction execution, is presented. A page-based incremental restart algorithm that enables the resumption of transaction processing as soon as the system is up is proposed. Pages are recovered individually and according to the demands of the post-crash transactions. A method for propagating updates from main memory to the backup database on disk is also provided. The emphasis is on decoupling the I/O activities related to the propagation to disk from the forward transaction execution in memory. The authors also construct a high-level recovery manager based on operation logging on top of the page-based algorithms. The proposed algorithms are motivated by the characteristics of large MMDBs, and exploit the technology of nonvolatile RAM.
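A toy sketch of the page-on-demand restart idea only: a page is rebuilt from the backup copy plus its redo-log records the first time a post-crash transaction touches it, so processing can resume before the whole database is recovered. The data structures, log format, and names are illustrative assumptions; the paper's logging and propagation protocols are not modelled.

```python
# Page-on-demand restart sketch: recover a page lazily, on first access after a crash.
backup = {                     # last propagated state of each page on disk
    "P1": {"x": 1},
    "P2": {"y": 10},
}
redo_log = [                   # committed updates not yet propagated to the backup
    ("P1", "x", 2),
    ("P2", "z", 7),
]

memory_pages = {}              # main-memory database, empty right after restart

def get_page(page_id):
    """Rebuild a page from backup + redo records the first time it is touched."""
    if page_id not in memory_pages:
        page = dict(backup[page_id])                 # start from the backup copy
        for pid, key, value in redo_log:             # replay this page's redo records
            if pid == page_id:
                page[key] = value
        memory_pages[page_id] = page
    return memory_pages[page_id]

# Post-crash transaction: only the pages it touches get recovered now.
print(get_page("P1"))          # {'x': 2}  -- recovered on demand
print(sorted(memory_pages))    # ['P1']    -- P2 is still unrecovered
```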

16.
许明英  尉永清  赵静 《计算机应用》2011,31(9):2530-2533
In the early stage of building a Bayesian classifier, the training set is incomplete, so the resulting classifier performs poorly and cannot dynamically track user needs. To address this deficiency, an incremental learning method for Bayesian classification that incorporates feedback information is proposed. To effectively reduce redundancy among features and improve the representativeness of the feedback feature subset, an improved feature-selection method based on a genetic algorithm is used to select the optimal feature subset from the feedback set and revise the classifier. Experiments analyzing the algorithm's performance show that it clearly improves classification results and has good overall stability.
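The sketch below covers only the incremental part: a count-based naive Bayes classifier whose statistics are updated from feedback samples restricted to a selected feature subset. A simple frequency-based selection stands in for the paper's GA-based feature selection, and all names, sizes, and the toy data are illustrative assumptions.

```python
# Incremental naive Bayes updated from feedback samples over a selected feature subset.
import math
from collections import Counter, defaultdict

class IncrementalNB:
    def __init__(self, vocab):
        self.vocab = set(vocab)                       # current selected feature subset
        self.class_counts = Counter()
        self.word_counts = defaultdict(Counter)       # class -> feature -> count

    def update(self, tokens, label):
        """Fold one (possibly feedback) sample into the counts."""
        self.class_counts[label] += 1
        for t in tokens:
            if t in self.vocab:
                self.word_counts[label][t] += 1

    def predict(self, tokens):
        tokens = [t for t in tokens if t in self.vocab]
        best, best_score = None, -math.inf
        total = sum(self.class_counts.values())
        for c in self.class_counts:
            score = math.log(self.class_counts[c] / total)
            denom = sum(self.word_counts[c].values()) + len(self.vocab)
            for t in tokens:
                score += math.log((self.word_counts[c][t] + 1) / denom)  # Laplace smoothing
            if score > best_score:
                best, best_score = c, score
        return best

def select_features(feedback_docs, k=50):
    """Stand-in for the GA step: keep the k most frequent tokens in the feedback set."""
    freq = Counter(t for tokens, _ in feedback_docs for t in tokens)
    return [t for t, _ in freq.most_common(k)]

feedback = [(["cheap", "pills", "offer"], "spam"), (["meeting", "agenda"], "ham")]
nb = IncrementalNB(select_features(feedback))
for tokens, label in feedback:
    nb.update(tokens, label)
print(nb.predict(["cheap", "offer"]))    # 'spam'
```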

17.
Simulated annealing can be viewed as a process that generates a sequence of Markov chains, i.e., it keeps no memory about the states visited in the past of the process. This property makes simulated annealing time-consuming in exploring needless states and difficult in controlling the temperature and transition number. In this paper, we propose a new annealing model with memory that records important information about the states visited in the past. After mapping applications onto a physical system containing particles with discrete states, the new annealing method systematically explores the configuration space, learns its energy information, and converges to a well-optimized state. Such energy information is encoded in a learning scheme. The scheme generates states distributed with Boltzmann-style probability according to the energy information recorded in it. Moreover, with the assistance of the learning scheme, control over the annealing process becomes simple and deterministic. From the qualitative and quantitative analyses in this paper, we can see that this convenient framework provides an efficient technique for combinatorial optimization problems and good confidence in the solution quality.
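As a generic illustration rather than the paper's model, the sketch below augments standard simulated annealing with a memory of visited states' energies: candidate moves are scored from that memory (each state's energy is computed at most once) and sampled with Boltzmann-style probabilities. The QUBO instance, neighbourhood, and cooling schedule are assumptions.

```python
# Simulated annealing with an energy memory over a small random QUBO (illustrative sketch).
import numpy as np

rng = np.random.default_rng(0)
n = 12
Q = rng.normal(size=(n, n))
Q = (Q + Q.T) / 2                                       # random symmetric QUBO

memory = {}                                             # state (tuple) -> energy

def energy(state):
    key = tuple(state)
    if key not in memory:                               # compute once, then remember
        memory[key] = float(state @ Q @ state)
    return memory[key]

def anneal(steps=3000, t0=2.0, t_end=0.05, candidates=4):
    state = rng.integers(0, 2, n)
    best = state.copy()
    for step in range(steps):
        t = t0 * (t_end / t0) ** (step / steps)         # geometric cooling
        # Propose several single-flip neighbours and score them via the energy memory.
        flips = rng.choice(n, size=candidates, replace=False)
        neighbours = []
        for i in flips:
            s = state.copy()
            s[i] ^= 1
            neighbours.append(s)
        e = np.array([energy(s) for s in neighbours])
        p = np.exp(-(e - e.min()) / t)                  # Boltzmann weights over candidates
        p /= p.sum()
        proposal = neighbours[rng.choice(candidates, p=p)]
        # Standard Metropolis acceptance for the chosen candidate.
        if energy(proposal) <= energy(state) or rng.random() < np.exp((energy(state) - energy(proposal)) / t):
            state = proposal
        if energy(state) < energy(best):
            best = state.copy()
    return best, energy(best)

best_state, best_energy = anneal()
print(best_state, round(best_energy, 3))
```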

18.
Existing one-class text classification algorithms require a large amount of repeated computation during incremental learning. A new one-class classification algorithm for text is proposed that, without degrading classification performance, effectively reduces the computation needed when new samples are added for learning, making it well suited to settings that require incremental learning. The method has been evaluated experimentally and achieved good results.

19.
马旭淼  徐德 《控制与决策》2024,39(5):1409-1423
Robot application scenarios keep evolving and data volumes keep growing. Traditional machine learning methods struggle to adapt to dynamic environments, whereas incremental learning techniques can mimic the human learning process, allowing a robot to use old knowledge to speed up the learning of new tasks and to acquire new skills without forgetting old ones. Research on robot incremental learning is still relatively scarce, so this paper surveys its progress. First, incremental learning is briefly introduced. Second, from the perspective of parameters and models, current mainstream robot incremental learning methods are divided into three categories: parameter-varying methods, model-varying methods, and hybrid methods; each category is discussed, with application examples of the corresponding incremental learning techniques in robotics. Then, the datasets and evaluation metrics commonly used in robot incremental learning are introduced. Finally, future development trends of incremental learning are discussed.

20.
Incremental 3D reconstruction using Bayesian learning (total citations: 1; self-citations: 1; by others: 0)
We present a novel algorithm for 3D reconstruction in this paper, converting incremental 3D reconstruction to an optimization problem by combining two feature-enhancing geometric priors and one photometric consistency constraint under the Bayesian learning framework. Our method first reconstructs an initial 3D model by selecting uniformly distributed key images using a view sphere. Then, once a new image is added, we search its correlated reconstructed patches and incrementally update the resulting model by optimizing the geometric and photometric energy terms. The experimental results illustrate that our method is effective for incremental 3D reconstruction and can be further applied to large-scale datasets or real-time reconstruction.
