Similar Literature
20 similar articles found; search time: 46 ms
1.
For many applications, such as compliant and accurate robot tracking control, dynamics models learned from data can help to achieve both compliant control performance and high tracking quality. Online learning of these dynamics models allows the robot controller to adapt itself to changes in the dynamics (e.g., due to time-variant nonlinearities or unforeseen loads). However, online learning in real-time applications, as required in control, cannot be realized by straightforward use of off-the-shelf machine learning methods such as Gaussian process regression or support vector regression. In this paper, we propose a framework for online, incremental sparsification with a fixed budget designed for fast real-time model learning. The proposed approach employs a sparsification method based on an independence measure. In combination with an incremental learning approach such as incremental Gaussian process regression, we obtain a model approximation method that is applicable to real-time online learning. It exhibits competitive learning accuracy when compared with standard regression techniques. Implementation on a real Barrett WAM robot demonstrates the applicability of the approach to real-time online model learning for real-world systems.
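As an illustration of the budgeted sparsification idea in this abstract, the following sketch maintains a fixed-size dictionary using a kernel-based independence (approximate linear dependence) test. The kernel choice, threshold, and the simple "reject when the budget is full" rule are illustrative assumptions, not the authors' exact procedure; a full fixed-budget scheme would also need a rule for replacing dictionary elements once the budget is reached, which is omitted here.

```python
import numpy as np

def rbf_kernel(x, y, gamma=1.0):
    """RBF kernel between two input vectors."""
    return np.exp(-gamma * np.sum((x - y) ** 2))

class BudgetedDictionary:
    """Online dictionary with a fixed budget, grown via an
    approximate-linear-dependence (independence) test."""

    def __init__(self, budget=50, threshold=1e-3, gamma=1.0):
        self.budget, self.threshold, self.gamma = budget, threshold, gamma
        self.points = []      # stored basis points
        self.K_inv = None     # inverse kernel matrix of the dictionary

    def consider(self, x):
        """Add x to the dictionary if it is sufficiently independent
        of the current basis; return True if it was added."""
        if not self.points:
            self.points.append(x)
            self.K_inv = np.array([[1.0 / rbf_kernel(x, x, self.gamma)]])
            return True
        k = np.array([rbf_kernel(p, x, self.gamma) for p in self.points])
        a = self.K_inv @ k                            # projection coefficients
        delta = rbf_kernel(x, x, self.gamma) - k @ a  # independence measure
        if delta < self.threshold or len(self.points) >= self.budget:
            return False                              # nearly dependent or budget full
        # Rank-one (block) update of the inverse kernel matrix.
        n = len(self.points)
        K_inv_new = np.zeros((n + 1, n + 1))
        K_inv_new[:n, :n] = self.K_inv + np.outer(a, a) / delta
        K_inv_new[:n, n] = -a / delta
        K_inv_new[n, :n] = -a / delta
        K_inv_new[n, n] = 1.0 / delta
        self.K_inv = K_inv_new
        self.points.append(x)
        return True
```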

2.
This paper presents a framework for incremental neural learning (INL) that allows a base neural learning system to incrementally learn new knowledge from new data only, without forgetting the existing knowledge. Upon subsequent encounters with new data examples, INL utilizes prior knowledge to direct its incremental learning. A number of critical issues are addressed, including when to make the system learn new knowledge, how to learn new knowledge without forgetting existing knowledge, how to perform inference using both the existing and the newly learnt knowledge, and how to detect and deal with aged learnt systems. To validate the proposed INL framework, we use backpropagation (BP) as the base learner and a multi-layer neural network as the base intelligent system. INL has several advantages over existing incremental algorithms: it can be applied to a broad range of neural network systems beyond BP-trained neural networks; it retains the existing neural network structures and weights even during incremental learning; and the neural network committees generated by INL do not interact with one another, while each sees the same inputs and error signals at the same time; this limited communication makes the INL architecture attractive for parallel implementation. We have applied INL to two vehicle fault diagnostics problems: end-of-line testing in auto assembly plants and onboard vehicle misfire detection. The experimental results demonstrate that the INL framework can successfully perform incremental learning from unbalanced and noisy data. To show the general capabilities of INL, we also applied it to three general machine learning benchmark data sets. The INL systems showed good generalization capabilities in comparison with other well-known machine learning algorithms.

3.
Recent machine learning challenges require the capability of learning in non-stationary environments. These challenges call for new algorithms that can deal with changes in the underlying problem to be learnt. Such changes can be gradual or trend changes, abrupt changes, or recurring contexts. As the dynamics of the changes can be very different, existing machine learning algorithms have difficulty coping with them. Several methods using, for instance, ensembles or variable-length windowing have been proposed to approach this task. In this work we propose a new method, for single-layer neural networks, that is based on the introduction of a forgetting function into an incremental online learning algorithm. This forgetting function gives monotonically increasing importance to new data. Due to the combination of incremental learning and increasing importance assignment, the network forgets rapidly in the presence of changes while maintaining stable behavior when the context is stationary. The performance of the method has been tested on several regression and classification problems, and its results have been compared with those of previous works. The proposed algorithm has demonstrated high adaptation to changes while maintaining low consumption of computational resources.
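The forgetting mechanism described above can be illustrated, for a single linear output layer, by recursive least squares with an exponential forgetting factor, which effectively gives newer samples monotonically larger weight. This is a minimal sketch under that assumption; the paper's actual forgetting function, network form, and hyperparameters may differ.

```python
import numpy as np

class ForgettingLinearLayer:
    """Single-layer (linear) model trained online with recursive least squares.
    A forgetting factor < 1 weights newer samples more heavily, so the model
    adapts quickly after a change while staying stable in stationary phases."""

    def __init__(self, n_inputs, forgetting=0.98, init_cov=1e3):
        self.w = np.zeros(n_inputs)
        self.P = np.eye(n_inputs) * init_cov  # inverse correlation estimate
        self.lam = forgetting                  # forgetting factor in (0, 1]

    def update(self, x, y):
        """One incremental step on sample (x, y); returns the prediction error."""
        x = np.asarray(x, dtype=float)
        Px = self.P @ x
        gain = Px / (self.lam + x @ Px)        # Kalman-style gain
        error = y - self.w @ x                 # error before the update
        self.w += gain * error
        self.P = (self.P - np.outer(gain, Px)) / self.lam
        return error

    def predict(self, x):
        return self.w @ np.asarray(x, dtype=float)

# Toy drifting stream: the true mapping changes halfway through.
model = ForgettingLinearLayer(n_inputs=2, forgetting=0.95)
rng = np.random.default_rng(0)
for t in range(2000):
    x = rng.normal(size=2)
    true_w = np.array([1.0, -2.0]) if t < 1000 else np.array([-3.0, 0.5])
    model.update(x, true_w @ x + 0.01 * rng.normal())
```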

4.
Incremental learning has been developed for supervised classification, where knowledge is accumulated and represented incrementally during the learning process. However, labeling sufficient samples in each data chunk is costly, and incremental techniques are seldom discussed in the semi-supervised paradigm. In this paper we propose an Incremental Semi-Supervised classification approach via Self-Representative Selection (IS3RS) for data stream classification, which exploits both the labeled and unlabeled dynamic samples. An incremental self-representative data selection strategy is proposed to find the most representative exemplars in each sequential data chunk. These exemplars are incrementally labeled to expand the training set, accumulating knowledge over time to benefit future prediction. Extensive experimental evaluations on several benchmarks demonstrate the effectiveness of the proposed framework.

5.
Shaping multi-agent systems with gradient reinforcement learning
An original reinforcement learning (RL) methodology is proposed for the design of multi-agent systems. In the realistic setting of situated agents with local perception, the task of automatically building a coordinated system is of crucial importance. To that end, we design simple reactive agents in a decentralized way as independent learners. To cope with the difficulties inherent in RL used in that framework, we have developed an incremental learning algorithm in which agents face a sequence of progressively more complex tasks. We illustrate this general framework with computer experiments in which agents have to coordinate to reach a global goal. This work was conducted in part in NICTA's Canberra laboratory.

6.
To handle data that arrives in batches in practical applications, as well as the resulting storage pressure and low learning efficiency, an incremental SVR learning algorithm based on the idea of belief revision is proposed. First, a belief set is extracted from the historical sample information; according to the characteristics of the belief set and the newly arrived data, an appropriate belief set is selected to build the support vector regression model and make predictions. The belief set is then revised to adjust the current cognitive state, so that the algorithm adapts well to both online and batch incremental learning. Tests on standard data sets verify the good performance of the algorithm, and comparative experiments on measured airport noise data show that its performance is clearly superior to that of traditional learning algorithms and ordinary incremental learning algorithms.

7.
Task-incremental learning (Task-IL) aims to enable an intelligent agent to continuously accumulate knowledge from new learning tasks without catastrophically forgetting what it has learned in the past. It has drawn increasing attention in recent years, with many algorithms proposed to mitigate forgetting in neural networks. However, none of the existing strategies completely eliminates the issue. Moreover, explaining and fully understanding what knowledge is forgotten, and how, during the incremental learning process remains under-explored. In this paper, we propose KnowledgeDrift, a visual analytics framework for interpreting network forgetting, with three objectives: (1) to identify when the network fails to memorize past knowledge, (2) to visualize what information has been forgotten, and (3) to diagnose how knowledge attained in the new model interferes with the knowledge learned in the past. Our analytical framework first identifies the occurrence of forgetting by tracking task performance during the incremental learning process and then provides in-depth inspection of drifted information at various levels of data granularity. KnowledgeDrift allows analysts and model developers to enhance their understanding of network forgetting and to compare the performance of different incremental learning algorithms. Three case studies are conducted to provide further insights and guidance for users to effectively diagnose catastrophic forgetting over time.

8.
A diversity maintaining population-based incremental learning algorithm
In this paper we propose a new probability update rule and sampling procedure for population-based incremental learning. The proposed methods are based on the concept of opposition as a means of controlling the amount of diversity within a given sample population. We prove that under this scheme we can asymptotically guarantee higher diversity, which allows for greater exploration of the search space. The presented probabilistic algorithm is designed specifically for applications in the binary domain. The benchmark data used for the experiments are commonly used deceptive and attractor-basin functions as well as 10 common travelling salesman problem instances. Our experimental results focus on the effect of parameters and problem size on the accuracy of the algorithm, as well as on a comparison with traditional population-based incremental learning. We show that the new algorithm effectively utilizes the increased diversity provided by opposition, which leads to significantly improved results over traditional population-based incremental learning.
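For reference, a minimal sketch of binary-domain PBIL with an opposition-based sampling step: half of each population is sampled from the probability vector and paired with its bitwise complement to keep diversity high. The pairing rule, learning rate, probability clipping, and the OneMax toy objective are illustrative assumptions rather than the paper's exact update rule.

```python
import numpy as np

def onemax(bits):
    """Toy fitness: count of ones (stands in for a real objective)."""
    return bits.sum()

def opposition_pbil(n_bits=40, pop_size=20, lr=0.1, generations=200, seed=0):
    rng = np.random.default_rng(seed)
    p = np.full(n_bits, 0.5)                 # PBIL probability vector
    best, best_fit = None, -np.inf
    for _ in range(generations):
        # Sample half the population from p, and pair each sample with its
        # opposite (bitwise complement) to maintain diversity.
        half = rng.random((pop_size // 2, n_bits)) < p
        pop = np.vstack([half, ~half]).astype(int)
        fits = np.array([onemax(ind) for ind in pop])
        elite = pop[fits.argmax()]
        if fits.max() > best_fit:
            best, best_fit = elite.copy(), fits.max()
        # Standard PBIL update: move the probability vector toward the elite.
        p = (1 - lr) * p + lr * elite
        p = np.clip(p, 0.05, 0.95)           # keep probabilities away from 0/1
    return best, best_fit

solution, fitness = opposition_pbil()
print(fitness)
```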

9.
To address the low efficiency of object detection methods based on unsupervised feature extraction, a method for accurately detecting foreground objects in unlabeled data sets is proposed. Its basic premise is that correct feature clustering results can guide object feature extraction, while accurately extracted object features can in turn improve the precision of feature clustering. The method first extracts local features from unlabeled sample images and then performs unsupervised feature clustering by minimizing feature distances. Images within the same cluster are matched pairwise, and the recurrence of feature matches is used as the feature weight; the updated feature weights then guide feature clustering in the next iteration. After several iterations, the clustering results and the foreground objects are obtained simultaneously. Experimental results show that the method effectively improves detection precision on the Caltech-256 data set and on Google vehicle images. In addition, since most current unsupervised object detection methods lack incremental learning capability, an incremental learning implementation is proposed; experimental results show that it effectively improves computation speed.

10.
11.
Objective: The task of person re-identification is to match human targets captured by different cameras at different times and places. Affected by illumination, background, occlusion, viewpoint, and pose, the appearance of the same target can differ greatly across cameras. Current research focuses mainly on feature representation and metric learning. Many metric learning methods have achieved good results on person re-identification, but for diverse data sets a single global metric can hardly adapt to differentiated features. Some researchers have therefore proposed local metric learning, but these methods usually require solving complex convex optimization problems and are computationally expensive. Method: Using the idea of local metric learning, and building on recently proposed global metric learning methods such as XQDA (cross-view quadratic discriminant analysis) and MLAPG (metric learning by accelerated proximal gradient), we propose a framework that integrates global and local metric learning. A Gaussian mixture model is used to cluster the training samples, and local metric learning is performed within each cluster, while global metric learning is performed on the full training set. For a test sample, the local and global metric matrices are combined with weights given by the sample's posterior probabilities under the components of the Gaussian mixture model, and the combination is used to measure similarity. In particular, for the MLAPG algorithm, the posterior probabilities of the samples under each Gaussian component are used to adjust the loss weights of different samples in the objective function, further improving the method's performance. Results: Experiments on the VIPeR, PRID 450S, and QMUL GRID data sets verify the effectiveness of the proposed integrated global-local metric learning method. Compared with global methods such as XQDA and MLAPG, the matching accuracy on the VIPeR data set improves by about 2.0%, and the performance on the other data sets also improves to varying degrees. In addition, the proposed method is evaluated with different feature representations; compared with the global methods, the matching accuracy improves by about 1.3% to 3.4%. Conclusion: The framework effectively integrates global and local metric learning: it improves the performance of several global metric learning algorithms while avoiding the complex computation of local metric learning algorithms. The experimental results show that, for different feature representations, the proposed integrated global-local metric learning framework consistently improves on the global metric learning methods.
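A minimal sketch of the posterior-weighted combination of global and local metrics described above, assuming Mahalanobis-style metric matrices and scikit-learn's GaussianMixture; how each metric matrix is learned (XQDA, MLAPG, etc.) is not shown, and the weighting formula is an illustrative reading of the abstract rather than the paper's exact scheme.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def fit_mixture(train_features, n_components=3, seed=0):
    """Cluster training samples; one local metric is learned per component
    (the metric learning itself, e.g. XQDA or MLAPG, is not shown here)."""
    gmm = GaussianMixture(n_components=n_components, random_state=seed)
    gmm.fit(train_features)
    return gmm

def combined_distance(x, y, gmm, global_metric, local_metrics, alpha=0.5):
    """Distance between samples x and y under a posterior-weighted blend of
    the global metric and the per-component local metrics."""
    # Average posterior responsibilities of the pair under each component.
    post = gmm.predict_proba(np.vstack([x, y])).mean(axis=0)
    # Blend local metric matrices by posterior weight, then mix with the global one.
    local = sum(w * M for w, M in zip(post, local_metrics))
    metric = alpha * global_metric + (1 - alpha) * local
    diff = x - y
    return float(diff @ metric @ diff)
```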

12.
In reinforcement learning, policy evaluation aims to predict the long-term value of a state under a given policy. As high-dimensional representations become more and more common in reinforcement learning, reducing the computational cost has become a significant problem for policy evaluation. Many recent works adopt matrix sketching methods to accelerate least-squares temporal difference (TD) algorithms and quasi-Newton temporal difference algorithms. Among these sketching methods, truncated incremental SVD shows better performance because it is stable and efficient. However, the convergence properties of incremental SVD are still open. In this paper, we first show that conventional incremental SVD algorithms can have enormous approximation errors in the worst case. We then propose a variant of incremental SVD with better theoretical guarantees, obtained by shrinking the singular values periodically. Moreover, we employ our improved incremental SVD to accelerate least-squares TD and quasi-Newton TD algorithms. The experimental results verify the correctness and effectiveness of our methods.
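To make the incremental SVD step concrete, below is a minimal Brand-style rank-truncated update for appending one column, together with a periodic singular-value shrinkage helper. The shrinkage amount and schedule are illustrative assumptions, and the surrounding TD-acceleration machinery from the paper is not reproduced.

```python
import numpy as np

def incremental_svd_update(U, s, V, c, rank):
    """Append one column c to a matrix whose truncated SVD is U diag(s) V^T
    and return the updated rank-`rank` factors (Brand-style update)."""
    p = U.T @ c                       # component of c inside the current subspace
    r = c - U @ p                     # residual orthogonal to the subspace
    rho = np.linalg.norm(r)
    q = r / rho if rho > 1e-12 else np.zeros_like(r)
    # Small core matrix that absorbs the new column.
    K = np.zeros((len(s) + 1, len(s) + 1))
    K[:len(s), :len(s)] = np.diag(s)
    K[:len(s), -1] = p
    K[-1, -1] = rho
    Uk, sk, Vkt = np.linalg.svd(K)
    U_new = np.hstack([U, q[:, None]]) @ Uk
    V_pad = np.zeros((V.shape[0] + 1, len(s) + 1))
    V_pad[:V.shape[0], :len(s)] = V
    V_pad[-1, -1] = 1.0
    V_new = V_pad @ Vkt.T
    # Truncate back to the working rank.
    return U_new[:, :rank], sk[:rank], V_new[:, :rank]

def shrink(s, mu):
    """Periodic singular-value shrinkage (soft-thresholding); the shrinkage
    amount `mu` is an illustrative assumption, not the paper's value."""
    return np.maximum(s - mu, 0.0)
```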

13.
14.
To address two problems of the v-support vector regression (v-SVR) dual, namely the extra linear term added to its objective function, which prevents an effective initial solution from being generated, and the possibly infeasible solution-path updates during the insulated incremental adjustment process, an exact incremental online v-SVR learning algorithm is proposed. First, based on an equivalent formulation of v-SVR, the above problems of the v-SVR dual are effectively resolved by using adjustment in advance, relaxed insulated incremental adjustment, and exact restoration adjustment. The feasibility and finite convergence of the algorithm are then analyzed theoretically. Finally, simulation results on four benchmark data sets further verify that every adjustment step of the algorithm is reliable and that it converges to the optimal solution of the minimization problem after a finite number of adjustments; moreover, compared with batch learning algorithms, its advantage in reducing learning time becomes significant as the number of training samples grows.

15.
Incremental nearest-neighbor temporal-difference learning in continuous spaces
For reinforcement learning problems in continuous spaces, an incremental nearest-neighbor temporal-difference (TD) learning framework based on locally weighted learning is proposed. A dictionary of instances is built online by incrementally selecting a subset of the observed states; the value function and policy of a newly observed state are approximated by its range-nearest-neighbor instances, and the value functions and eligibility traces of the instances in the dictionary are updated iteratively with a TD algorithm. Several design options are given for each main component of the framework, and its convergence is analyzed theoretically. Simulation experiments over 24 combinations of these options show that the SNDN combination achieves good learning performance and computational efficiency.
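An illustrative sketch of nearest-neighbor TD(lambda) value learning over an incrementally grown instance dictionary, with distance-based weights acting as features and per-instance eligibility traces. The dictionary-growth rule, range radius, and weighting below are assumptions for illustration, not the specific design options evaluated in the paper.

```python
import numpy as np

class NearestNeighborTD:
    """TD(lambda) value learning over an incrementally grown instance dictionary.
    Each query state is represented by distance weights over in-range neighbors."""

    def __init__(self, radius=0.5, alpha=0.1, gamma=0.95, lam=0.8):
        self.radius, self.alpha, self.gamma, self.lam = radius, alpha, gamma, lam
        self.states, self.values, self.traces = [], [], []

    def _weights(self, s):
        """Normalized inverse-distance weights of stored instances within range."""
        if not self.states:
            return np.array([])
        d = np.linalg.norm(np.array(self.states) - np.asarray(s, dtype=float), axis=1)
        w = np.where(d <= self.radius, 1.0 / (d + 1e-6), 0.0)
        return w / w.sum() if w.sum() > 0 else w

    def value(self, s):
        w = self._weights(s)
        return float(w @ self.values) if w.size else 0.0

    def observe(self, s, r, s_next, done):
        """One TD(lambda) step; add s to the dictionary if no instance is in range."""
        w = self._weights(s)
        if w.size == 0 or w.sum() == 0:
            self.states.append(np.asarray(s, dtype=float))
            self.values.append(0.0)
            self.traces.append(0.0)
            w = self._weights(s)
        v = np.array(self.values)
        z = self.gamma * self.lam * np.array(self.traces) + w   # eligibility traces
        target = r + (0.0 if done else self.gamma * self.value(s_next))
        delta = target - float(w @ v)                           # TD error
        self.values = list(v + self.alpha * delta * z)
        self.traces = list(z)
```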

16.
付治  王红军  李天瑞  滕飞  张继 《软件学报》2020,31(4):981-990
Clustering is a research hotspot in machine learning, and weakly supervised learning is an important direction within semi-supervised learning with a wide range of application scenarios. Building on the study of clustering and weakly supervised learning, a weakly supervised learning framework based on k labeled samples is proposed. The framework first expands the labeled samples using clustering and clustering confidence. Next, the energy function of the restricted Boltzmann machine is modified, yielding a restricted Boltzmann machine learning model based on k labeled samples. Finally, inference for the model is derived and the corresponding algorithms are designed. To evaluate the framework and the model, comparative experiments are conducted on public data sets; the results show that the weakly supervised learning framework based on k labeled samples performs well.

17.
Domain adaptation learning (DAL) methods have shown promising results by utilizing labeled samples from the source (or auxiliary) domain(s) to learn a robust classifier for a target domain that has few or even no labeled samples. However, several key issues still need to be addressed in state-of-the-art DAL methods, such as sufficient and effective learning of a distribution-discrepancy metric, effective kernel-space learning, and transfer learning from multiple source domains. Aiming at these issues, in this paper we propose a unified kernel learning framework for domain adaptation learning and an effective extension of it based on the multiple kernel learning (MKL) schema, regularized by a new minimum distribution distance metric criterion that minimizes both the distribution mean discrepancy and the distribution scatter discrepancy between source and target domains, and into which many existing kernel methods (such as the support vector machine (SVM), v-SVM, and least-squares SVM) can be readily incorporated. Our framework, referred to as kernel learning for domain adaptation learning (KLDAL), simultaneously learns an optimal kernel space and a robust classifier by minimizing both the structural risk functional and the distribution discrepancy between different domains. Moreover, we extend KLDAL to a multiple kernel learning framework referred to as MKLDAL. Under the KLDAL or MKLDAL framework, we also propose three effective formulations: KLDAL-SVM or MKLDAL-SVM for the SVM, its variant μ-KLDALSVM or μ-MKLDALSVM for the v-SVM, and KLDAL-LSSVM or MKLDAL-LSSVM for the least-squares SVM. Comprehensive experiments on real-world data sets verify that the proposed frameworks outperform or are comparable to existing methods.
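To make the two discrepancy terms concrete, the sketch below computes a kernel mean discrepancy (MMD-style) and a simple covariance-based scatter discrepancy between source and target samples. The paper's criterion is defined in the learned kernel space and coupled with the SVM training problem; none of that coupling is reproduced here, so treat this as an illustrative reading of the abstract.

```python
import numpy as np

def rbf_gram(X, Y, gamma=1.0):
    """RBF kernel Gram matrix between the rows of X and Y."""
    sq = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * sq)

def mean_discrepancy(Xs, Xt, gamma=1.0):
    """Squared kernel mean discrepancy (MMD^2) between source and target samples."""
    return (rbf_gram(Xs, Xs, gamma).mean()
            - 2.0 * rbf_gram(Xs, Xt, gamma).mean()
            + rbf_gram(Xt, Xt, gamma).mean())

def scatter_discrepancy(Xs, Xt):
    """A simple scatter (covariance) discrepancy, computed here in input space;
    the paper's version is defined in the learned kernel space."""
    Cs = np.cov(Xs, rowvar=False)
    Ct = np.cov(Xt, rowvar=False)
    return float(np.linalg.norm(Cs - Ct, ord="fro") ** 2)
```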

18.
This paper presents a novel tracking algorithm that integrates two complementary trackers. First, an improved Bayesian tracker (B-tracker) with an adaptive learning rate is presented. The classification score of the B-tracker reflects tracking reliability, and a low score usually results from a large appearance change. Therefore, if the score is low, we decrease the learning rate so that the classifier is updated quickly and the B-tracker can adapt to the variation, and vice versa. In this way, the B-tracker is better suited than its traditional version to handle appearance changes. Second, we present an improved incremental subspace learning tracker (S-tracker). We propose to calculate projected coordinates using the maximum posterior probability, which yields a more accurate reconstruction error than the traditional subspace learning tracker. Instead of updating at every frame, we present a stop strategy to deal with occlusion. Finally, we present an integrated framework (BAST) in which the pair of trackers run in parallel and return two candidate target states separately. For each candidate state, we define a tracking reliability metric to measure whether the candidate state is reliable, and the reliable candidate state is chosen as the target state at the end of each frame. Experimental results on challenging sequences show that the proposed approach is very robust and effective in comparison with state-of-the-art trackers.

19.
Flexible latent variable models for multi-task learning
Given multiple prediction problems such as regression or classification, we are interested in a joint inference framework that can effectively share information between tasks to improve prediction accuracy, especially when the number of training examples per problem is small. In this paper we propose a probabilistic framework that supports a set of latent variable models for different multi-task learning scenarios. We show that the framework is a generalization of standard learning methods for single prediction problems and that it can effectively model the shared structure among different prediction tasks. Furthermore, we present efficient algorithms for the empirical Bayes method as well as for point estimation. Our experiments on both simulated data sets and real-world classification data sets show the effectiveness of the proposed models in two evaluation settings: a standard multi-task learning setting and a transfer learning setting.

20.
To address the unstable recognition performance of incremental learning models during the update phase, a kernel incremental learning method based on a balanced objective measure is proposed. By introducing an optimization term that minimizes the mean empirical risk, an objective function that balances the numbers of training samples is designed, together with an optimal solution scheme under incremental training conditions. Combined with an effective selection strategy for newly added data based on importance analysis, a lightweight incremental learning classification model is constructed. Experimental results on a public fall-detection data set show that when the recognition accuracy of existing representative methods drops below 60%, the proposed method still maintains an accuracy above 95%, while the computational cost of a model update is only 3 ms. The results show that the proposed algorithm significantly improves the stability of recognition during the update phase of incremental learning models while greatly reducing time consumption, and it can effectively support intelligent applications for wearable-device terminals on cloud service platforms.
